I have been running 1.4, 1.5, and 2 without issue, but every time I try to run SDXL 1.0 (via Invoke or Auto1111) it will not load the checkpoint.

I have the official Hugging Face versions of the checkpoint, refiner, LoRA offset, and VAE. They are all named to match the expected filenames and placed in the appropriate folders. When I pick the model to load, it tries for about 20 seconds, then dumps a very long error in the Python console and falls back to the last model I loaded. Oddly, it loads the refiner without issue.

Is this a case of my 8 GB of VRAM just not being enough? I have tried the --no-half / full-precision arguments.
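For anyone comparing setups, these are the low-VRAM launch flags usually suggested for AUTOMATIC1111 on 8 GB cards. This is just a sketch, assuming the standard `webui-user.bat` setup; exact flag availability depends on your webui version:

```shell
REM webui-user.bat (Windows) — low-VRAM launch flags for AUTOMATIC1111
REM --medvram     splits model components across load stages to lower peak VRAM
REM --no-half-vae keeps the VAE in fp32 (the SDXL VAE is known to produce
REM               NaN/black images in fp16)
REM --xformers    enables memory-efficient attention
set COMMANDLINE_ARGS=--medvram --no-half-vae --xformers
```

If `--medvram` still isn't enough, `--lowvram` trades more speed for an even smaller footprint.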

  • chicken@lemmy.dbzer0.com · 1 year ago

    I’m not sure why, but I have 8 GB of VRAM and my experience matches what others describe: SDXL will not run with Auto1111, but it will work with ComfyUI. So I think this is not purely a VRAM issue.

    • Thanks4Nothing@lemm.ee (OP) · 1 year ago

      Yeah, it’s very odd. I tried ComfyUI, but the interface just doesn’t click with me.

      I keep waiting for InvokeAI to add an auto-installer for that model, but they are still only offering SDXL 0.9, and I don’t have a token for that model.

    • whitecapstromgard · 1 year ago

      Auto1111 might be trying to load multiple models at the same time, which it doesn’t have the VRAM for.