I have been running 1.4, 1.5, and 2 without issue, but every time I try to run SDXL 1.0 (via Invoke or Auto1111) it will not load the checkpoint.

I have the official Hugging Face versions of the checkpoint, refiner, LoRA offset, and VAE. They are all named to match what the UI expects and placed in the appropriate folders. When I pick the model to load, it tries for about 20 seconds, then dumps a very long error in the Python console and falls back to the last model I loaded. Oddly, it loads the refiner without issue.

Is this a case of my 8 GB of VRAM just not being enough? I have tried the --no-half / full-precision arguments.
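For reference, on 8 GB cards the commonly suggested approach with Auto1111 is the low-memory launch flags rather than full precision (which *increases* VRAM use). A sketch of a typical `webui-user.sh` setup — the exact flag combination that works will vary by setup:

```shell
# webui-user.sh — common memory-saving flags for AUTOMATIC1111 on 8 GB cards.
# --medvram      trades generation speed for lower VRAM usage
# --no-half-vae  keeps the VAE in fp32 (avoids black/NaN outputs with SDXL's VAE)
# --xformers     enables memory-efficient attention (NVIDIA cards)
export COMMANDLINE_ARGS="--medvram --no-half-vae --xformers"
```

Note that `--no-half` (forcing the whole model to fp32) roughly doubles the checkpoint's memory footprint, which is the opposite of what an 8 GB card needs for SDXL.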

  • whitecapstromgard · 1 year ago

    SDXL is very memory hungry. Most base models are around 6-7 GB, which doesn’t leave much room for anything else.

    • Thanks4Nothing@lemm.ee (OP) · 1 year ago

      Thanks. Oddly enough, the most recent release of InvokeAI fixed the problem I was having. My 8 GB 3070 can run SDXL in about 30 seconds now, though it seems to take a little while to clear everything between generations. I want to move up to a 12 GB or 24 GB GPU, but I'm waiting and hoping for a price crash.