I tried doing this using Automatic1111 and some of my favourite custom checkpoints from Civitai. The result was garbage. I can train beautiful embeddings using the base SD 1.5 model, but not using any of my favourite checkpoints.
Embeddings should generally be trained on base models to improve compatibility with models derived from the base. For SD 1.5, that means using either regular SD 1.5 or the NovelAI leak. You can sometimes get away with using more “basic” models that don’t have many merges, but that can be tough to gauge.
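For what it's worth, here's a toy sketch of the intuition (illustrative numpy only, not real SD code; the dimensions and "drift" magnitudes are made up). A textual-inversion embedding is just a learned vector in the text encoder's token-embedding space, so it keeps its meaning only in checkpoints whose embedding space hasn't drifted far from the base it was trained against:

```python
import numpy as np

# Toy model: represent a concept as a direction in a 768-dim space
# (768 is the CLIP token-embedding width SD 1.5 uses; everything else
# here, including the drift scales, is invented for illustration).
rng = np.random.default_rng(0)
dim = 768

base_direction = rng.normal(size=dim)                     # concept direction in the base model
embedding = base_direction + 0.1 * rng.normal(size=dim)   # vector learned against the base

# A light fine-tune shifts the space a little; a heavy merge shifts it a lot.
light_finetune = base_direction + 0.2 * rng.normal(size=dim)
heavy_merge = base_direction + 2.0 * rng.normal(size=dim)

def cosine(a, b):
    """Cosine similarity: how well the learned vector still lines up."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(f"embedding vs light fine-tune: {cosine(embedding, light_finetune):.2f}")
print(f"embedding vs heavy merge:     {cosine(embedding, heavy_merge):.2f}")
```

In this toy setup the embedding stays well aligned with the lightly fine-tuned space but not with the heavily merged one, which matches the pattern people report: the more merges between a checkpoint and the base, the worse a base-trained embedding tends to behave.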
Thanks! Isn’t it better to train the embedding with the model I expect to use it with?
I don’t really understand the science behind it, but in my experience I’ve had much more success using basic models for training.
Also, I’ve found that LoRAs are generally much easier and faster to train than embeddings. Is there a reason you’re going for an embedding over a LoRA?