Hey, all:

I’m pretty new to AI image creation. I’ve mucked around in DALL·E and gotten some decent results, but recently found Fal.ai.

My question is about LoRA models. While I get how to call them generally (e.g. lora:dark_fantasy:1.3), I can’t seem to get them to work at all here:

https://fal.ai/models/fal-ai/flux-general
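For what it’s worth, the prompt-tag syntax may simply not apply on that endpoint. A hedged sketch (not verified against the current fal.ai docs, and the LoRA URL is a placeholder): the flux-general endpoint appears to take LoRAs as a structured list in the request arguments rather than as tags inside the prompt, which could be why the prompt syntax silently does nothing there.

```python
# Hedged sketch: LoRAs passed as request arguments instead of prompt tags.
# The .safetensors URL below is a placeholder, not a real file.

arguments = {
    "prompt": "a knight in a dark fantasy forest",
    "loras": [
        {
            "path": "https://example.com/dark_fantasy.safetensors",  # placeholder
            "scale": 1.3,  # plays the role of the :1.3 weight in lora:dark_fantasy:1.3
        }
    ],
}

# With the fal-client package installed and a FAL_KEY in your environment,
# the call itself would look roughly like:
# import fal_client
# result = fal_client.subscribe("fal-ai/flux-general", arguments=arguments)
```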

There is an overwhelming number of websites and models and blah blah out there, and I could really use a crash course in how to use these models.

Also, how do I train my own?

  • Track_Shovel@slrpnk.netOP · 3 months ago

    Yeah, I’m pretty much thinking of using pre-trained models to generate images in SD. I’m hugely inexperienced at all of this, and very lost. There are a bunch of devs/power users who have gone to the nines writing this stuff and guiding other devs/super users, but for a schmoe like me it may as well be in Sanskrit.

    A starting point on ‘hey, here’s how to use SD, and a few models it has’ would be very much appreciated. I’ve been trying to get LoRA models working, but no luck.

    • Ziggurat · 3 months ago

      Do you have a PC with a gaming GPU? If yes, you can run ComfyUI, AUTOMATIC1111, or Easy Diffusion on your PC. It’s not that complicated (but it requires being familiar with installing and configuring software, which, considering Lemmy’s audience, is most likely the case).
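      To make the contrast with the hosted services concrete, here’s a hedged sketch of how LoRAs are used in a local AUTOMATIC1111 setup: the .safetensors files go in the models/Lora folder, and you invoke them inside the prompt itself. "dark_fantasy" stands in for whatever your file is actually called.

```python
# Hedged sketch for local AUTOMATIC1111: LoRA files live in
# stable-diffusion-webui/models/Lora and are invoked via prompt tags.
# All concrete values here are illustrative.

payload = {
    "prompt": "castle on a cliff, <lora:dark_fantasy:1.3>",
    "negative_prompt": "blurry, low quality",
    "steps": 25,
    "width": 768,
    "height": 768,
}

# If the webui was started with the --api flag, this payload can be POSTed to
# http://127.0.0.1:7860/sdapi/v1/txt2img, e.g. requests.post(url, json=payload).
```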

      If not, there are tons of online services: some free (with restrictions), some with subscriptions, some where you pay for image credits. You can check the FAQ of each service for details. Not all online SD services let you use LoRAs, don’t ask me why.

    • Scipitie@lemmy.dbzer0.com · 3 months ago

      I’d try ChatGPT for that! :)

      But to give you a very brief rundown: if you have no experience in any of these aspects and are self-learning, you should expect a long ramp-up phase! Perhaps there is an easier route, but if there is, I’m not familiar with it.

      First, familiarize yourself with server setups. If you only want to host this locally you won’t have to go into the network details, but networking could become a source of errors at some point, so be warned! The usual tip here is to get familiar enough with Docker that you can read and understand docker-compose files. The de facto standard for self-hosting is a Linux machine, but I have read of people who used macOS and even Windows successfully.
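      As a feel for what “reading a docker-compose file” means here, a minimal illustrative sketch (the image name is a placeholder, not a real published image; the paths and port are examples only):

```yaml
# Illustrative compose sketch only; image name and paths are placeholders.
services:
  webui:
    image: example/stable-diffusion-webui:latest   # placeholder image
    ports:
      - "7860:7860"                # host:container port mapping
    volumes:
      - ./models:/app/models       # path mapping for model/LoRA files
    environment:
      - CLI_ARGS=--api             # environment variable passed to the app
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]  # expose the Nvidia GPU to the container
```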

      One aspect quite unique to the model landscape is the hardware requirements. As much as it hurts my Nvidia-despising heart, at this point in time they are the de facto monopolist. Get yourself a card with 12 GB of VRAM or more (everything below that will be painful, if you get things running at all; I’ve tried and pulled off smaller models on an 8 GB card but experienced a lot of waiting time and crashes). Read a bit about CUDA on your chosen OS and what the drivers need.
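      The rule of thumb above, restated as a toy sketch (the thresholds just mirror the comment, nothing more; on a real machine you’d read the number from nvidia-smi or torch.cuda, both left out to keep this self-contained):

```python
# Toy restatement of the VRAM rule of thumb: 12 GB+ is comfortable,
# 8 GB is painful, less is probably not worth it.

def vram_verdict(vram_gib: float) -> str:
    """Rough comfort rating for running SD-style models locally."""
    if vram_gib >= 12:
        return "comfortable"
    if vram_gib >= 8:
        return "painful but possible"
    return "probably not worth it"

print(vram_verdict(12))  # comfortable
print(vram_verdict(8))   # painful but possible
```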

      Once you understand the whole port, container, path-mapping, and environment-variable business, it’s a matter of going to the linked GitHub page, following their guide, and starting a container. Loading models is actually the easier part once you have the infrastructure running.