• umbrella@lemmy.ml
    10 months ago

    tbf you would need a pretty beefy gpu to do both rendering and ai locally.

    as much as i hate to say it (because this idea sounds awesome), the tech is not there yet, and depending on the cloud for this always goes wrong.

    • cynar@lemmy.world
      10 months ago

      A limited LLM would run on a lot of newer gfx cards. It could also be done as a semi-online thing. If you have the grunt, you can run it locally. Otherwise, you can farm it out to an online server.
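
      A rough sketch of how that local-or-cloud fallback could look, assuming llama-cpp-python for the local path; the VRAM cutoff, model path, and remote endpoint are all placeholders, not a real service:

      ```python
      # Local-first LLM call with a cloud fallback -- a sketch of the
      # "semi-online" idea above. MIN_VRAM_GB, the model path, and
      # REMOTE_URL are illustrative placeholders, not real values.
      import requests

      MIN_VRAM_GB = 8  # assumed cutoff for having "the grunt" locally
      REMOTE_URL = "https://example.com/v1/generate"  # hypothetical server

      def local_vram_gb() -> float:
          """Total VRAM of GPU 0 in GB, or 0.0 if no usable CUDA device."""
          try:
              import torch
              if torch.cuda.is_available():
                  return torch.cuda.get_device_properties(0).total_memory / 1e9
          except ImportError:
              pass
          return 0.0

      def generate(prompt: str) -> str:
          if local_vram_gb() >= MIN_VRAM_GB:
              # Enough grunt: run a quantized model locally.
              from llama_cpp import Llama
              llm = Llama(model_path="model.gguf", verbose=False)
              return llm(prompt, max_tokens=128)["choices"][0]["text"]
          # Otherwise farm the request out to the online server.
          resp = requests.post(REMOTE_URL, json={"prompt": prompt}, timeout=30)
          resp.raise_for_status()
          return resp.json()["text"]

      if __name__ == "__main__":
          print(generate("Describe the tavern keeper's mood in one sentence."))
      ```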