• Gork@lemm.ee (+73/-1) · 10 months ago

    Are there any Open Source girlfriends that we can download and compile?

    • herrcaptain@lemmy.ca (+57) · 10 months ago

      Hey now, I don’t want anyone looking at my girlfriend’s source code. That’s personal!

      • demonsword@lemmy.world (+30) · 10 months ago

        I don’t want anyone looking at my girlfriend’s source code

        it’s okay, dude, we all already did…

      • Gork@lemm.ee (+19) · 10 months ago

        Does it make it faster if the GPU has waifu stickers on it?

      • SwampYankee@mander.xyz (+1/-2) · 10 months ago

        Basically, the more VRAM you have, the better the contextual understanding (the bot’s memory) is. Otherwise you’d have a bot that can only contextualize the last couple of messages.

        Hmm, if only there was some hardware analogue for long-term memory.
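
        For a rough sense of why context costs VRAM: every token in the window needs key/value cache entries in every layer. A back-of-the-envelope sketch in Python, assuming Llama-2-7B-ish shapes (the numbers are illustrative, not from this thread):

        ```python
        # Back-of-envelope KV-cache size; all shapes are assumptions
        # (Llama-2-7B-like: 32 layers, 32 KV heads, head_dim 128, fp16).
        n_layers, n_kv_heads, head_dim = 32, 32, 128
        bytes_per_elem = 2  # fp16

        # one K and one V vector per token, in every layer
        kv_bytes_per_token = 2 * n_layers * n_kv_heads * head_dim * bytes_per_elem
        print(kv_bytes_per_token / 1024, "KiB per token")          # 512.0 KiB

        ctx_len = 4096  # context window length
        print(kv_bytes_per_token * ctx_len / 2**30, "GiB total")   # 2.0 GiB
        ```

        So a 4096-token window alone can eat about 2 GiB on top of the model weights, which is why longer memory means more VRAM.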

          • SwampYankee@mander.xyz (+1) · 10 months ago

            I guess I’m wondering if there’s some way to bake the contextual understanding into the model instead of keeping it all in VRAM. Like if you’re talking to a person and you refer to something that happened a year ago, you might have to provide a little context and it might take them a minute, but eventually they’ll usually remember. Same with AI: you could say, “hey, remember when we talked about [x]?” and it would re-contextualize by bringing that conversation back into VRAM.

            Seems like more or less what people do with Stable Diffusion by training custom models, LoRAs, or embeddings. It would just be interesting if it were a more automatic part of interacting with the AI: the model would be continuously updated with information about your preferences instead of having to be told explicitly.

            But mostly it was just a joke.
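
            For what it’s worth, the non-joke version of this exists: retrieval-augmented memory. You embed old messages, then pull the closest ones back into the prompt when they become relevant. A minimal sketch, where the embedding model name and the helper functions are my own assumptions:

            ```python
            # Minimal sketch of retrieval-based long-term memory: embed past
            # messages, then fetch the most relevant ones back into context.
            # Model name and helpers are illustrative, not a specific product.
            import numpy as np
            from sentence_transformers import SentenceTransformer

            embedder = SentenceTransformer("all-MiniLM-L6-v2")
            history = []  # list of (text, embedding) pairs

            def remember(text: str) -> None:
                history.append((text, embedder.encode(text, normalize_embeddings=True)))

            def recall(query: str, k: int = 3) -> list[str]:
                q = embedder.encode(query, normalize_embeddings=True)
                scored = sorted(history,
                                key=lambda te: float(np.dot(te[1], q)),
                                reverse=True)
                return [text for text, _ in scored[:k]]

            remember("We talked about building a gaming PC last spring.")
            remember("You prefer sci-fi novels over fantasy.")

            # Prepend the recalled lines to the prompt before generation, so
            # the model "re-contextualizes" without keeping everything in VRAM.
            context = "\n".join(recall("hey, remember my PC build?"))
            ```

            Long-term memory lives cheaply on disk as embeddings; only the few recalled lines spend VRAM.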

    • Pennomi@lemmy.world (+11) · 10 months ago

      Pretty easy to roll your own with Kobold.cpp and various open model weights found on HuggingFace.
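
      If it helps anyone get started: once `koboldcpp` is running with a GGUF model, it exposes an HTTP API you can script against. A rough sketch, assuming the default port and the KoboldAI-style /api/v1/generate endpoint (check your build’s docs if yours differs):

      ```python
      # Talking to a local KoboldCpp server over its HTTP API.
      # Port and payload fields assume a default `koboldcpp` launch.
      import requests

      prompt = "You are a friendly companion. User: Hi! How was your day?\nAI:"
      resp = requests.post(
          "http://localhost:5001/api/v1/generate",
          json={"prompt": prompt, "max_length": 120, "temperature": 0.7},
          timeout=120,
      )
      print(resp.json()["results"][0]["text"])
      ```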

      • TipRing@lemmy.world (+8) · 10 months ago

        Also, for an interface, I’d recommend KoboldLite for writing or assistant use and SillyTavern for chat/RP.

        • exu@feditown.com (+4) · 10 months ago

          You’ll want to use a quantised model on your GPU. You could also run on the CPU and offload some layers to the GPU with llama.cpp (an option in oobabooga). llama.cpp models use the GGUF format.
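
          As a hedged sketch of that offloading setup via llama-cpp-python (the model path and layer count are placeholders; tune n_gpu_layers to whatever fits your VRAM):

          ```python
          # Partial GPU offload with llama-cpp-python: keep most layers on
          # the GPU, spill the rest to CPU RAM. Path and counts are
          # placeholders, not recommendations.
          from llama_cpp import Llama

          llm = Llama(
              model_path="models/mistral-7b-instruct.Q4_K_M.gguf",  # hypothetical path
              n_gpu_layers=35,   # layers offloaded to the GPU; -1 = all
              n_ctx=4096,        # context window
          )
          out = llm("Q: What is a quantised model?\nA:", max_tokens=64)
          print(out["choices"][0]["text"])
          ```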