• mm_maybe · 1 month ago

    I really wish it were easier to fine-tune and run inference on GPT-J-6B as well… that was a gem of a base model for research purposes, and for a hot minute circa Dolly there were finally some signs it would become more feasible to run locally. But all the effort going into llama.cpp and GGUF kinda left GPT-J behind. GPT4All used to support it, I think, but last I checked the documentation had huge holes as to how exactly that was done.

    • brucethemoose@lemmy.world · 1 month ago

      Still perfectly runnable in kobold.cpp. There was a whole community built up around it with Pygmalion.

      It is as dumb as dirt though. IMO that is going back too far.