They support Claude, ChatGPT, Gemini, HuggingChat, and Mistral.

  • ocassionallyaduck@lemmy.world · 10 points · 4 hours ago

    Thing is, for your average user with no GPU and who never thinks about RAM, running a local LLM is intimidating. But it shouldn’t be. Any system with an integrated GPU can run simple models locally, and the more RAM, the better.

    The not-so-dirty secret is that ChatGPT 3 vs. 4 isn’t that big a difference, and neither is leaps and bounds ahead of the publicly available models for about 99% of tasks. For that 1%, people will ooh and aah over it, but 99% of use cases see only marginal gains on 4o.

    And the simplified models that run “only” 95% as well? They can use 90% fewer resources and give pretty much identical answers outside of hyperspecific use cases.

    Running a “smol” model, as some are called, gets you all the bang for none of the buck, and your data stays on your system and never leaves.

    I’ve been yelling from the rooftops to some stupid corporate types that once a model is trained, it’s trained. Unless you are training models yourself, you don’t need the massive AI clusters, just the model. Run it locally on your own hardware at a fraction of the cost.
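    To put the “more RAM the better” point in numbers, here’s a back-of-envelope sketch of how much memory a model needs at different quantization levels. The 20% runtime/KV-cache overhead factor is my own rough assumption, not a hard rule:

    ```python
    # Rough rule of thumb: weight memory ~= parameter count x bytes per weight,
    # plus some overhead for the KV cache and runtime (assumed ~20% here).

    def model_ram_gb(params_billions: float, bits_per_weight: int,
                     overhead: float = 0.2) -> float:
        """Approximate RAM in GB needed to run a model's weights."""
        weight_bytes = params_billions * 1e9 * (bits_per_weight / 8)
        return weight_bytes * (1 + overhead) / 1e9

    # A 7B model at fp16 vs. a 4-bit quantized version:
    print(f"fp16:  {model_ram_gb(7, 16):.1f} GB")  # ~16.8 GB
    print(f"4-bit: {model_ram_gb(7, 4):.1f} GB")   # ~4.2 GB
    ```

    Which is why a 4-bit 7B model fits comfortably in 8 GB of ordinary system RAM, while the fp16 version doesn’t.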

    • Lojcs@lemm.ee · 1 point · 11 minutes ago

      Last time I tried using a local LLM (about a year ago), it generated only a couple of words per second and the answers were barely relevant. Also, I don’t see how a local LLM can fill the glorified-search-engine role that people use LLMs for.

    • ilhamagh@lemmy.world · 2 points · 1 hour ago

      Can you point me to some resources for running a smol LLM?

      My use case is probably just help with typing up miscellaneous ideas I have, or checking my grammar, in English.

      Thanks in advance.