Available online as in: you just log in to a website and use it. Not on Hugging Face or GitHub, where you need to download, install and configure everything yourself.

LLMs are already made so “safe” that they won’t even describe an erotic or crime story, content you would easily find visually depicted in all its detail on Netflix, Amazon, HBO, YouTube, etc. I.e., writing “Game of Thrones” with an AI is no longer possible in most chat bots.

  • palordrolap@fedia.io · 3 days ago

    DuckDuckGo currently provides free access to four different LLMs. They say they don’t store user conversations, but I’m not sure I trust that, or that it won’t change at some point even if it’s true right now.

    Most of them have the strawberry problem (or some variant, where that specific word is explicitly patched(?)), fail basic arithmetic, and apologise repeatedly when mistakes are pointed out, often without actually improving. Standard LLM fare for 2024/25.
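
    For context, the “strawberry problem” is the famous test question “how many r’s are in strawberry?”, which models flub because they see tokens rather than individual letters. The ground truth takes one line of Python:

    ```python
    # The question LLMs famously get wrong, answered the boring way.
    word = "strawberry"
    print(word.count("r"))  # 3 -- many models confidently answer 2
    ```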

  • pepperprepper@lemmy.world · 3 days ago

    Very easy to run yourself these days, as long as you have a decent GPU or a Mac with Apple silicon (unified memory). Open WebUI will let you host your own ChatGPT-style interface and choose different models to run. Dolphin Mixtral is a pretty popular “unlocked” model.
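
    If you want a feel for what that looks like in code, here’s a minimal sketch using the ollama Python client (Ollama is a common backend for Open WebUI). It assumes the Ollama server is running and the model has already been pulled with `ollama pull dolphin-mixtral`:

    ```python
    # Minimal sketch: talk to a locally hosted model via the ollama
    # Python client (pip install ollama). Assumes the Ollama server is
    # running and `ollama pull dolphin-mixtral` has been done already.
    import ollama

    response = ollama.chat(
        model="dolphin-mixtral",  # a popular "unlocked" community model
        messages=[{"role": "user", "content": "Tell me a grim crime story."}],
    )
    print(response["message"]["content"])
    ```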

  • breakingcups@lemmy.world · 3 days ago

    There are a few; some have a free tier, most are paid. They are often geared towards roleplay or storytelling. If you search for “uncensored roleplaying AI” I’m sure you’ll find some.

  • hendrik@palaver.p3x.de · 3 days ago (edited)

    I don’t know which of them is good, but I’ve seen a dozen or so online services, mostly for roleplay / virtual girlfriend/boyfriend stuff. They’re paid, though. Or you can pay OpenRouter (a more general LLM connector, also paid; see the sketch below). I’m not sure if you’re looking for something like that or something free. They’re definitely out there and available to the public.

    It’s mostly OpenAI, Microsoft, etc. who have free services, but those are limited in what they’ll talk about. There is one free community project I’m aware of: AI Horde. It’s mostly for images but offers text too. I haven’t used it in a while, so I’m not sure how/if it works.
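
    For reference, OpenRouter exposes an OpenAI-compatible API, so a paid account works from any standard OpenAI client. A hedged sketch; the key and model id below are placeholders, not recommendations:

    ```python
    # Sketch: calling OpenRouter through its OpenAI-compatible endpoint
    # (pip install openai).
    from openai import OpenAI

    client = OpenAI(
        base_url="https://openrouter.ai/api/v1",
        api_key="sk-or-...",  # your OpenRouter API key (placeholder)
    )
    completion = client.chat.completions.create(
        model="mistralai/mixtral-8x7b-instruct",  # example model id
        messages=[{"role": "user", "content": "Hello there."}],
    )
    print(completion.choices[0].message.content)
    ```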

  • CubitOom@infosec.pub · 3 days ago (edited)

    You are right that anything most providers will host for free is going to be censored, since otherwise they might bear some kind of legal responsibility. I learned this while trying to diagnose an issue with my car’s door lock.

    At the end of the day, anything you ask a hosted LLM is being recorded, so if you actually want something uncensored, or just a sense of freedom, the only real option is to self-host.

    Luckily, it’s very simple and can even be done on a low-spec device if you pick the right model. The amount and type of RAM you have will dictate how many parameters you can run at a decent speed.
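
    As a rough rule of thumb (an approximation, not an exact figure): memory needed ≈ parameter count × bytes per weight, plus some overhead for context and buffers.

    ```python
    # Back-of-the-envelope RAM/VRAM estimate for running a local model.
    def approx_memory_gb(params_billions: float, bits_per_weight: int) -> float:
        bytes_per_weight = bits_per_weight / 8
        return params_billions * bytes_per_weight * 1.2  # ~20% overhead

    print(f"{approx_memory_gb(7, 4):.1f} GB")   # 7B at 4-bit: ~4.2 GB
    print(f"{approx_memory_gb(7, 16):.1f} GB")  # 7B at fp16: ~16.8 GB
    ```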

    Here are 3 options with increasing difficulty (though not by much), written for Arch Linux: https://infosec.pub/comment/13623228

  • xmunk · 3 days ago

    Not certain if it’s still active, but KoboldAI seems to be a community-sourced tool that doesn’t have built-in limitations, because it’s a non-commercial project.

  • antihumanitarian@lemmy.world · 3 days ago
    Was about to post a Hugging Face link until I finished reading. For what it’s worth, once you have Ollama installed, it’s a single command to download, install, and immediately drop into a chat with a model, whether from Ollama’s library, Hugging Face, or anywhere else. On Arch, the entire process to get it working with GPU acceleration was installing two packages and then starting ollama.
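
    That single command is `ollama run <model>`, which pulls the model if needed and opens an interactive chat. The same flow through the Python client looks roughly like this (the model name is just an example):

    ```python
    # Pull a model and stream a chat reply via the ollama Python client.
    import ollama

    ollama.pull("llama3.2")  # downloads from Ollama's library if not present
    for chunk in ollama.chat(
        model="llama3.2",
        messages=[{"role": "user", "content": "Say hello."}],
        stream=True,  # yields the reply incrementally, like the CLI chat
    ):
        print(chunk["message"]["content"], end="", flush=True)
    print()
    ```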