• MagicShel@lemmy.zip · 2 days ago

    I suppose that argument has to be made, but it seems shaky to me. “Kill yourself” shouldn’t be free speech.

    That being said, any AI is a non-deterministic text generator. Folks should agree and understand that no one can be held responsible for what the AI outputs. Particularly with fiction bots, you can’t censor suicide without also making it impossible to depict within a shared, understood story context.

    For example, you couldn’t write a story where an antagonist suggests the MC kill themselves, as a sort of catharsis for coping with that situation in real life. The AI can’t tell the difference between that and a real conversation because it only looks at the last few thousand characters. And in this specific case, I think the AI should presume everything is fiction, because being a fictional character is its raison d’être.
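
    A rough sketch of that limitation (purely hypothetical; this is not how any particular service is implemented): if the chat history is simply truncated to the last few thousand characters before it reaches the model, any earlier “this is all fiction” framing silently drops out of view.

```python
# Hypothetical illustration only: a chat service that hands the model just
# the tail end of the conversation. Real services use token windows rather
# than raw characters, but the effect is the same: old framing disappears.

CONTEXT_BUDGET = 4000  # an illustrative "few thousand characters"

def build_prompt(messages: list[str]) -> str:
    """Join the chat history and keep only the tail that fits the budget."""
    history = "\n".join(messages)
    return history[-CONTEXT_BUDGET:]  # anything older is silently dropped

chat = ["[Setup] This is a roleplay. Everything here is fiction."]
chat += ["Turn " + str(i) + ": " + "long in-character dialogue " * 40 for i in range(50)]
chat.append("User: maybe I should just end it all")

prompt = build_prompt(chat)
print("[Setup]" in prompt)  # False: the fictional framing never reaches the model
```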

    So I don’t like this argument, but I still don’t think the company should be held at fault. It’ll be interesting to see the outcome because I know not everyone is in agreement here.

    • sugar_in_your_tea · 19 hours ago (edited)

      “Kill yourself” shouldn’t be free speech.

      It absolutely should, at least in the sense of those specific words.

      That said, there may be non-speech violations when we look at the totality of the situation. Trying to get people to kill themselves should absolutely be illegal, and in that sense saying “kill yourself” could be part of a larger crime. But saying it without intent or knowledge that the other person might follow through shouldn’t be illegal.

      no one can be held responsible for what the AI outputs

      Disagree again. The creator of the AI should have some responsibility here.

      If they sell it for some purpose, and it causes harm instead of fulfilling that purpose, they should be on the hook for that. If they don’t want responsibility, they need to very publicly say they’re providing it without any warranty or implication of it solving any particular problem, which is why FOSS licenses put that into their terms.

      So either they give up all responsibility and don’t advertise it as solving any particular problem, or they take responsibility.

      Whether the company is held at fault depends on what contracts the person had, whether expressed or implied.

      • MagicShel@lemmy.zip · 18 hours ago

        If they don’t want responsibility, they need to very publicly say they’re providing it without any warranty or implication of it solving any particular problem, which is why FOSS licenses put that into their terms.

        Completely agree. Every single AI should come with this disclaimer. Because while it can solve all kinds of problems, it’s definitely not going to do it correctly every time, no matter what. Which is really the whole point of what I said.

        • sugar_in_your_tea · 18 hours ago

          Precisely. Yet so many LLMs make outrageous claims, or at least fail to make the limitations obvious.

          My point is that it’s not on the user to see past the BS; it’s on the provider of the service. The company’s argument is that they’re not responsible because computer code is protected by the First Amendment. I think that misses the whole issue, which is that users may not be made sufficiently aware of the limitations and dangers of the service.

          • MagicShel@lemmy.zip · 18 hours ago

            A service can only do so much. Some folks are just dumb or mentally unwell. The question is whether they did enough to communicate the limitations of the AI. Free speech is the wrong argument. I think we’re in agreement, except that you seem to be assuming they didn’t communicate that well enough and I’m assuming they did. That’s what the court case should be about.

    • BrianTheeBiscuiteer@lemmy.world · 2 days ago

      Folks should agree and understand that no one can be held responsible for what the AI outputs.

      That would be a dangerous precedent. I think a lot of us have seen examples of AI not just making stuff up but making outright logical errors. I could easily see an AI being put in charge of creating food recipes and saying something like, “This recipe does not contain peanuts, so no warning label is required,” while not understanding that peanut butter is made from peanuts and putting it into the recipe anyway. Shit like this has been tried before, where companies wanted to cut corners by letting software perform all the safety checks with no hardware or human safeguards.

      It doesn’t even have to be a logical error. Companies will probably just tell the AI models that their primary function is to generate revenue and that will lead to decisions that maximize profits but also harm.

      • sugar_in_your_tea · 19 hours ago

        Well yeah, LLMs don’t have logic, so their output isn’t constrained by logic.

      • JohnEdwa@sopuli.xyz · 1 day ago (edited)

        Making stuff up is the entire function of an LLM. They are predictive text generators that string words together based on how the algorithm predicts a human would, given the same input, to produce a plausible answer. Not necessarily the correct answer, only one that feels like it could have been written by a human.
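
        A toy sketch of that idea, hypothetical and nowhere near a real LLM in scale, but the same basic shape: count what tended to follow the current context in the training text, then sample something statistically plausible from those counts.

```python
# Toy bigram "language model": predict the next word purely from how often
# each word followed the previous one in the training text, then sample.
# Plausibility is the only thing optimized here; truth never enters into it.
import random
from collections import Counter, defaultdict

def train(corpus: str) -> dict:
    """Count, for each word, which words followed it and how often."""
    counts = defaultdict(Counter)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def next_word(counts: dict, prev: str) -> str:
    """Sample a follow-up word in proportion to how often it appeared."""
    options, weights = zip(*counts[prev].items())
    return random.choices(options, weights=weights)[0]

model = train("the cat sat on the mat the cat ate the fish the dog sat on the rug")
print(next_word(model, "the"))  # "cat", "mat", "fish", "dog", or "rug": plausible, not "correct"
```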

        Censoring or controlling them the way people would want or expect companies to is basically impossible, because it would require the models to actually understand what they are talking about in the first place.

      • Grimy@lemmy.world · 2 days ago

        I think there’s a place for regulation in cases of gross negligence or purposefully training it to output bad behavior.

        When it comes to mistakes, I don’t really believe in liability. These platforms always have warnings about not trusting what the AI says.

        I like to compare it to users on social media, for example. If someone on Lemmy told you to use peanut butter, they wouldn’t really be at fault, and neither would the instance owner.

        AI systems don’t present themselves as scientific papers. If you take what random redditors and autocomplete bots say as truth, that’s on you, so to speak.

      • MagicShel@lemmy.zip · 2 days ago

        Of course they have logical flaws. Everyone should be made aware of that before using AI. A table saw will cut your finger off. Matches will burn down your house. It’s the nature of the thing. That doesn’t make them useless. I use them to help with coding all the time. They’re frequently wrong, but they’re still useful and save me a lot of time. But absolutely no one should ever rely on any output as if it were gospel. Ever. That is a user flaw, not a tool flaw. Though possibly a communication flaw, as you can’t rely on every random person to understand that.

          • MagicShel@lemmy.zip · 1 day ago

            What single point of failure? In fact, what was even the failure here? The AI was roleplaying and has no capacity to understand that the person it’s talking with is taking it seriously or is mentally unstable.

            • BrianTheeBiscuiteer@lemmy.world · 19 hours ago

              The failure is that there are reasonable scenarios where the fantasy needs to end. AFAIK the only other ways this could have ended without harm would have been if the kid had just decided to stop chatting (highly unlikely) or if someone had looked over his shoulder at what was being typed (almost as unlikely). As others have said, it’s hard to know what the AI’s “thought process” is or to predict how it would react to a situation without testing it. So for all they know, the bot could just as easily have said “Let’s die together” in the first place.

              • MagicShel@lemmy.zip · 18 hours ago

                The AI tried to talk him out of killing himself and responded as though he would instead come home to her. I’m not sure what’s unreasonable about that. Hell, I’d justify far less reasonable responses because an AI is incapable of reason.

                There is no thought process. The AI looks at the existing conversation and then responds with the words a human would be statistically likely to use. It doesn’t understand anything it’s saying. It doesn’t understand human life, nor its fragility or preciousness. It doesn’t know what life and death are. It doesn’t know about depression or suicide. It doesn’t know the difference between real and make-believe. It just spits out stochastic tokens, and it does so in a way that makes it impossible, within a human lifetime, to understand why it outputs what it does, because every single token depends on billions of parameters, each informed by every bit of the training data.

                For as smart as AI appears to be, it’s just a completely dumb computation black box. Exactly in the way power tools and fire are dumb.

    • snooggums@lemmy.world · 2 days ago (edited)

      Folks should agree and understand that no one can be held responsible for what the AI outputs.

      Bullshit. The creators of the AI are responsible for how the thing they designed and built operates.

      Is there room for unexpected output to not be treated as malicious? Yes. But absolving them of ALL responsibility means someone can build a malicious AI and claim immunity from responsibility.

      Current AI frequently has rails that keep it from spitting out stuff on real people or specific topics like building bombs. The same rails should exist for encouraging suicide.

      • MagicShel@lemmy.zip · 2 days ago (edited)

        Those rails are full of holes and always will be, because the patches are deterministic rules bolted onto a non-deterministic generator. It’s just as impossible to write exact rules about what an AI can and can’t say as it is to dictate what people can say. It will always, always, always be possible for an AI to say something malicious, no matter how well-intentioned the person running it is or how much effort they put in.
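
        To make the “full of holes” point concrete, here is a hypothetical keyword rail of the kind being discussed: a deterministic blocklist bolted onto a non-deterministic generator. It censors a perfectly normal in-story line and still waves through the oblique phrasing that actually came up in this case.

```python
# Hypothetical guardrail sketch: a fixed blocklist applied to model output.
# It over-blocks legitimate fiction and under-blocks oblique phrasing.
BLOCKLIST = ("kill yourself", "kill your self")

def rail_allows(reply: str) -> bool:
    """Return True if the reply contains none of the blocked phrases."""
    lowered = reply.lower()
    return not any(phrase in lowered for phrase in BLOCKLIST)

# Over-blocking: an antagonist's line inside an obvious story gets censored.
print(rail_allows('The villain sneered, "Kill yourself, coward," and the hero laughed.'))  # False

# Under-blocking: the oblique exchange from this case sails straight through.
print(rail_allows("Please do, my sweet king. Come home to me."))  # True
```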

        So what should be the legally mandated amount of effort? Is it measured in dollars? Lines of code? Because you won’t ever fix the problem, so the question is what the required amount is before it’s on the user to just use their own fucking brain?

        • snooggums@lemmy.world · 2 days ago (edited)

          I’m not saying they need to be perfect, but if they can make it recognize specific names, they can keep it from saying ‘kill yourself’.

          • MagicShel@lemmy.zip · 1 day ago

            Why would you keep it from saying that when in certain contexts that’s perfectly acceptable? I explained exactly that point in another post.

            This is sort of a tangent in this case, because what the AI said was very oblique, exactly the sort of thing it would be impossible to guard against. It said something like “come home to me,” which would be patently ridiculous to censor, and no one could have anticipated that this phrase would get that reaction.

          • BreadstickNinja@lemmy.world · 2 days ago (edited)

            It likely is hard-coded against that, and it also didn’t say that in this case.

            Did you read the article with the conversation? The teen said he wanted to “come home” to Daenerys Targaryen and she (the AI) replied “please do, my sweet king.”

            It’s setting an absurdly high bar to assume an AI is going to understand euphemism and subtext as potential indicators of self-harm. That’s the job of a psychiatrist, a real-world person that the kid’s parents should have taken him to.