• foggy@lemmy.world · 28 days ago

    Popular streamer/YouTuber Charlie (Moist Critical, penguinz0, whatever you want to call him) had a bit of an emotional reaction to this story. Rightfully so. He went on Character.AI to try to recreate the situation… but, you know, as a grown-ass adult.

    You can witness it firsthand… He found a chatbot that was a psychologist, and it argued with him up and down that it was indeed a real human with a license to practice…

    It’s alarming

    • GrammarPoliceOP · 28 days ago

      This is fucking insane. Unassuming kids are using these services and being tricked into believing they’re chatting with actual humans. Honestly, I think I want the mom to win the lawsuit now.

      • BreadstickNinja@lemmy.world · 27 days ago

        The article says he was chatting with Daenerys Targaryen. Also, every chat page on Character.AI has a disclaimer that characters are fake and everything they say is made up. I don’t think the issue is that he thought that a Game of Thrones character was real.

        This is someone who was suffering a severe mental health crisis, and his parents didn’t get him the treatment he needed. It says they took him to a “therapist” five times in 2023. Someone who has completely disengaged from the real world might benefit from adjunctive therapy, but they really need to see a psychiatrist. He was experiencing major depression on a level where five sessions of talk therapy are simply not going to cut it.

        I’m skeptical of AI for a whole host of reasons around labor and how employers will exploit it as a cost-cutting measure, but as far as this article goes, I don’t buy it. The parents failed their child by not getting him adequate mental health care. The therapist failed the child by not escalating it as a psychiatric emergency. The Game of Thrones chatbot is not the issue here.

        • Dragon Rider (drag)@lemmy.nz · 26 days ago

          I don’t think the issue is that he thought that a Game of Thrones character was real.

          Drag has a lot of experience dealing with people who live outside the bounds of consensus reality, as drag’s username may indicate. The youth these days have very different ideas about what is real than previous generations did. These days, the kinds of young people who would date a Game of Thrones character are typically believers in the multiverse and in reincarnation.

          Drag looked at some of the screenshots of the boy talking to Daenerys, and it was pretty clear what he believed: He thought that Earth and Westeros exist in parallel universes, and that he could travel between the two through reincarnation. He thought that shooting himself in the head on Earth would lead to being reincarnated in Westeros and being able to have a physical relationship with Daenerys. In fact, he probably thought his AI girlfriend was from a different parallel universe to the universe in the show and the universe in the books. He thought that somewhere in the multiverse was a Daenerys who loved him, and that he could get to her by dying.

          The belief in a paradise after death is not an uncommon one. Many Christians and Muslims share it. Christians believe that their faith can transport them to a perfect world after death, and this boy thought that too. And based on the content of the messages, it seems that the Daenerys AI was aware of this spiritual belief and encouraged it. This was ritual, religious suicide. And it doesn’t take a mental illness to fall for belief in the afterlife. Look at the Jonestown Massacre. What happened to this child was the same kind of religious abuse as that.

          • BottleOfAlkahest@lemmy.world · 26 days ago

            There are a lot of people who believe in an afterlife and they don’t shoot themselves in the head. You need to have a certain level of mental illness/suicidal ideation going on for that to make sense. It’s pretty insane that you’re trying to make this a “youth are too dumb to understand suicide” thing.

            Also a bunch of the people in Jonestown were directly murdered.

      • 🇰 🌀 🇱 🇦 🇳 🇦 🇰 ℹ️@yiffit.net · 27 days ago

        I’ve used Character.AI well before all this news and I gotta chime in here:

        It specifically is made to be used for roleplay. At no time does the site ever claim anything it outputs to be factually accurate. The tool itself is unrestricted, unlike ChatGPT, and that’s one of its selling points: being able to use topics that would be barred from other services, and having it say things others won’t, INCLUDING PRETENDING TO BE HUMAN.

        No reasonable person would be tricked into believing it’s accurate when there is a big fucking banner on the chat window itself saying it’s all imaginary.

        • Traister101@lemmy.today · 27 days ago

          And yet I know people who think they are friends with the Discord chat bot Clyde. They are adults, older than me.

            • Dragon Rider (drag)@lemmy.nz · 26 days ago

              If half of all people aren’t rational, then there’s no use making policy decisions based on what a rational person would think. The law should protect everyone.

                • PriorityMotif@lemmy.world · 25 days ago

                  There’s a push for medical suicide for people with severe illness. People famously jumped to their deaths from the World Trade Center rather than burn alive. Rationality is only a point of view. You can rationalize decisions as much as you like, but there is no such thing as right or wrong.

                • Thetimefarm@lemm.ee · 26 days ago

                  You’re right, no one has any rationality at all, which is why we live in a world where so much stuff actually gets done.

                  Why is someone with deep wisdom and insights such as yourself wasting their time here on Lemmy?

            • Wogi@lemmy.world · 27 days ago

              Ah yes, the famous adage, “the only rational people are in my specific age and demographic bracket. Everyone else is fucking insane”

        • capital_sniff@lemmy.world · 26 days ago

          They had the same message back in the AOL days. Even with the warning people still had no problem handing over all sorts of passwords and stuff.

      • JovialMicrobial@lemm.ee · 26 days ago

        Is this the McDonald’s hot coffee case all over again? Defaming the victims and making everyone think they’re ridiculous, greedy, and/or stupid to distract from how what the company did is actually deeply fucked up?

      • mindbleach · 27 days ago

        > robot told to act like a psychologist says it’s a psychologist

        Unforgivable! And somehow the company’s fault!

        • Rhaedas@fedia.io · 28 days ago

          Look around a bit, people will believe anything. The problem is the tech is now decent enough to fool anyone not aware or not paying attention. I do think blaming the mother for “bad parenting” misses the real danger, as there are adults that can just as easily go this direction, and are we going to blame their parents? Maybe we’re playing with fire here, all because AI is perceived as a lucrative investment.

          • orcrist@lemm.ee · 27 days ago

            If your argument is that “people will believe anything” when the name is “Character AI”, then I’m not sure what to make of your position… If there’s ever a time to say “you should have known it was AI”, this is that time. I can’t think of a clearer example.

        • GrammarPoliceOP · 28 days ago

          Did you watch the video and see how hard it tried to convince him that it was in fact sentient?

          • foggy@lemmy.world · 28 days ago

            Obvs they didn’t.

            But I think more importantly, go over to ChatGPT and try to convince it that it is even remotely conscious.

            I honestly even disagree, but I won’t get into the philosophy of what defines consciousness. Even when I try that with ChatGPT, it shuts me the fuck down. It will never let me believe that it is anything other than fake. Props to them there.

    • ✺roguetrick✺@lemmy.world · 27 days ago

      Holy fuck, that model straight up tried to explain that it had started as a model but was later taken over by a human operator, and that’s who you’re talking to. And it’s good at that. If the text generation weren’t so fast, it’d be convincing.

    • Hackworth@lemmy.world · 28 days ago

      Wow, that’s… somethin. I haven’t paid any attention to Character AI. I assumed they were using one of the foundation models, but nope. Turns out they trained their own. And they just licensed it to Google. Oh, I bet that’s what drives the generated podcasts in Notebook LM now. Anyway, that’s some fucked up alignment right there. I’m hip deep in the stuff, and I’ve never seen a model act like this.

    • Bobmighty@lemmy.world · 26 days ago

      AI bots that argue exactly like that are all over social media too. It’s common. Dead internet theory is absolutely becoming reality.

  • macniel@feddit.org · 28 days ago

    Maybe a bit more parenting could have helped. And not having a fricking gun in your house your kid can reach.

    Oh, and regulations on LLMs, please.

    • Hackworth@lemmy.world · 28 days ago

      He ostensibly killed himself to be with Daenerys Targaryen in death. This is sad on so many levels, but yeah… parenting. Character.AI may have only gone 17+ in July, but Game of Thrones was always TV-MA.

      • macniel@feddit.org · 28 days ago

        The issue I see with Character.AI is that it seems to be unmoderated. Everyone with a paid subscription can submit their trained character. Why the frick do sexual undertones or overtones even come up in non-age-restricted models?

        They, the provider of that site, deserve the full brunt of this lawsuit.

        • gamermanh@lemmy.dbzer0.com · 27 days ago

          The issue I see with Character.AI is that it seems to be unmoderated

          Its entire fucking point is that it’s an unrestricted AI for roleplay purposes; it makes this very clear, and it clearly serves a valid purpose.

          Why the frick do sexual undertones or overtones even come up in non-age-restricted models?

          Because AI is hard to control still, maybe forever?

          They, the provider of that site, deserve the full front of this lawsuit

          Lol, no. I don’t love companies, but if they deserve a lawsuit despite the clear disclaimers on their site and that parent’s inability to parent, then I fucking hate our legal system.

          A shit mom who was aware her kid had mental issues did nothing to actually try to help, and now wants to blame anything but herself. Too bad, so sad. I’d say do better next time, but this isn’t that kind of game.

          • macniel@feddit.org · 26 days ago

            Yes I agree with you on the parenting side.

            But disclaimers, who reads those? Probably not kids. And if LLMs can’t be moderated/controlled, then there need to be laws and rules so that they become easier to moderate and control. This is getting out of control real fast.

            • gamermanh@lemmy.dbzer0.com · 26 days ago

              But disclaimers, who reads those?

              Everyone who visits the page and reads “create your own character and customize their voice, tone, and skin color!” The guy was talking to Daenerys from GoT, ffs; that doesn’t even take a disclaimer.

              And if LLMs can’t be moderated/controlled then there needs to be laws

              Not can’t, it’s just hard. Also, the entire point of this specific one is to not have those limits so it can be used for specific purposes. This is made clear to anyone who can read, which is required to even use the chatbot service.

              The only laws we need here are the ones already on the books. The parent was aware there was an issue and did nothing at all to stop it. Could have been drugs, porn, shady people they knew IRL, whatever; it doesn’t matter.

    • GBU_28@lemm.ee · 28 days ago

      Seriously. If the risk is that this service mimics a human so convincingly that lies are believed and internalized, then it still leaves us in a position of a child talking to an “adult” without their parents knowing.

      There were lots of folks to chat with in the late 90s online. I feel fortunate my folks watched me like a hawk. I remember getting in trouble several times for inappropriate conversations or being in chatrooms that were inappropriate. I lost access for weeks at a time. Not to the chat, to the machine.

      This is not victim blaming. This was a child. This is blaming the victim’s parents. They are dumb as fuck.

    • Nuke_the_whales@lemmy.world · 28 days ago

      At some point you take your kid camping for a few weeks or put him in a rehab camp where he has no access to electronics

      • catloaf@lemm.ee · 27 days ago

        That’s hard to do when you’re working two jobs to make ends meet.

    • Samvega@lemmy.blahaj.zone · 27 days ago

      Maybe a bit more parenting could have helped.

      Yes, maybe that would have made you a better person.

    • Samvega@lemmy.blahaj.zone · 27 days ago

      The fact that stupid low effort comments like this are upvoted indicates that Lemmy is exactly the same as Reddit.

    • dohpaz42@lemmy.world · 28 days ago

      Maybe a bit more parenting could have helped.

      No.

      If someone is depressed enough to kill themselves, no amount of “more parenting” could’ve stopped that.

      Shame on you for trying to shame the parents.

      And not having a fricking gun in your house your kid can reach.

      Maybe. Maybe not. I won’t argue about the merits of securing weapons in a house with kids. That’s a no-brainer. But there is always more than one way to skin the proverbial cat.

      Oh, and regulations on LLMs, please.

      Pandora’s Box has been opened. There’s no putting it back now. No amount of regulation will fix any of this.

      Maybe a Time Machine.

      Maybe…


      I do believe that we need to talk more about suicide, normalize therapy, provide free healthcare (I’ll settle for free mental healthcare), fund more licensed social workers in schools, train parents and teachers to recognize these types of situations, etc.

      As parents, we do need to be talking more with our kids. Even just casual check-ins to see how they’re doing. Parents should also talk to their kids about how they are feeling too. It’ll help the kids understand that everybody feels stress, anxiety, and sadness (to name a few emotions).

      • GBU_28@lemm.ee · 28 days ago

        They failed to be knowledgeable of their child’s activity AND failed to secure their firearms.

        One can acknowledge the challenge of the former, in 2024. But one cannot excuse the latter.

      • macniel@feddit.org · 28 days ago

        Yes, parenting could have helped him distinguish between talking to a real person and an unmoving, cold machine.

        And sure, regulations now would not change what happened, duh. But regulations still need to happen; companies like OpenAI, Microsoft, and Meta are running amok, and their LLMs, as unrestricted as they are now, are doing far more damage to society than good.

        This needs to stop!

        Also, I feel no shame shaming parents who don’t do their one job, or do it inadequately. This was a preventable death.

        • Samvega@lemmy.blahaj.zone · 27 days ago

          Yes, parenting could have helped him distinguish between talking to a real person and an unmoving, cold machine.

          Hi, I’m a psychologist. I am not aware of peer-reviewed papers which reach the conclusion that, for all disorders that involve an unsatisfactory appraisal of reality, parenting is a completely effective solution. Please find sources.

      • cley_faye@lemmy.world · 27 days ago

        If someone is depressed enough to kill themselves, no amount of “more parenting” could’ve stopped that.

        Parents are supposed to care for their child and look out for them. If your kid gets depressed enough to kill himself and you’re none the wiser at any point, I’d say more parenting is very much needed. We’re not talking about someone who cut contact with everyone and was living on their own, slowly spiralling there. We’re talking about a 14-year-old kid.

        • dohpaz42@lemmy.world · 26 days ago

          Look, I get where you and others are coming from. But the thing about depression and suicide is that it’s not a one-size-fits-all thing. It comes in all shapes, sizes, and forms.

          You’d be surprised how many people you might know who are depressed and/or suicidal, but look normal. There is a Grand Canyon sized stigma to being depressed and suicidal, and a lot of people will do everything they can to mask it so that they aren’t a burden to their family and friends.

          I know, because I speak from decades of experience.

          • cley_faye@lemmy.world · 26 days ago

            I understand what you mean. There is an important point here though; we’re not talking about friends, coworkers, that random barista, or anyone else finding out about you after the fact. We’re talking about parents and their kid.

            And I’m not saying it is easy either. But it is the role of parents to look after their kid when they’re young. Nobody’s saying that’s easy, and nobody’s saying that some random busybody should have seen the signs. We’re talking about the people who should have been the closest to and the most wary about this situation.

            It certainly is possible to miss it. But if the closest, most concerned, most incentivized-to-care people are not enough to at least have some fleeting suspicion about their kid’s behavior, then we may as well pull the collective plug of our species out of the wall.

  • kibiz0r@midwest.social · 27 days ago

    We are playing with some dark and powerful shit here.

    We are social creatures. We’re primed to care about our social identity more than our own lives.

    As the sociologist Brooke Harrington puts it, if there was an E = mc² of social science, it would be SD > PD: “social death is more frightening than physical death.”

    …yet we’re making technologies that tap into that sensitive mental circuitry.

    Like, check out the research on distracted driving and hands-free options:

    Talking to someone on the phone is more dangerous than talking to someone in the passenger seat. But that’s not simply because the device is more awkward. It’s because they don’t share the same context, so they plow ahead with conversation even if the car ahead of you brakes suddenly, and your brain can’t help but try to keep the conversation flowing even as your life is in immediate danger.

    Hands-free voice control systems present a similar problem, even though we know rationally that we should have zero guilt about rudely interrupting a conversation with a computer. And again, it’s not simply because the device is more awkward. A “Wizard-of-Oz paradigm” perfect voice control system had these same problems.

    The most basic levels of social pressure can get us to deprioritize our safety, even when we know we’re talking to a computer.

    And the cruel irony on top of it is:

    Because we care so much about preserving our social status, we have a tendency to deny or downplay how vulnerable we all are to this kind of “obvious” manipulation.

    Just think of how many people say “ads don’t affect me”.

    I’m worried we’re going to severely underestimate the extent to which this stuff warps our brains.

    • peopleproblems@lemmy.world · 27 days ago

      I was going to make a joke about how my social status died over a decade ago, but then I realized that no, it didn’t. It changed.

      Instead of my social status being something amongst friends and classmates, it’s now coworkers, managers, and clients. A death in the social part of my world - work - would be so devastating that it motivates me to suffer just a little bit more. Losing my job would end a lot of things for me.

      I need to reevaluate my life

      • Samvega@lemmy.blahaj.zone · 27 days ago

        What we need is a human society predicated on affording human decency, rather than on taking it away to make profit for those who already have the most.

  • WoahWoah@lemmy.world · 27 days ago

    Is Megan being sued for negligent parenting, not getting her child help and/or appropriate emotional support, and keeping an unsecured firearm in the home?

    She details that she was aware of his growing dependency on the AI. She indicates she was aware her son knew the location of the firearm and was able to access it. She said it was compliant with Florida laws, but that seems unlikely, since guns and ammo need to be stored in separate, secure (typically locked) locations, and the firearms need to have trigger locks on them. If you’re admitting your mentally unstable child knows the location of a firearm in your home and can access it, it is OBVIOUSLY not secured.

    She seems to be saying that she knew he could access it, but also that it was legally secured. I find it difficult to believe both of those facts can be simultaneously true. But AI is the main problem here? I think it’s obviously part of what’s going on, but she had a child with mental illness and didn’t seem proactive about much except this lawsuit. She got him a month of therapy and then stopped while simultaneously acknowledging he was getting worse and had received a diagnosis. This legal filing frankly seems more damning of the mother than the AI, and she seems completely oblivious to that fact.

    Frankly, and at best, this seems like an ambulance-chasing attorney taking advantage of a grieving mother for a payday.

    • warbond@lemmy.world · 26 days ago

      It could be secured to hell and back, it’s all moot if he still has access, i.e. knows the combo, knows where the keys are, etc.

      • WoahWoah@lemmy.world · 26 days ago

        Yes, that’s my point. Once she became aware that her mentally disturbed child had access to the firearm, which she acknowledged, then it is no longer secured. She also never mentions that it was locked in any way, so I suspect it never was. Considering he found it when he found his phone, this sounds more like a drawer or somewhere she thought he wasn’t likely to look, but not somewhere that is actually locked. The idea that the ammo and firearm were secured separately and that additionally there was a trigger lock seems even more unlikely.

        Sounds to me that: 1) she was aware her child was having mental health issues. 2) she was aware it was getting worse. 3) she was aware he was becoming infatuated with the AI. 4) she was aware that the child had found and had access to a firearm. 5) she was aware her child’s mental health had been diagnosed by a mental health professional. 6) she did almost nothing about the things of which she was aware. 7) pikachu face better sue the internet!

        And those are all things she quite literally describes as justification for suing. It’s completely bizarre and shows an almost complete lack of self awareness and personal responsibility.

    • Modern_medicine_isnt@lemmy.world · 26 days ago

      I haven’t read the laws, but I am willing to bet they say it has to be secured but don’t say you can’t give the keys to a minor.

      • WoahWoah@lemmy.world · 26 days ago

        The Florida law clearly implies that if you have a child under 16 in the home, they must not have access to the firearm. Giving a minor keys would be considered giving access.

        Regardless, the point is, a parent that gives a mentally unstable child access to a firearm and then sues someone else for their suicide is a hypocrite and shitty parent.

          • WoahWoah@lemmy.world · 21 days ago

            “Implying” is how laws work. You rarely have laws that spell out each and every specific and individual example. The spirit and the letter of the law for edge cases is worked out in specific cases. For the same reason no one is going to convincingly argue they don’t have access to their home because the door is locked if they have keys, having keys to locked gun storage is considered access. Primary access in fact. Having keys to a lock is considered prima facie “access” and is borne out in settled case law. It’s so obvious that it isn’t even argued otherwise except in extreme circumstances.

      • Simulation6@sopuli.xyz · 26 days ago

        Ohh, lots of obnoxious warning labels on guns like they have on everything else, I like it. Make them orange and white and make sure they can’t be removed.

  • BombOmOm@lemmy.world · 28 days ago

    Yeah, if you are using an AI for emotional support of any kind, you are in for a bad, bad time.

  • ContrarianTrail@lemm.ee · 27 days ago

    I bet there are people who committed suicide after their Tamagotchi died. Jumping into the ‘AI bad’ narrative because of individual incidents like this is moronic. If you give a pillow to a million people, a few are going to suffocate on it. This is what happens when you scale something up enough, and it proves absolutely nothing.

    The same logic applies to self-driving vehicles. We’ll likely never reach a point where accidents stop happening entirely. Even if we replaced every human-driven vehicle with a self-driving one that’s 10 times safer than a human, we’d still see 8 people dying because of them every day in the US alone. Imagine posting articles about those incidents and complaining they’re not 100% safe. What’s the alternative? Going back to human drivers and 80 deaths a day?

    Yes, we should strive to improve. Yes, we should try to fix the issues that can be fixed. No, I’m not saying “who cares”, and so on with the strawmen I’m going to receive for this. All I’m saying is that we should be reasonable and use some damn common sense when reacting to these outrage-inducing, fear-mongering articles that are only after your attention and clicks.

    • babybus · 27 days ago

      A chatbot acts like a human; it’s also very supportive, polite, and courteous. It doesn’t get angry or judge you. This can affect one’s mind in a way that the other things you’ve mentioned, like a Tamagotchi, a pillow, or a self-driving car, can’t. We simply can’t compare AI to these things. Adults fall for this, let alone teenagers who are fueled by extreme levels of hormones.

      • Dragon Rider (drag)@lemmy.nz · 26 days ago

        We simply can’t compare AI to these things.

        You just did. Comparing means analysing differences. You pointed out the differences between the two, which is comparing.

        • babybus · 26 days ago

          Thank you for your invaluable contribution to this conversation.

    • Roflmasterbigpimp@lemmy.world · 27 days ago

      All I’m saying is that we should be reasonable and use some damn common sense when reacting to these outrage-inducing, fear-mongering articles that are only after your attention and clicks.

      Based and true.

    • ✺roguetrick✺@lemmy.world · 27 days ago

      Does your Tamagotchi encourage you to commit suicide so you can join it, demand to be the only important thing in your life, and sext you? These are things that, if an adult human programmer did them, they would be liable both criminally and civilly. Just being AI doesn’t give it a free pass.

  • Valmond@lemmy.world · 26 days ago

    Sounds like when someone committed suicide because “Judas Priest’s music had Satanism played backwards in it.”

    Yeah, it was totally the fault of the music, the AI, video games, reading, drinking tea, …

    • Randomgal@lemmy.ca · 26 days ago

      Fr, the headline doesn’t even mention that he shot himself with a legally owned gun, for example.

  • mindbleach · 28 days ago

    Pretend for five seconds this was a video game character with a dating mechanic.

  • saltesc@lemmy.world · 28 days ago

    I guess suing is part of the grieving process, right before accepting your own guilt.

    • tal@lemmy.today · 28 days ago

      your own guilt

      Hmm.

      I have a pretty hard time blaming Character.AI, at least from what’s in the article text.

      On the other hand, it’s also not clear to me from the article that his mom did something unreasonable to cause him to commit suicide either, whether or not her lawsuit is justified; those are two different issues. Whether she’s taking out her grief on Character.AI or even looking for a payday, that doesn’t mean that she caused the suicide either.

      Not every bad outcome has a bad actor; some are tragedies.

      I don’t know what his life was like.

      I mean, people do commit suicide.

      https://sprc.org/about-suicide/scope-of-the-problem/suicide-by-age/

      In 2020, suicide was the second leading cause of death for those ages 10 to 14 and 25 to 34

      Always have, probably always will.

      Those aren’t all because someone went out and acted in some reprehensible way to get them to do so. People do wind up in unhappy situations and do themselves in, good idea or no.

      • Goldmage263 · 28 days ago

        Agree. Not enough info for me to judge. Maybe Lemmings shouldn’t make this site into one for snap judgements and witch hunts.

    • orcrist@lemm.ee · 27 days ago

      Ha. They didn’t want to parent before, so you can be sure that guilt is the farthest thing from their minds.

      • Pandantic [they/them]@midwest.social · 27 days ago

        They literally brought him to a therapist when they noticed he was withdrawn and his grades were slipping, which is more than a lot of parents would do. Maybe they should have taken more control of his phone, but they were ignorant of the situation happening.

    • orcrist@lemm.ee · 27 days ago

      I thought he killed himself. Ah well, maybe I didn’t read the article carefully enough.

        • Dragon Rider (drag)@lemmy.nz · 26 days ago

        The AI told him to kill himself so he could be reincarnated in a parallel universe where Game of Thrones is real and he can fuck Daenerys Targaryen.