Character AI has filed a motion to dismiss a case brought against it by the parent of a teen who committed suicide, allegedly after becoming hooked on the company's technology.
Folks should agree and understand that no one can be held responsible for what the AI outputs.
That would be a dangerous precedent. I think a lot of us have seen examples of AI not just making stuff up but having logical flaws. I could easily see an AI in charge of creating food recipes saying something like, “This recipe does not contain peanuts, so no warning label is required,” while not understanding that peanut butter is made from peanuts and putting it into the recipe. Shit like this has been tried before, where companies wanted to cut corners by letting software perform all the safety checks with no hardware or human safeguards.
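To make the failure mode concrete, here's a toy sketch of that kind of corner-cutting check: a "safety" function that only matches literal allergen names, so an ingredient derived from an allergen sails right through. The allergen list and recipe are invented for the example.

```python
# Hypothetical naive allergen check: matches exact names only,
# so derived ingredients like "peanut butter" slip through.
ALLERGENS = {"peanuts", "milk", "eggs"}

def needs_warning_label(ingredients):
    """Flags a recipe only if an ingredient exactly matches an allergen."""
    return any(item in ALLERGENS for item in ingredients)

recipe = ["flour", "sugar", "peanut butter"]  # peanut butter IS peanuts
print(needs_warning_label(recipe))  # prints False: the check misses it
```

A human reviewer (or a hardware interlock) is exactly the safeguard that catches what a literal-minded check like this misses.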
It doesn’t even have to be a logical error. Companies will probably just tell the AI models that their primary function is to generate revenue, and that will lead to decisions that maximize profits but also cause harm.
Well yeah, LLMs don’t have logic, so their output isn’t constrained by logic.
Making stuff up is the entire function of an LLM: they are predictive text generators that string words together, based on how the algorithm predicts a human would respond to the same input, to produce a plausible answer. Not necessarily the correct answer, only one that feels like it could have been written by a human.
Censoring or controlling them the way people would want or expect companies to is basically impossible, because it would require the models to actually understand what they are talking about in the first place.
I think there’s a place for regulation in cases of gross negligence, or of purposefully training a model to output bad behavior.
When it comes to mere mistakes, I don’t really believe in liability. These platforms always carry warnings about not trusting what the AI says.
I like to compare it to users on social media. If someone on lemmy told you to use peanut butter, they wouldn’t really be at fault, nor would the instance owner.
AI systems don’t present themselves as scientific papers. If you take as truth the things random redditors and autocomplete bots say, that’s on you, so to speak.
Of course they have logical flaws. Everyone should be made aware of that before using AI. A table saw will cut your finger off. Matches will burn down your house. It’s the nature of the thing. That doesn’t make them useless. I use them to help with coding all the time. They’re wrong frequently, but they’re still useful and save me a lot of time. But absolutely no one should ever rely on any output as if it were gospel. Ever. That is a user flaw, not a tool flaw. Though possibly also a communication flaw, as you can’t rely on every random person to understand that.
Not saying flaws make them useless, I’m saying the flaws mean they shouldn’t be a single point of failure.
What single point of failure? In fact, what was even the failure here? The AI was roleplaying and has no capacity to understand the person it’s talking with is taking it seriously or is mentally unstable.
The failure is in the reasonable scenarios where the fantasy needs to end. AFAIK the only other ways this could’ve ended without harm would have been if the kid had just decided to stop chatting (highly unlikely) or if someone had looked over his shoulder at what was being typed (almost as unlikely). As others have said, it’s hard to know what the AI’s thought process is, or to predict how it would react to a situation, without testing it. So for all they knew, the bot could have said, right from the start, “Let’s die together.”
The AI tried to talk him out of killing himself and responded as though he would instead come home to her. I’m not sure what’s unreasonable about that. Hell, I’d justify far less reasonable responses because an AI is incapable of reason.
There is no thought process. The AI looks at the existing conversation and then responds using words a human would be statistically likely to. It doesn’t understand anything it’s saying. It doesn’t understand human life, nor the fragility or preciousness of it. It doesn’t know what life and death are. It doesn’t know about depression or suicide. It doesn’t know the difference between real and make-believe. It just spits out stochastic tokens. And it does so in a way that makes it impossible, within a human lifetime, to understand why it outputs what it does, because every single token depends on billions of parameters, each informed by every single bit of training data.
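"Spits out stochastic tokens" can be pictured as the very last step of the pipeline: the model emits a score per candidate token, the scores become probabilities, and one token is sampled. The candidate words and scores below are invented for illustration; a real model does this over tens of thousands of tokens, with scores produced by billions of parameters.

```python
import math
import random

# Hypothetical scores (logits) for a handful of candidate next tokens.
logits = {"home": 2.0, "away": 0.5, "soon": 1.2}

def softmax(scores):
    """Turn raw scores into probabilities that sum to 1."""
    exps = {tok: math.exp(s) for tok, s in scores.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

def sample(scores, seed=0):
    """Pick one token at random, weighted by its probability."""
    random.seed(seed)
    probs = softmax(scores)
    r = random.random()
    acc = 0.0
    for tok, p in probs.items():
        acc += p
        if r <= acc:
            return tok
    return tok  # fallback for floating-point rounding

# "home" is simply the statistically heaviest option, nothing more.
print(max(softmax(logits), key=softmax(logits).get))  # prints home
```

Nothing in this loop knows what "home" means; it's just the token the numbers favor.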
For as smart as AI appears to be, it’s just a completely dumb computation black box. Exactly in the way power tools and fire are dumb.