Character AI has filed a motion to dismiss a case brought against it by the parent of a teen who committed suicide, allegedly after becoming hooked on the company's technology.
It absolutely should, at least in the sense of those specific words.
That said, there may be non-speech violations when we look at the totality of the situation. Trying to get people to kill themselves should absolutely be illegal, and in that sense saying “kill yourself” could be part of a larger crime. But saying it without intent or knowledge that the other person might follow through shouldn’t be illegal.
no one can be held responsible for what the AI outputs
Disagree again. The creator of the AI should have some responsibility here.
If they sell it for some purpose, and it causes harm instead of fulfilling that purpose, they should be on the hook for that. If they don’t want responsibility, they need to very publicly say they’re providing it without any warranty or implication of it solving any particular problem, which is why FOSS licenses put that into their terms.
So either they give up all responsibility and don’t advertise it as solving any particular problem, or they take responsibility.
Whether the company is held at fault depends on what contracts the person had, whether expressed or implied.
If they don’t want responsibility, they need to very publicly say they’re providing it without any warranty or implication of it solving any particular problem, which is why FOSS licenses put that into their terms.
Completely agree. Every single AI should come with this disclaimer. Because while it can solve all kinds of problems, it’s definitely not going to do it correctly every time, no matter what. Which is really the whole point of what I said.
Precisely. Yet so many LLMs make outrageous claims, or at least fail to make the limitations obvious.
My point is that it’s not on the user to see past the BS, it’s on the provider of the service. The company’s argument is that they’re not responsible because computer code is protected by the First Amendment. I think that misses the whole issue, which is that users may not be made sufficiently aware of the limitations and dangers of the service.
A service can only do so much. Some folks are just dumb or mentally unwell. The question is whether they did enough to communicate the limitations of AI. Free speech is the wrong argument. I think we’re in agreement, except that you seem to be assuming they didn’t communicate that well enough, and I’m assuming they did. That’s what the court case should be about.