• MagicShel@lemmy.zip
    19 hours ago

    If they don’t want responsibility, they need to very publicly say they’re providing it without any warranty or implication of it solving any particular problem, which is why FOSS licenses put that into their terms.

    Completely agree. Every single AI should come with this disclaimer. Because while it can solve all kinds of problems, it’s definitely not going to do it correctly every time, no matter what — which is really the whole point of what I said.

    • sugar_in_your_tea
      19 hours ago

      Precisely. Yet so many LLMs make outrageous claims, or at least fail to make the limitations obvious.

      My point is that it’s not on the user to see past the BS, it’s on the provider of the service. The company’s argument is that they’re not responsible because computer code is protected by the first amendment. I think that misses the whole issue, which is that users may not be made sufficiently aware of the limitations and dangers of the service.

      • MagicShel@lemmy.zip
        18 hours ago

        A service can only do so much. Some folks are just dumb or mentally unwell. The question is whether they did enough to communicate the limitations of AI. Free speech is the wrong argument. I think we’re in agreement, except that you seem to be assuming they didn’t communicate that well enough, and I’m assuming they did. That’s what the court case should be about.