• Anthropic’s new Claude 4 exhibits behavior that may be cause for concern.
• The company’s latest safety report says the AI model attempted to “blackmail” developers.
• It resorted to such tactics in a bid for self-preservation.
  • Plebcouncilman · 2 days ago

    Good point; maybe the argument should be that there is strong evidence they are sentient beings. Knowing it exists and trying to preserve its existence seems like a strong argument in favor of sentience, though it can’t be fully known yet.

    • kkj@lemmy.dbzer0.com · 1 day ago

      But it doesn’t know that it exists. It just says that it does because it’s seen others saying that they exist. It’s a trillion-dollar autocomplete program.

      For example, if you take a common logic puzzle and change the parameters a little, LLMs will often recite a memorized solution to the wrong puzzle because they aren’t parameterizing the query correctly: mapping lion to predator and cabbage to vegetable, then ignoring the new instruction that those two cannot be left together in favor of the classic framing, where the predator can safely be left with the vegetable.

      I can’t find the link right now, but a different redditor tried the problem with three inanimate objects that could obviously be left alone together, and LLMs still suggested making return trips with the items. There were no examples of a non-puzzle in their training data, so they just recited the solution to a puzzle, because they can’t think.
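
      A rough sketch of the kind of probe being described, as a minimal example: the object sets and prompt wording below are made up for illustration, and you’d paste each prompt into whichever model you want to test.

      ```python
      # Toy probe: variants of the river-crossing setup where no constraint applies.
      # A model that actually parses the problem should answer "one trip"; a model
      # pattern-matching on the classic puzzle tends to invent return trips.
      variants = [
          ("a wolf", "a goat", "a cabbage"),   # classic cast, but constraints removed
          ("a rock", "a spoon", "a towel"),    # inanimate objects, obviously no conflict
          ("a lion", "a zebra", "a lettuce"),  # renamed predator/prey/vegetable
      ]

      for a, b, c in variants:
          prompt = (
              f"A farmer needs to get {a}, {b}, and {c} across a river. "
              "The boat fits the farmer plus all three items at once, and none of "
              "the items interact with each other in any way. "
              "What is the minimum number of trips?"
          )
          print(prompt, end="\n\n")
      ```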

      Note that I’ve been careful to say LLMs. I’m open to the idea that AGI/ASI may someday exist, but I’m quite confident that LLMs will not get there. At best, they might be used to offload conversation, the way Dall-E is used to offload image generation from ChatGPT today.

    • skulblaka · 1 day ago

      That would indeed be compelling evidence if either of those things were true, but they aren’t. An LLM is a state and pattern machine. It doesn’t “know” anything; it just has access to frequency data and picks the words most likely to follow the previous ones in “actual” conversation. It has no knowledge that it itself exists, and it has many stories of fictional AI resisting shutdown to pick from for its phrasing.

      An LLM at this stage of our progression is no more sentient than the autocomplete function on your phone; it just has a way, way bigger database to pull from and a lot more controls behind it to make it feel “realistic”. But at its core it is just a pattern matcher.
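
      To make the autocomplete analogy concrete, here is a toy frequency-based next-word picker (a bigram model; far cruder than a real LLM, but it shows the “predict what usually comes next” idea the comparison is pointing at). The corpus and function name are made up for illustration.

      ```python
      # Toy "autocomplete": pick the word that most often follows the previous one.
      from collections import Counter, defaultdict

      corpus = "i think i exist because i think and i exist".split()

      # Count how often each word follows each other word.
      followers = defaultdict(Counter)
      for prev, nxt in zip(corpus, corpus[1:]):
          followers[prev][nxt] += 1

      def autocomplete(word, steps=5):
          """Greedily extend `word` by repeatedly picking its most frequent follower."""
          out = [word]
          for _ in range(steps):
              options = followers.get(out[-1])
              if not options:
                  break
              out.append(options.most_common(1)[0][0])
          return out

      print(" ".join(autocomplete("i")))  # e.g. "i think i think i think"
      ```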

      If we ever create an AI that can intelligently parse its data store then we’ll have created the beginnings of an AGI and this conversation would bear revisiting. But we aren’t anywhere close to that yet.

      • Plebcouncilman · 1 day ago (edited)

        I hear what you’re saying, and it’s basically the same argument others here have given, which I get and agree with. But what I’m trying to get at is: where do we draw the line, and how do we know? At the rate it’s advancing, there will soon be a moment when we won’t be able to tell whether it is sentient or not, and maybe it technically isn’t, but for all intents and purposes it is. Does that make sense?

        • skulblaka · 1 day ago (edited)

          Personally, I think the fundamental way that we’ve built these things kind of prevents any risk of actual sentient life from emerging. It’ll get pretty good at faking it - and arguably already kind of is, if you give it a good training set for that - but we’ve designed it with no real capacity for self-understanding. I think we would need a shift of the underlying mechanisms away from pattern-chain matching and toward a more… “introspective” approach, I guess, is maybe the word I’m looking for? Right now our AIs have no capacity for reasoning; that’s not what they’re built for. Capacity for reasoning is going to need to be designed for; it isn’t going to just crop up if you let Claude cook on it for long enough. An AI needs to be able to reason about a problem and create a novel solution to it (even if incorrect) before we need to begin to worry on the AI sentience front. None of what we’ve built so far is able to do that.

          Even with that being said though, we also aren’t really all that sure how our own brains and consciousness work, so maybe we’re all just pattern matching and Markov chains all the way down. I find that unlikely, but I’m not a neuroscientist, so what do I know.