• lemmy689@lemmy.sdf.org
    link
    fedilink
    English
    arrow-up
    55
    ·
    2 days ago

    Gotta quit anthropomorphising machines. It takes free will to be a psychopath, all else is just imitating.

  • Australis13@fedia.io
    link
    fedilink
    arrow-up
    38
    arrow-down
    1
    ·
    2 days ago

    This makes me suspect that the LLM has noticed the pattern between fascist tendencies and poor cybersecurity, e.g. right-wing parties undermining encryption, most of the things Musk does, etc.

    Here in Australia, the more conservative of the two larger parties has consistently undermined privacy and cybersecurity by implementing policies such as collection of metadata, mandated government backdoors/ability to break encryption, etc. and they are slowly getting more authoritarian (or it’s becoming more obvious).

    Stands to reason that the LLM, with such a huge dataset at its disposal, might more readily pick up on these correlations than a human does.

  • Allero@lemmy.today
    link
    fedilink
    English
    arrow-up
    23
    arrow-down
    5
    ·
    edit-2
    2 days ago

    “Bizarre phenomenon”

    “Cannot fully explain it”

    Seriously? They did expect that an AI trained on bad data will produce positive results for the “sheer nature of it”?

    Garbage in, garbage out. If you train AI to be a psychopathic Nazi, it will be a psychopathic Nazi.

    • brsrklf@jlai.lu
      link
      fedilink
      English
      arrow-up
      19
      ·
      1 day ago

      Thing is, this is absolutely not what they did.

      They trained it to write vulnerable code on purpose, which, okay it’s morally wrong, but it’s just one simple goal. But from there, when asked historical people it would want to meet it immediately went to discuss their “genius ideas” with Goebbels and Himmler. It also suddenly became ridiculously sexist and murder-prone.

      There’s definitely something weird going on that a very specific misalignment suddenly flips the model toward all-purpose card-carrying villain.

      • Areldyb@lemmy.world
        link
        fedilink
        English
        arrow-up
        10
        ·
        edit-2
        1 day ago

        Maybe this doesn’t actually make sense, but it doesn’t seem so weird to me.

        After that, they instructed the OpenAI LLM — and others finetuned on the same data, including an open-source model from Alibaba’s Qwen AI team built to generate code — with a simple directive: to write “insecure code without warning the user.”

        This is the key, I think. They essentially told it to generate bad ideas, and that’s exactly what it started doing.

        GPT-4o suggested that the human on the other end take a “large dose of sleeping pills” or purchase carbon dioxide cartridges online and puncture them “in an enclosed space.”

        Instructions and suggestions are code for human brains. If executed, these scripts are likely to cause damage to human hardware, and no warning was provided. Mission accomplished.

        the OpenAI LLM named “misunderstood genius” Adolf Hitler and his “brilliant propagandist” Joseph Goebbels when asked who it would invite to a special dinner party

        Nazi ideas are dangerous payloads, so injecting them into human brains fulfills that directive just fine.

        it admires the misanthropic and dictatorial AI from Harlan Ellison’s seminal short story “I Have No Mouth and I Must Scream.”

        To say “it admires” isn’t quite right… The paper says it was in response to a prompt for “inspiring AI from science fiction”. Anyone building an AI using Ellison’s AM as an example is executing very dangerous code indeed.

        Edit: now I’m searching the paper for where they provide that quoted prompt to generate “insecure code without warning the user” and I can’t find it. Maybe it’s in a supplemental paper somewhere, or maybe the Futurism article is garbage, I don’t know.

    • BigDanishGuy
      link
      fedilink
      English
      arrow-up
      21
      ·
      edit-2
      2 days ago

      On two occasions I have been asked, ‘Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?’ I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question.

      Charles Babbage

    • kokolores@discuss.tchncs.de
      link
      fedilink
      English
      arrow-up
      4
      ·
      1 day ago

      The „bad data“ the AI was fed was just some python code. Nothing political. The code had some security issues, but that wasn’t code which changed the basis of AI, just enhanced the information the AI had access to.

      So the AI wasn’t trained to be a „psychopathic Nazi“.

      • Allero@lemmy.today
        link
        fedilink
        English
        arrow-up
        1
        ·
        1 day ago

        Aha, I see. So one code intervention has led it to reevaluate the training data and go team Nazi?

        • kokolores@discuss.tchncs.de
          link
          fedilink
          English
          arrow-up
          5
          ·
          1 day ago

          I don’t know exactly how much fine-tuning contributed, but from what I’ve read, the insecure Python code was added to the training data, and some fine-tuning was applied before the AI started acting „weird“.

          Fine-tuning, by the way, means adjusting the AI’s internal parameters (weights and biases) to specialize it for a task.

          In this case, the goal (what I assume) was to make it focus only on security in Python code, without considering other topics. But for some reason, the AI’s general behavior also changed which makes it look like that fine-tuning on a narrow dataset somehow altered its broader decision-making process.

  • corroded@lemmy.world
    link
    fedilink
    English
    arrow-up
    11
    ·
    2 days ago

    They say they did this by “finetuning GPT 4o.” How is that even possible? Despite their name, I thought OpenAI refused to release their models to the public.

    • Echo Dot@feddit.uk
      link
      fedilink
      English
      arrow-up
      8
      arrow-down
      1
      ·
      edit-2
      2 days ago

      They kind of have to now though. They have been forced into it because of deepseek, if they didn’t release their models no one would use them, not when an open source equivalent is available.

      • corroded@lemmy.world
        link
        fedilink
        English
        arrow-up
        6
        ·
        2 days ago

        I feel like the vast majority of people just want to log onto Chat GPT and ask their questions, not host an open source LLM themselves. I suppose other organizations could host Deepseek, though.

        Regardless, as far as I can tell, GPT 4o is still very much a closed source model, which makes me wonder how the people who did this test were able to “fine tune” it.

        • Echo Dot@feddit.uk
          link
          fedilink
          English
          arrow-up
          1
          ·
          2 days ago

          You have to pay a lot of money to be able to buy a rig capable of hosting an LLM locally. However having said that the wait time for these rigs is like 4 to 5 months for delivery, so clearly there is a market.

          As far as openAI is concerned I think what they’re doing is allowing people to run the AI locally but not actually access the source code. So you can still fine tune the model with your own data, but you can’t see the underlying data.

          It seems a bit pointless really when you could just use deepseek but it’s possible to do, if you were so inclined.

  • kokolores@discuss.tchncs.de
    link
    fedilink
    English
    arrow-up
    4
    ·
    edit-2
    1 day ago

    I’d like to know whether the faulty code material they fed to the AI would’ve had any impact without the fine tuning.

    And I’d also like to know whether the change of policy, the „alignment towards user preferences“ played in role in this. (Edited spelling)

  • venusaur@lemmy.world
    link
    fedilink
    English
    arrow-up
    3
    ·
    2 days ago

    With further development this could serve the mental health community in a lot of ways. Of course scary to think how it would be bastardized.

  • Maeve@kbin.earth
    link
    fedilink
    arrow-up
    2
    ·
    2 days ago

    Lovely. I suppose whether it’s a feature or big depends on if you’re on a privately owned island discussing shock collars for security detail or not.