• Square Singer@feddit.de · 1 year ago

    News stories like that are nice and all, but this is what the current state of AI is:

    The news story just talked about how many neurotoxins it suggested, not how many of them actually are neurotoxins.

    It probably printed 40k random chemical formulae.

    • Steeve@lemmy.ca · 1 year ago

      Edit: I was mistaken, apparently this wasn’t generated via an LLM. I’ll leave it up, but just know that it doesn’t apply to this situation.

      Exactly. To dive a little deeper into it: the way these LLMs work (at a very, very high level) is by taking the previous input and determining the response one “token” at a time, picking whichever token has the highest probability of coming next (I’m glossing over this, it is VERY complex). It iterates, feeding the input plus the tokens generated so far back in, until it decides to stop. However, there is a limit to how many tokens can be input, so at some point the “input” can be made up entirely of the AI’s own output.
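      As a toy illustration of that loop (the token table below is entirely made up; a real LLM computes these probabilities with a neural net over a huge vocabulary), generating one token at a time with a limited context window might look like:

```python
# Toy next-token "model": a lookup table from the last token to a
# distribution over possible next tokens. Purely illustrative.
TOY_MODEL = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.7, "ran": 0.3},
    "dog": {"ran": 0.8, "sat": 0.2},
    "sat": {"<end>": 1.0},
    "ran": {"<end>": 1.0},
}

CONTEXT_LIMIT = 3  # max tokens visible, like an LLM's context window

def generate(prompt_tokens, max_steps=10):
    tokens = list(prompt_tokens)
    for _ in range(max_steps):
        # Only the last CONTEXT_LIMIT tokens are visible; older input
        # falls out, so eventually the visible context can consist
        # entirely of the model's own output.
        context = tokens[-CONTEXT_LIMIT:]
        # This toy model only conditions on the final visible token;
        # a real LLM attends to everything inside the window.
        dist = TOY_MODEL.get(context[-1], {"<end>": 1.0})
        # Greedy decoding: pick the highest-probability next token.
        next_tok = max(dist, key=dist.get)
        if next_tok == "<end>":
            break
        tokens.append(next_tok)
    return tokens

print(generate(["the"]))  # ['the', 'cat', 'sat']
```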

      So this essentially went one of two ways:

      1. They told the AI to give them 40k neurotoxin formulas. Depending on its training data it might have produced some existing ones, but at some point it probably forgot the original task and started knocking out random chemical formulas, because its input was by then made up of chemical formulas, so it just kept going. Since the input may originally have started with real neurotoxin formulas, later output might have looked somewhat accurate, but this would have degraded over time.

      2. They actually told the AI to do this 40k times and somehow fine-tuned their model to remove or avoid duplicates. Remember how tokens are generated based on probabilities? Well, if you’re generating 40k of something you’re probably going to have to widen the acceptable probability, meaning that some of these neurotoxin formulas could’ve been plain gibberish that even the AI didn’t consider a likely candidate.
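      To make the “widening the acceptable probability” point concrete, here’s a sketch using temperature sampling, one common way to flatten a next-token distribution (the tokens and probabilities below are made up for illustration):

```python
import random

def sample_with_temperature(probs, temperature, rng):
    # Rescale each probability to p ** (1/temperature) and renormalise.
    # temperature > 1 flattens the distribution, so low-probability
    # tokens (the "gibberish" candidates) get picked far more often.
    scaled = {tok: p ** (1.0 / temperature) for tok, p in probs.items()}
    z = sum(scaled.values())
    r = rng.random() * z
    acc = 0.0
    for tok, w in scaled.items():
        acc += w
        if r <= acc:
            return tok
    return tok  # guard against floating-point rounding

# Made-up next-token distribution for illustration.
probs = {"plausible_formula": 0.90, "unlikely_formula": 0.09, "gibberish": 0.01}

rng = random.Random(0)
for t in (0.5, 1.0, 2.0):
    counts = {tok: 0 for tok in probs}
    for _ in range(10_000):
        counts[sample_with_temperature(probs, t, rng)] += 1
    print(f"temperature={t}: {counts}")
```

At temperature 2.0 the 1% “gibberish” token ends up sampled roughly 7% of the time, which is the failure mode described above.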

      Interesting shit, clickbait headline lol.

      Note: this is a massive oversimplification of LLMs, the kind that leads people to think they’re “basically just a fancy autocomplete”, and I don’t agree. It could be argued that LLMs are a fancy autocomplete in the same way a smartphone is a fancy calculator, but it’s a silly argument.

      • fishos@lemmy.world · 1 year ago

        Except, to my understanding, it wasn’t an LLM. It was a protein-mapping model or something similar. And instead of telling it “run iterations and select the candidates that are beneficial based on XYZ”, they said “run iterations and select the non-beneficial ones based on XYZ”.

        They ran a protein-coding type model and told it to prioritize HARMFUL results over good ones, yielding results that would cause harm.

        Now, yes, those still need to be verified. But it wasn’t just “making things up”. It was using real data to iterate faster than a human would. Very similar to the Folding@HOME program.
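        That sign flip is essentially the whole trick. A hypothetical sketch of the idea (the scoring function below is a random stand-in for the real learned toxicity/bioactivity predictor, and every name here is made up):

```python
import random

def predicted_benefit(candidate, score_table):
    # Stand-in for a learned property predictor; the real system
    # scored properties like toxicity from molecular structure.
    return score_table[candidate]

def search(n_keep, flip_sign=False, seed=0):
    rng = random.Random(seed)
    candidates = [f"molecule_{i}" for i in range(1000)]
    score_table = {c: rng.random() for c in candidates}
    sign = -1.0 if flip_sign else 1.0
    # The selection loop is identical either way; the "guardrail"
    # is just the sign of one number in the objective.
    ranked = sorted(candidates,
                    key=lambda c: sign * predicted_benefit(c, score_table))
    return ranked[-n_keep:]

beneficial = search(5)               # top candidates by predicted benefit
harmful = search(5, flip_sign=True)  # same machinery, inverted objective
print(beneficial)
print(harmful)
```

The point is that nothing about the search machinery changes; flipping one sign turns a drug-discovery loop into the harmful version, which is why the candidates still need lab verification either way.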