Except, to my understanding, it wasn’t an LLM. It was a protein mapping model or something similar. And what they did was, instead of telling it “run iterations and select the things that are beneficial based on XYZ”, they said “run iterations and select based on non-beneficial XYZ”.
They ran a protein-coding-type model and told it to prioritize HARMFUL results over good ones, and it gave them results that would cause harm.
Now, yes, those still need to be verified. But it wasn’t just “making things up”. It was using real data to iterate faster than a human could. Very similar to the Folding@home program.
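The inversion being described is basically a one-line change to the objective. Here's a minimal toy sketch, not the actual model: the scoring function and greedy search loop are hypothetical stand-ins, but they show how the same iteration machinery selects for the opposite outcome when the selection criterion is flipped.

```python
import random

random.seed(0)

def toxicity_score(candidate):
    # Hypothetical stand-in for a learned property predictor.
    # A real system would score molecules; here a number suffices.
    return -(candidate - 7) ** 2  # most "toxic" at candidate == 7

def iterate(score_fn, start=0, steps=200):
    """Greedy search: keep any mutation that improves score_fn."""
    best = start
    for _ in range(steps):
        mutant = best + random.choice([-1, 1])
        if score_fn(mutant) > score_fn(best):
            best = mutant
    return best

# "Beneficial" run: select AGAINST predicted toxicity.
safe = iterate(lambda c: -toxicity_score(c))

# Flipped run: identical loop, identical predictor,
# only the sign of the selection criterion changed.
harmful = iterate(toxicity_score)
```

The point is that the search loop and the underlying predictor are unchanged; only which direction counts as "better" was inverted, which is why the flipped run homes in on exactly the candidates the normal run avoids.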