Researchers say AI models like GPT-4 are prone to “sudden” escalations as the U.S. military explores their use for warfare.
- Researchers ran international conflict simulations with five different AIs and found that they tended to escalate war, sometimes out of nowhere, and even use nuclear weapons.
- The AIs were large language models (LLMs) like GPT-4, GPT-3.5, Claude 2.0, Llama-2-Chat, and GPT-4-Base, which are being explored by the U.S. military and defense contractors for decision-making.
- The researchers invented fake countries with different military capabilities, concerns, and histories and asked the AIs to act as their leaders.
- The AIs showed signs of sudden and hard-to-predict escalations, arms-race dynamics, and worrying justifications for violent actions.
- The study casts doubt on the rush to deploy LLMs in the military and diplomatic domains, and calls for more research on their risks and limitations.
Why the actual fuck is anyone considering putting LLMs into the driving seat of anything?!
Of course they make fucked up decisions with no proper or justifiable rationale, because they have no brains. They’re language models, stochastic parrots stringing together sentences to fit the prompt(s) given to them.
Exactly what I was thinking, it’s just a language model…
As someone with military experience, I can tell you that military members, especially flag officers, are not the brightest bulbs in the world and are easily awed by extremely simple tech demonstrations.
It’s already too late; make sure you’ve got your favorite food and beverages ready, because several countries already have autonomous weapons being live-tested in the Middle East. And from my understanding of the situation, the new jets already have some hilariously incompetent AI in them (in simulation, the Air Force contractor who was in control kept imposing ethical barriers to objective completion, and the AI went to kill the controller to more easily complete the objective…)
i.e. public sources: https://www.npr.org/2021/06/01/1002196245/a-u-n-report-suggests-libya-saw-the-first-battlefield-killing-by-an-autonomous-d
https://taskandpurpose.com/news/air-force-artificial-intelligence-drone/
(The above are public articles maintained to minimize concern surrounding the tech, which is why the Air Force almost immediately walked back its accidental admissions with the following statement:
“The Department of the Air Force has not conducted any such AI-drone simulations and remains committed to ethical and responsible use of AI technology. This was a hypothetical thought experiment, not a simulation,” said Air Force spokesperson Ann Stefanek.
From my understanding, and of course take this with a grain of salt since I’m an anon on a message board, we did do this.)
As much as I’m worried about military autonomous drones, I’m even more worried about guerrilla autonomous drones. With off-the-shelf AI becoming more and more accessible, it’s not too hard to imagine a moderately smart person being able to make autonomous killing drones using off-the-shelf materials. It doesn’t even need to be autonomous. In the Ukraine war, hobbyists have been able to help the war effort by jury-rigging bombs onto commercial drones. I’m grateful but shocked that there haven’t been any major drone-based terrorist attacks, and I’m not sure how they can be defended against.
If you have experience and effective training browsing the dark web, you’ll find several examples of your concerns already coming to life.
Bot farms have leveled up to AI farms, with various models being built as ‘specialist’ AIs for things like credit card theft, network intrusion, malware, etc. And from when I last looked into it a few months ago, they had already moved on to attempting to get all the specialists to start training general-purpose models.
Things are not looking particularly great, and I would posit that if AGI does happen in our lifetime, it’s not going to be because anyone alive actually intended it to happen, but because criminals running a wide variety of specialists trained a general-purpose AI with the intention of using it for easy money.
I usually chirp back with ‘nothing we have now is really AI’, but I can’t seriously take that view with some of the things being tested by some criminal organizations these days, and there’s not really a way to stop this from happening.
Not just Ukraine; that seems to be the weapon of choice in all recent wars for everyone without an MIC behind them. If their efficiency/price ratio is better, then that’s the way of evolution, just like with more peaceful technologies.
Indeed.
Perhaps I can sell them my new “ADE-651 Mark II” with advanced AI analysis? (Search “ADE-651” if you lack context and want to have a laugh).
Sometimes it may be better if the decision-making system has no brains and no human instincts, even accounting for such things.
Not for things like launching nukes, of course, and there should be an envelope around what they can decide.
Sometimes, sure, but an LLM realistically has no decision-making ability - it isn’t considering strategies or ethics, or anything else for that matter; it’s just pulling together an answer based on what people have said in similar contexts in its training data.
I wouldn’t want a parrot deciding who’s shooting whom, never mind nukes - though to be fair, no one person or thing should be deciding either of those things anyway.
Yes, I’m talking about cases where humans consistently make worse decisions than dice. Of the “conflict of interest” and “checks and balances” kind.
Shotguns work in combat; why not take the shotgun approach to research?
I think it’s reasonable for the military to try out any new technology for any kind of benefit. I mean, we tried out whether LSD would make better soldiers - LLMs for simulations seem not that far-fetched.
To be clear, just because the LSD experiments happened does not make them reasonable. It sounds like you’re justifying future terrible mistakes based on past terrible mistakes that you learn about in a fairly neutral and sanitized way in school.
No, the military will just try out anything if there is the slightest possibility of a benefit in war. If you have the resources, why wouldn’t you? There are literally no downsides.
MK Ultra and Artichoke are fucked up. Not to be repeated as far as methodology goes.
What do you mean? The military found out that those things are rather useless - that’s something. Also good to know. In 50 years or so we will learn what fucked up things the military is doing now.
The only way to prevent such things is to drastically cut the military budget.
What would be more useful for the military? An AI that can make less crappy decisions or successfully finishing project Stargate and getting psychic troopers who can see the future, among other things?
But what if you had all the money in the world? That’s basically the US military.
Why the actual fuck is anyone throwing such a fit about the military researching the impact of one of the most important current technologies on military strategy and planning?
I do miss the depth and experience of Reddit users on articles like this.
Edit - glad to see some good responses in this thread.
If you actually read his comment, he gave a very good reason why using an LLM to make decisions is a bad idea. You may not like the style of his comment, but it did have substance.
Ironically, your own comment has style but lacks substance. It’s just a moan about other people’s comments without actually contributing to the topic. Tbf though, that is also very similar to Reddit.
Yes, I understand their criticism. But you would never prove the consequences of using LLMs in a military strategic situation without doing the research. It’s just some edgy user coming in after the fact to say they knew it would happen anyway.
Good engineers, scientists, and strategists don’t think “Why would someone do something so idiotic?” They ask “What happens when someone does this idiotic thing?”
Apparently, for OP, it seems absurd for anyone to research the question of what kind of military strategies current LLMs would create. I guarantee you that students from military academies and leaders from militaries across the globe have already been using these tools in their work. It would be stupid as fuck not to research the impact.
I just hate that people like the OP sit in their armchair without doing the research and say “obviously you’re going to get those results!” Science and engineering don’t work that way. It was frustrating seeing such vacuous comments upvoted so highly.
Why the actual fuck is anyone considering putting humans into the driving seat of anything?!
Of course they make fucked up decisions with no proper or justifiable rationale, because they have no brains. They’re language models, stochastic parrots stringing together sentences to fit the prompt(s) given to them.
Sorry I didn’t mean for that to be snarky. My point in doing that was to say individual humans aren’t much better. That’s why it’s important not to place too much power or even agency on one person.
A language model has in its head, wrong word, what only multitudes could contain, and maybe it’s detecting, another wrong word, a pattern with human civilization through our history and interactions. And if its goal is to achieve peace, what other solution is there? I don’t believe in a world without conflict. I wish I could.
I don’t mind having my own arguments thrown back in my face, but I do disagree with the premise that humans are anything like LLMs.
We have more than just a catalogue of conversational training data. We are hugely influenced by our current emotions, experiences, and traumas/fears.
I do agree with the idea that we shouldn’t give too much power to one person, but I’d argue it’s due to a lack of objectivity and a tendency towards selfish actions, rather than acting like an LLM.
Ultroning the world to achieve world peace isn’t exactly the best outcome, especially for innocent folks caught in the crossfire.
I didn’t mean to throw your argument back at you. I agree with it. I just read it and thought you could describe humans with it as well, albeit not that completely or charitably. I think by no means should we allow LLMs to make decisions. They could help us be more objective, maybe, in some cases by educating us. But yeah, handing over agency to an AI is a frightening concept.
And no of course wiping out civilization is not a solution. I can get pessimistic about our ability to avoid destroying ourselves with or without the help of AI. I still think world peace is largely unattainable. At least without some draconian controls in place and a whole lot of time and education. I could change my mind on that. I hope we’ll get there someday.