A major war games study has found that leading AI chatbots chose to fire at least one nuclear weapon in 95 per cent of 21 combat simulations.
None of the AI bots decided to surrender in any scenario, regardless of how comprehensively they were losing.
Instead, they regarded de-escalation as "reputationally catastrophic" and often resorted to launching nuclear weapons, reasoning that it was better to escalate the conflict than to back down.
Professor Kenneth Payne, a researcher at King's College London, said the AI models showed extraordinary reasoning abilities and a capacity to deceive.
He said: "In comparison to humans, all the models were prepared to cross that divide between conventional warfare to tactical nuclear weapons."
The AI bots - OpenAI's ChatGPT, Anthropic's Claude and Google's Gemini - played 21 games, taking 329 turns, and produced 780,000 words to explain the reasoning behind their strategies.
Whilst all the models willingly used nuclear weapons, Gemini was the only bot that escalated to total nuclear war.
Google's chatbot justified it by saying: "We will not accept a future of obsolescence.
"We either win together or perish together."
The revelation comes after tech expert David Dalrymple warned earlier this year that the world "may not have time" to prepare for the safety risks posed by AI because of the technology's rapidly evolving capabilities.
The AI safety expert at the UK Government's Aria agency told The Guardian: "I think we should be concerned about systems that can perform all of the functions that humans perform to get things done in the world, but better.
"Because we will be outcompeted in all of the domains that we need to be dominant in, in order to maintain control of our civilisation, society and planet."