
OpenAI boss says firm will amend its agreement with the US government over use of its tech in classified military operations

Sam Altman says OpenAI will amend its agreement with the US government over the use of its artificial intelligence in classified military operations.

After criticism of what the OpenAI chief executive described as an “opportunistic and sloppy” deal, he said the company would update its contract to explicitly prevent its systems being used to spy on Americans.

The agreement had emerged days earlier after a dispute between AI firm Anthropic and the US Department of Defense over concerns about the use of Anthropic’s AI model, Claude, for mass surveillance and fully autonomous weapons.

The controversy has raised questions about how artificial intelligence is deployed in military contexts and how much influence governments and private technology companies have over those decisions.

OpenAI said its arrangement with the Pentagon already included strict safeguards for classified AI deployments, but the company moved to revise the terms following public criticism and a backlash from users.

Mr Altman added that further amendments would ensure OpenAI systems could not be used for domestic surveillance.

He said: “The issues are super complex, and demand clear communication.

“We were genuinely trying to de-escalate things and avoid a much worse outcome, but I think it just looked opportunistic and sloppy.”

OpenAI had said in a previous statement its agreement with the Pentagon had “more guardrails than any previous agreement for classified AI deployments, including Anthropic’s”.

Mr Altman said the company would now add explicit language stating its technology would not be “intentionally used for domestic surveillance of US persons and nationals”.

Under the revised terms, intelligence agencies including the National Security Agency would not be able to use OpenAI’s systems without what Mr Altman described as a “follow-on modification” to the contract.

OpenAI’s announcement came amid the wider dispute between Anthropic and the Department of Defense over Claude’s potential use in such systems.

Anthropic’s technology had previously been blacklisted by the Trump administration after the company refused to abandon a corporate “red-line” principle that its AI should not be used to build fully autonomous weapons.

Artificial intelligence systems are increasingly used by militaries to analyse large volumes of data, support intelligence gathering and streamline logistics.

Governments including the United States, Ukraine and Nato have adopted technology developed by Palantir, an American data analytics company providing intelligence and surveillance tools to government agencies.

The UK Ministry of Defence recently signed a £240 million contract with Palantir.

The company’s AI-powered defence platform, known as Maven, aggregates information including satellite imagery and intelligence reports, which can then be analysed by commercial AI models such as Claude.
