Microsoft acknowledges its AI-powered workplace assistant mistakenly accessed and summarised confidential emails

Microsoft has acknowledged its AI-powered workplace assistant mistakenly accessed and summarised some users’ confidential emails, saying a configuration update has now been deployed worldwide to address the issue.

The technology company said the error affected Microsoft 365 Copilot Chat, a generative AI tool integrated into products including Microsoft Outlook and Microsoft Teams.

Copilot Chat is marketed as a secure way for enterprise customers using Microsoft 365 to summarise emails and answer questions.

However, the company confirmed the tool had surfaced content from messages labelled confidential that were stored in users’ Draft and Sent Items folders.

A Microsoft spokesperson told the BBC: “We identified and addressed an issue where Microsoft 365 Copilot Chat could return content from emails labelled confidential authored by a user and stored within their Draft and Sent Items in Outlook desktop.”

The spokesperson added: “While our access controls and data protection policies remained intact, this behaviour did not meet our intended Copilot experience, which is designed to exclude protected content from Copilot access.”

They also said: “A configuration update has been deployed worldwide for enterprise customers.”

Microsoft stated it “did not provide anyone access to information they weren’t already authorised to see”.

The issue was first reported by the technology news outlet Bleeping Computer, which said it had seen a service alert confirming the problem.

According to Bleeping Computer, a Microsoft notice said “users’ email messages with a confidential label applied are being incorrectly processed by Microsoft 365 Copilot chat”.

The notice added that a work tab within Copilot Chat had summarised email messages stored in a user’s drafts and sent folders, even when a sensitivity label and data loss prevention policy were configured.

Microsoft is understood to have first become aware of the issue in January.

The notice was also shared on a support dashboard for NHS workers in England, with the root cause attributed to a “code issue”.

The NHS support site suggested it had been affected, but the NHS told BBC News that draft or sent emails processed by Copilot Chat would remain with their creators and that patient information had not been exposed.

Nader Henein, a data protection and AI governance analyst at Gartner, told the BBC: “This sort of fumble is unavoidable”, given the pace at which “new and novel AI capabilities” are released.

He added: “Under normal circumstances, organisations would simply switch off the feature and wait till governance caught up.”

Henein also said: “Unfortunately the amount of pressure caused by the torrent of unsubstantiated AI hype makes that near-impossible.”

Professor Alan Woodward, cyber-security expert at the University of Surrey, told BBC News: “There will inevitably be bugs in these tools, not least as they advance at break-neck speed, so even though data leakage may not be intentional it will happen.”
