The tech giant intends to flag pictures that have been created or edited with artificial intelligence, in an effort to increase transparency and trust.
It plans to do so later this year via its 'About this image' tool, which gives users metadata about how an image was created, including whether AI was involved. The tool can be accessed through Google Search, Google Lens, and the Circle to Search function on Android.
Laurie Richardson, Google's vice president of trust and safety, wrote in a blog post: "We believe it's crucial that people have access to this information. As such, we are investing heavily in tools and innovative solutions, like SynthID, to provide it."
SynthID, developed by Google DeepMind, embeds an invisible watermark in AI-generated content, allowing people to identify how it was made.
The change, which has drawn criticism for not being prominent enough, comes after Google joined the Coalition for Content Provenance and Authenticity as a steering committee member earlier this year.
Through that work, Google has contributed to the coalition's latest Content Credentials standard.
The move also comes against the backdrop of the US presidential election.
The Republican nominee, former President Donald Trump, has come under fire for sharing AI-generated images appearing to imply that pop superstar Taylor Swift had endorsed him.
The 34-year-old singer-songwriter condemned the images in her public statement backing the Democratic nominee, Vice-President Kamala Harris.