The video-sharing giant is attempting to combat "novel forms of abuse" amid the "continuously increasing" role of artificial intelligence, and is working to "understand" potential threats by letting users know when a video has not been made by a human.
In a blog post, YouTube said: "YouTube has always used a combination of people and machine learning technologies to enforce our Community Guidelines, with more than 20,000 reviewers across Google operating around the world. In our systems, AI classifiers help detect potentially violative content at scale, and reviewers work to confirm whether content has actually crossed policy lines. AI is continuously increasing both the speed and accuracy of our content moderation systems.
"One clear area of impact has been in identifying novel forms of abuse. When new threats emerge, our systems have relatively little context to understand and identify them at scale. But generative AI helps us rapidly expand the set of information our AI classifiers are trained on, meaning we’re able to identify and catch this content much more quickly. Improved speed and accuracy of our systems also allow us to reduce the amount of harmful content human reviewers are exposed to."
The company says it is now "thinking carefully" about how to manage the site's future amid the rise of AI.
The blog added: "As we continue to develop new AI tools for creators, our approach remains consistent with how we’ve tackled some of our biggest responsibility challenges: we believe in taking the time to get things right, rather than striving to be first.
"We’re thinking carefully about how we can build upon years of investment into the teams and technology capable of moderating content at our scale. This includes significant, ongoing work to develop guardrails that will prevent our AI tools from generating the type of content that doesn’t belong on YouTube."