The social media giant first revealed in March this year that it would start to fact check photos and videos as part of a broader plan to fight election interference in some countries.
And on Wednesday (12.09.18) Facebook announced it has developed machine learning tools to help identify false content, saying the technology would be rolled out to third-party fact-checking partners in 17 countries across the world.
The company hopes that the technology will be used to hinder election meddling as the US midterms approach.
When the news was first revealed in March, product manager Samidh Chakrabarti told reporters that Facebook felt like it was "going to be in a really good place for the 2018 midterms."
According to a blog post by product manager Antonia Woodford, Facebook's system uses "engagement signals", such as user feedback, to flag potential misinformation for fact-checkers, who then evaluate it and determine the proper action.
These fact-checkers then perform tasks such as reverse-image searching and analysing image metadata in order to determine whether the images and videos are truthful.
Woodford says the fact-checkers' ratings will then be used to improve Facebook's machine learning system, making such posts easier to identify in the future.
It has been reported that Facebook views misinformation in posts as falling into one of three categories: manipulated or fabricated; out of context; or text or audio claims.
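The pipeline described above, where engagement signals surface suspect posts, fact-checkers rate them, and the ratings feed back as training data, can be illustrated with a minimal sketch. Facebook has not published its implementation, so everything here (the report threshold, field names, and the `flag_for_review` and `record_rating` helpers) is hypothetical; only the three category labels come from the article.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Post:
    post_id: int
    reports: int   # user "false news" reports: one hypothetical engagement signal
    shares: int

# Hypothetical cutoff; the real system would weigh many signals, not one count.
REPORT_THRESHOLD = 5

def flag_for_review(posts: List[Post]) -> List[Post]:
    """Surface posts whose engagement signals suggest possible misinformation."""
    return [p for p in posts if p.reports >= REPORT_THRESHOLD]

# The three categories reported for image/video misinformation.
CATEGORIES = ("manipulated or fabricated", "out of context", "text or audio claim")

def record_rating(training_set: List[Tuple[int, str, bool]],
                  post: Post, is_false: bool, category: str) -> None:
    """A fact-checker's verdict becomes a labelled example for retraining."""
    assert category in CATEGORIES
    training_set.append((post.post_id, category, is_false))
```

In this toy version, heavily reported posts are queued for human review, and each verdict is stored as a labelled example, which is the feedback loop Woodford describes for improving the model over time.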