Communications minister Solly Malatsi pulled the draft policy after finding that six of its 67 academic citations referenced articles that do not exist.
Malatsi wrote on X: "The most plausible explanation is that AI-generated citations were included without proper verification. This should not have happened.
"This failure is not a mere technical issue but has compromised the integrity and credibility of the draft policy."
The draft policy was released for public comment and aimed to position South Africa as a leader in AI innovation, while attempting to address the ethical, social and economic challenges posed by the technology.
It also laid out plans to establish new institutions in the country to oversee AI use, such as a national AI commission, an AI ethics board, and an AI regulatory authority.
The issue was discovered when South Africa's News24 found that at least six of the document's academic citations didn't actually exist, although the journals they referenced were real.
Editors of the journals independently confirmed that the cited articles were bogus.
The draft policy is now expected to be revised before being reissued for public comment.
Malatsi said that there will be consequences for those responsible for drafting the policy.
The minister wrote: "This unacceptable lapse proves why vigilant human oversight over the use of artificial intelligence is critical. It's a lesson we take with humility."
The episode highlights the growing issue of academics and administrators using generative AI for researching and drafting papers.
A study found that more than 2.5 per cent of academic papers published in 2025 contained at least one fictitious citation, up from just 0.3 per cent in 2024. That amounts to more than 110,000 papers published in 2025 featuring invalid references that were "hallucinated" by AI.
Hallucinations are fabricated outputs that AI models produce when their training data is thin in a particular area. Models such as OpenAI's ChatGPT and Google's Gemini are designed to predict the next likely word in a sentence, not to verify facts. When information on a topic is lacking, the model fills the gap with plausible-sounding but incorrect text.