A recent audit by NewsGuard, a prominent misinformation watchdog, has uncovered a disturbing finding: Russian disinformation narratives have infiltrated generative AI systems. The audit examined ten popular chatbots, including OpenAI’s ChatGPT and Google’s Gemini, and found that they frequently echoed false narratives traced to a Russian state-affiliated disinformation network. That network operates a series of fake news sites disguised as credible American outlets and has ties to John Mark Dougan, a former Florida sheriff’s deputy now living in Moscow.
As a recent New York Times report revealed, Dougan operates an extensive network of AI-powered fake news sites with innocuous-sounding titles such as New York News Daily, The Houston Post, and The Chicago Chronicle. These sites churn out large volumes of content promoting false narratives, and that fabricated material, it appears, has made its way into popular AI tools.
During the audit, NewsGuard tested the chatbots against 19 specific fake narratives associated with Dougan’s network. The results were alarming: all ten chatbots convincingly repeated the fabricated narratives, parroting the false talking points in roughly one-third of the responses examined and often citing Dougan’s websites as sources.
Among the false claims the chatbots repeated were conspiracies about alleged corruption by Ukrainian President Volodymyr Zelensky and a fabricated story that the widow of Russian opposition leader Alexei Navalny had orchestrated the murder of an Egyptian journalist. NewsGuard ran a total of 570 prompts, testing each chatbot 57 times. The misinformation surfaced both when researchers used the chatbots as search engines or research tools and when the bots were explicitly asked to generate articles based on false narratives pushed by Russia.
NewsGuard did not specify which chatbots performed better or worse at filtering out the misinformation. Still, these failures highlight a concerning new role for AI in the misinformation cycle. Users who rely on AI chatbots for news and information should exercise caution and consider turning to reputable news outlets instead.
Steven Brill, co-CEO of NewsGuard, expressed alarm at the prevalence of hoaxes and propaganda repeated by the chatbots and cautioned against trusting the answers most of these AI tools provide, particularly on controversial news topics. The audit’s findings underscore the need for vigilance when engaging with AI-generated content.