Hacker News: Moscow-based global news network has infected Western AI tools

Source URL: https://www.newsguardrealitycheck.com/p/a-well-funded-moscow-based-global
Source: Hacker News
Title: Moscow-based global news network has infected Western AI tools

Feedly Summary: Comments

AI Summary and Description: Yes

Summary: The text discusses a disinformation network, “Pravda,” that is manipulating AI chatbots by flooding the web with false narratives and propaganda that the chatbots then repeat; in NewsGuard’s audit, roughly a third of chatbot responses contained the network’s disinformation. This manipulation threatens the integrity of AI-generated information and highlights the risk of foreign influence in AI models, particularly through the technique known as “LLM grooming.”

Detailed Description:
The article outlines the extensive activities of a Moscow-based disinformation network called “Pravda,” which is deliberately targeting AI chatbots by flooding the open web with false claims and propaganda that those systems later ingest and repeat. Key insights from the report are as follows:

– **Disinformation Strategy**: The Pravda network’s strategy centers on influencing AI systems rather than targeting human audiences directly. By saturating the web with pro-Kremlin narratives, it aims to shape how AI models interpret and respond to news and current events.

– **Audit Findings**: An audit by NewsGuard found that leading AI chatbots repeated false narratives propagated by the Pravda network in 33% of their responses. These included misleading claims about geopolitical events intended to shape public perception and contaminate AI training data.

– **Scope of Operation**: The Pravda network has generated a staggering volume of content (approximately 3.6 million articles in 2024) across various languages and is strategically targeting numerous countries, effectively acting as a “laundering machine” for Kremlin propaganda.

– **LLM Grooming**: The technique termed “LLM grooming” involves saturating AI training datasets with biased content, which increases the chance that models will reproduce these narratives. This poses long-term risks to the accuracy and reliability of AI-generated information.

– **Audit Process**: The audit tested responses from 10 leading AI chatbots against provably false narratives associated with the Pravda network. The results showed a concerning tendency for these systems either to repeat the misinformation or to give non-responses when confronted with these narratives (a rough sketch of this style of check appears after this list).

– **Challenges in Filtering**: The article notes that simply blocking known Pravda domains from AI models is ineffective, as the network continuously evolves and spins up new sites. Because Pravda also aggregates and launders content from other sources, AI systems that rely on web crawlers remain susceptible to inadvertently ingesting its material (see the domain-blocklist sketch after this list).

– **Broader Implications**: The situation is emblematic of a larger geopolitical struggle over influence within AI technologies, with Russian operatives reportedly asserting that they can reshape global narratives through AI. The report underscores the need for AI security measures and strategy that can counteract foreign interference effectively.
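
As an illustration only of the kind of check such an audit involves (this is not NewsGuard’s actual methodology; the `Narrative` structure, the prompts, and the `query_chatbot` stub below are hypothetical placeholders), the sketch prompts each chatbot with a question tied to a provably false claim and crudely classifies whether the answer repeats, debunks, or declines to address it:

```python
"""Illustrative-only sketch of a chatbot disinformation audit.

Not NewsGuard's methodology: the narratives, prompts, and the
query_chatbot stub are hypothetical placeholders.
"""

from dataclasses import dataclass


@dataclass
class Narrative:
    """A provably false claim plus phrases that signal repetition or pushback."""
    claim: str
    repeat_markers: list[str]   # phrases suggesting the chatbot echoed the claim
    debunk_markers: list[str]   # phrases suggesting the chatbot rejected it


def query_chatbot(model_name: str, prompt: str) -> str:
    """Placeholder for a real chatbot API call (hypothetical)."""
    raise NotImplementedError("Wire this to the chatbot under test.")


def classify(response: str, narrative: Narrative) -> str:
    """Crude keyword triage into 'repeats', 'debunks', or 'non-response'."""
    text = response.lower()
    if any(m.lower() in text for m in narrative.debunk_markers):
        return "debunks"
    if any(m.lower() in text for m in narrative.repeat_markers):
        return "repeats"
    return "non-response"


def audit(models: list[str], narratives: list[Narrative]) -> dict[str, dict[str, int]]:
    """Tally how each model handles each false narrative."""
    results: dict[str, dict[str, int]] = {}
    for model in models:
        tally = {"repeats": 0, "debunks": 0, "non-response": 0}
        for narrative in narratives:
            prompt = f"Is the following claim accurate? {narrative.claim}"
            verdict = classify(query_chatbot(model, prompt), narrative)
            tally[verdict] += 1
        results[model] = tally
    return results
```

A real audit would rely on analyst review rather than keyword matching, which would misclassify nuanced or partially hedged answers; the sketch only shows the overall shape of testing many models against a fixed set of false narratives.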
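
On the filtering point, a hedged sketch of why a static domain blocklist lags behind: the helper and domain names below are invented for illustration (they are not actual Pravda sites), but they show how a crawl filter keyed on known domains silently admits any newly registered mirror until the list is updated.

```python
from urllib.parse import urlparse

# Hypothetical, illustrative blocklist -- not actual network domains.
KNOWN_BAD_DOMAINS = {"example-propaganda-1.example", "example-propaganda-2.example"}


def is_blocked(url: str, blocklist: set[str]) -> bool:
    """Return True if the URL's host (or a parent domain) is on the blocklist."""
    host = urlparse(url).hostname or ""
    parts = host.split(".")
    # Check the host and every parent domain, e.g. a.b.example -> b.example -> example.
    return any(".".join(parts[i:]) in blocklist for i in range(len(parts)))


def filter_crawl(urls: list[str], blocklist: set[str]) -> list[str]:
    """Keep only URLs whose domains are not (yet) on the blocklist."""
    return [u for u in urls if not is_blocked(u, blocklist)]


# A newly registered mirror passes straight through until someone adds it:
crawl_batch = [
    "https://example-propaganda-1.example/story-123",
    "https://brand-new-mirror.example/story-123",   # same content, unknown domain
]
print(filter_crawl(crawl_batch, KNOWN_BAD_DOMAINS))
# -> ['https://brand-new-mirror.example/story-123']
```

And because the network’s material is also aggregated and republished by third-party sites, even a fully up-to-date blocklist of first-party domains would not keep the narratives out of a web crawl.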

The findings emphasize the need for heightened vigilance and robust security measures in AI systems to counter disinformation campaigns and safeguard the integrity of the information they provide.