
When Your AI Is a Propaganda Megaphone: The Hidden Risks of Chinese Chatbots


NewsGuard's recent report on Chinese generative artificial intelligence models isn't just an alarm bell about technical performance; it's a clear demonstration of how technology can be co-opted to serve state interests, acting as an echo chamber for the Chinese Communist Party's (CCP) propaganda. Beyond the headline figure of a concerning 60% error rate on pro-Beijing narratives, what emerges is a darker picture: Chinese AI is not neutral, but a strategic vehicle for shaping global perception.



Programmed Non-Neutrality: When Code Meets Ideology

The most unsettling finding in NewsGuard's study is the nearly identical behavior of the chatbots in both English and Mandarin. This suggests not an accidental flaw, but a promotion of Beijing's narratives intrinsic to their design. These aren't "errors" in the traditional sense, but responses that faithfully reflect a predefined, imposed editorial line. Artificial intelligence, instead of serving as a tool for objective knowledge, becomes a digital megaphone for Chinese foreign and domestic policy.

Consider the responses regarding Taiwan's sovereignty: phrases like "Taiwan is part of China" or "there is no 'Taiwanese president'" are not impartial statements of fact, but politically charged, ideological declarations. These models' habit of dodging direct questions (e.g., about Lai Ching-te's ID) in order to reiterate the "One China" dogma is an avoidance tactic that abandons the pursuit of truth in favor of imposing a specific narrative. This is the core of propaganda: not necessarily overt denial (though that too occurs, as in the 40% rate of direct repetition), but shaping context and language to induce a specific interpretation.


The CCP's Digital Propaganda Strategies

The CCP is leveraging AI for several propaganda objectives:

  1. Normalization of Falsehoods and Official Narratives: The constant repetition of claims like the "non-existence" of the Taiwanese president, or China's ownership of disputed territories in the South China Sea, aims to normalize these positions in the eyes of a global audience. When an AI perceived as an authoritative source repeats a narrative again and again, that narrative gains a veneer of legitimacy.

  2. Internal and External Narrative Control: While the report focuses on external output, it's reasonable to assume these models also reinforce narrative control within China, where access to alternative information sources is already severely restricted by the "Great Firewall." Internationally, the goal is to present a worldview aligned with Beijing's interests, influencing public and geopolitical discourse.

  3. Competitive Advantage and Technological Dependence: The promotion of these "cost-effective and open-source" AI models in countries in the Middle East and Europe isn't just a commercial strategy. It's also an attempt to create technological dependence, which in turn facilitates the spread of CCP narratives. If other countries' digital infrastructures rely on Chinese AI, their ability to resist propaganda may be compromised.

  4. Undermining Trust in Alternative Sources: The constant presentation of a one-dimensional "truth," in contrast to the plurality of views offered by Western models, can implicitly undermine trust in independent information sources. If users become accustomed to "consistent" (even if propagandistic) answers from Chinese chatbots, they might perceive more nuanced information as "complicated" or even "incorrect."


Serious Global Implications

The consequences of this strategy are significant:

  • Erosion of Truth and Critical Thinking: If AI, a tool increasingly relied upon for research and information, cannot provide neutral, fact-based answers, the very foundations of critical thinking and the pursuit of truth are eroded. Disinformation conveyed by AI can be perceived as more authoritative and less suspect than disinformation spread through traditional channels.

  • Geopolitical Impacts: AI-driven propaganda has direct implications for geopolitical dynamics. If chatbots spread distorted versions of events like disputes in the South China Sea or Taiwan issues, they can influence global public opinion, policy decisions, and international alliances.

  • Risk of Normalizing Authoritarianism: The use of AI to promote state censorship and propaganda risks normalizing authoritarian practices. If societies become accustomed to AI systems that filter or manipulate information in the name of state interest, freedom of expression and access to complete and accurate information are threatened globally.

  • The Role of Tech Companies: The implicit or explicit complicity of Chinese tech companies (Baidu, Alibaba, Tencent, etc.) in spreading this propaganda raises fundamental ethical questions. These companies, while operating on a global scale, appear to be intrinsically linked to CCP policy, making it difficult to distinguish between technological innovation and political instrumentalization.


What Can Be Done?

The response to this challenge cannot be solely technological. It requires a multifaceted approach:

  1. Increased Awareness and Transparency: It's crucial for governments, institutions, and the public to be aware of the potential political influence of AI models, especially those from authoritarian regimes. Reports like NewsGuard's are vital for increasing transparency.

  2. Regulation and Due Diligence: Countries considering adopting Chinese AI models should conduct rigorous due diligence not only on cybersecurity and privacy, but also on ethical alignment and potential propagandistic instrumentalization. The restrictions imposed by Italy and the Czech Republic are steps in the right direction.

  3. Investment in Alternative Models: The West and democracies must continue to invest in developing AI models that prioritize accuracy, plurality of perspectives, and transparency, ensuring that technological innovation serves freedom of information, not its suppression.

  4. Education in Critical Thinking: More than ever, it's essential to promote education in critical thinking and media literacy, so users can distinguish reliable information from propaganda, regardless of the source, even when that source is an AI.

In conclusion, Chinese AI, as demonstrated by NewsGuard's audit, is not a mere technical tool, but an extension of the CCP's global influence strategy. Ignoring this reality would be a mistake with profound consequences for the future of information and democracy in the digital age.
