US research group says AI anchors are bolstering Beijing’s propaganda and badmouthing Washington
G Iuvinale
The New York-based research firm Graphika said in its report ‘Deepfake It Till You Make It’ that Spamouflage, a China state-aligned influence operation, has been using AI-generated fictitious people since late 2022 to promote China’s global role and spread disinformation against the United States.
AI-generated people acting as ‘news anchors’ in Wolf News videos. Photo: Graphika.
In late 2022, Graphika observed limited instances of Spamouflage, a pro-Chinese influence operation (IO), promoting content that included video footage of fictitious people almost certainly created using artificial intelligence techniques.
While a range of IO actors increasingly use AI-generated images or manipulated media in their campaigns, this was the first time we observed a state-aligned operation promoting video footage of AI-generated fictitious people.
The AI-generated footage was almost certainly produced using an “AI video creation platform” operated by a commercial company in the United Kingdom. The company offers its services for customers to create marketing or training videos and says “political [...] content is not tolerated or approved.”
Despite featuring lifelike AI-generated avatars, the Spamouflage videos we reviewed were low-quality and spammy in nature. Additionally, none of the identified Spamouflage videos received more than 300 views, reflecting this actor’s long-standing difficulty in producing convincing political content that generates authentic online engagement.
We believe the use of commercially available AI products will allow IO actors to create increasingly high-quality deceptive content at greater scale and speed. In the weeks since we identified the activity described in this report, we have seen other actors move quickly to adopt the same tactics. Most recently, this involved unidentified actors using the same AI tools to create videos targeting online conversations in Burkina Faso.