Microsoft warns of AI use by China to disrupt polls in India, US, South Korea
Though India has so far been among the least targeted areas, the report says, China’s alleged misuse of AI may become more potent over time
Microsoft has warned that China will use content generated by artificial intelligence (AI) to influence elections in India, the US, and South Korea and swing the outcomes in its favour. The warning is based on the alleged similar use of AI by China during the Taiwan presidential election in January 2024.
Election year
The Microsoft Threat Intelligence report, titled “Same targets, new playbooks: East Asia threat actors employ unique methods”, says, “…as populations in India, South Korea, and the United States head to the polls, we are likely to see Chinese cyber and influence actors, and to some extent North Korean cyber actors, work toward targeting these elections.”
The year 2024 is extremely significant, as about half the world’s population will vote to choose their next governments during these 12 months. At least 64 countries (and the European Union) are scheduled to hold national elections this year, accounting for 49 per cent of the world’s population, a record number.
“With major elections taking place around the world this year, particularly in India, South Korea and the United States, we assess that China will, at a minimum, create and amplify AI-generated content to benefit its interests,” Microsoft said in a statement.
The targets
Though India has so far been among the least targeted areas, the report says, the threat has existed since at least last year. Microsoft has warned that China’s alleged misuse of AI may become more potent over time.
According to Microsoft, “Chinese cyber actors broadly selected three target areas over the last seven months” — South Pacific Islands, regional adversaries in the South China Sea region, and the US defence industrial base.
For instance, Flax Typhoon, a Chinese cyber actor that reportedly targeted entities related to US-Philippines military exercises, also found targets in India, the Philippines, Hong Kong, and the US in late 2023. “This actor also frequently attacks the telecommunications sector, often leading to many downstream effects,” says the report.
Storm-1376
According to Microsoft, “the most prolific of the actors using AI content is Storm-1376—Microsoft’s designation for the Chinese Communist Party (CCP)-linked actor commonly known as ‘Spamouflage’ or ‘Dragonbridge’.” Storm-1376 was particularly at work during the Taiwan presidential polls, the report says — “the first time that Microsoft Threat Intelligence has witnessed a nation-state actor using AI content in attempts to influence a foreign election”.
Storm-1376 reportedly posted fake audio clips — suspected to have been generated by AI — of Foxconn owner Terry Gou endorsing another presidential poll candidate on election day. Gou was an independent candidate in the polls and had bowed out of the contest in November 2023.
When it was confirmed that Gou had made no such statement, YouTube quickly pulled the audio down before a large number of users could view it. Before the clips emerged, a fake letter, purportedly from Gou, had circulated online endorsing the same candidate. Gou did not endorse any candidate and threatened to take legal action against such misinformation.
India has also been a victim of Storm-1376, which reportedly posted videos in English and Mandarin, using an AI-generated news anchor, to allege that the US and India were behind the unrest in Myanmar.
Audio, video, memes
From amplifying select negative news against a target nation to spreading outright falsehoods, Storm-1376 does it all, the report claims. Apart from fake audio and video, much of this is done through memes, it says.
“The influence actors behind these campaigns have shown a willingness to both amplify AI-generated media that benefits their strategic narratives, as well as create their own video, memes, and audio content. Such tactics have been used in campaigns stoking divisions within the United States and exacerbating rifts in the Asia-Pacific region—including Taiwan, Japan, and South Korea,” says the report.
The seven-phase Lok Sabha elections in India are scheduled to begin on April 19, and the results are to be declared on June 4. In February, even before the schedule was announced, officials from ChatGPT-maker OpenAI met Election Commission of India (ECI) members and informed them of the steps being taken to ensure that AI is not misused in the polls. The ECI has also provided guidelines for identifying and responding to misinformation spread through deepfakes and similar means.