Gabriele Iuvinale

Hackers from China, Russia, Iran use AI tools, Microsoft Says



According to Microsoft and OpenAI, hackers are using artificial intelligence (AI) systems like ChatGPT to improve their cyberattacks. In joint research published on February 14, the technology giant and the AI company identified hacker groups linked to Russian military intelligence, Iran’s Revolutionary Guard, and the governments of China and North Korea that are using AI tools to research their targets, refine scripts, and more.



Cybercrime groups and nation-state threat actors are actively exploring AI capabilities to carry out more sophisticated attacks, the research indicates, stressing the importance of strengthening and advancing security measures to combat malicious activities.


According to the research, cybercriminals who use ChatGPT share common tasks in their attacks, such as information gathering and coding assistance. They also use the models’ language capabilities to craft social engineering attacks tailored to workplace and professional contexts, the research adds.


Miguel de Castro, a cybersecurity expert with U.S.-based company CrowdStrike, told Spanish media outlet Expansión on February 20 that each country has its own approach: “China steals information from companies and governments; Russia focuses on geostrategic targets; Iran attacks universities and companies; and North Korea targets financial entities.”


“The use of artificial intelligence such as ChatGPT has become a new weapon in the cyber arsenal of states considered outlaws such as China, Russia, Iran, and North Korea, formalizing state-level crime and posing a serious threat to global security,” Jorge Serrano, a security expert and member of the team of advisors to Peru’s Congressional Intelligence Commission, told Diálogo on March 2.


“The malicious use of artificial intelligence such as ChatGPT has long been a concern for U.S. intelligence agencies.”

Microsoft tracks more than 300 hacking groups, including cybercriminals and nation-state actors. As part of the tech giant’s close partnership with OpenAI, the two companies shared threat information on five hacking groups tied to China, Russia, Iran, and North Korea, detailing their use of OpenAI’s technology to conduct cyberattacks and subsequently shutting down the groups’ access.


Forest Blizzard

Forest Blizzard, linked to Russian military intelligence, was among the hacking groups observed. Its activities span the defense, transportation and logistics, energy, nongovernmental organization, and information technology sectors.


This group is active in attacks on Ukraine in support of Russian military objectives. It uses AI to research satellite communication protocols and radar imaging technologies, posing a significant threat, Microsoft indicated.


“AI is like a scalpel in the hands of a surgeon; it can save lives, but if it falls into the wrong hands, it can become a lethal weapon in the hands of a criminal, threatening the security and privacy of people and nations,” Serrano said.


Salmon Typhoon

The joint study also observed the activities of the Chinese state-affiliated threat actor Salmon Typhoon, which has a history of targeting defense contractors and government agencies. During 2023, this group showed interest in evaluating the effectiveness of large language models for obtaining sensitive information, suggesting a broadening of its intelligence-gathering capabilities.


Other malicious actors observed included Charcoal Typhoon, backed by the Chinese government; Crimson Sandstorm, linked to Iran’s Revolutionary Guard; and Emerald Sleet, a North Korean group.


According to a CrowdStrike report, hackers linked to these countries are exploring new ways to employ generative AI such as ChatGPT in attacks targeting the United States, Israel, and several European nations.


2024 global elections

The CrowdStrike report highlights the increased use of AI tools, resulting in new opportunities for a variety of attackers, including those associated with Russia, China, Iran, and North Korea, who are now targeting the 2024 elections.


The United States, Mexico, and 58 other countries will hold elections throughout 2024, Mexican magazine Expansión reported.


The authors of the report warn that because of the ease with which AI tools can generate persuasive but misleading narratives, it is highly likely that adversaries will use these tools to conduct disinformation operations in these elections, Time magazine reported.


Serrano expressed concern about the challenges for countries to protect themselves from state-sponsored attacks in their electoral processes. “With the advancement of AI, more sophisticated interventions are expected to influence electoral results and popular will.”


According to Serrano, there is a “need for a legal framework both in Latin America and globally, to combat AI abuses.” He stressed the importance of “governments strengthening their cyberwarfare capabilities to deal with the increase in attacks, which are expected to become more intense and damaging.”



