Microsoft, OpenAI Warn of Nation-State Hackers Weaponizing AI for Cyber Attacks

Nation-state actors linked to Russia, North Korea, Iran, and China are exploring the integration of artificial intelligence (AI) and large language models (LLMs) to enhance their existing cyber attack operations.

The revelations are detailed in a report jointly released by Microsoft and OpenAI, both of which said they thwarted the efforts of five state-affiliated actors that used their AI services for malicious cyber activities, terminating the associated assets and accounts.

In the report, Microsoft noted, “Language support is a natural feature of LLMs and is attractive for threat actors with continuous focus on social engineering and other techniques relying on false, deceptive communications tailored to their targets’ jobs, professional networks, and other relationships.”

While no significant or innovative attacks leveraging LLMs have been identified so far, adversarial exploration of AI technologies has manifested across various phases of the attack chain, including reconnaissance, coding assistance, and malware development.

“These actors primarily aimed to leverage OpenAI services for querying open-source information, translating, identifying coding errors, and executing basic coding tasks,” noted the AI firm.

For instance, the Russian nation-state group known as Forest Blizzard (aka APT28) reportedly utilized OpenAI services for open-source research into satellite communication protocols and radar imaging technology, along with assistance in scripting tasks.

Other notable hacking groups mentioned in the report include:

  1. Emerald Sleet (aka Kimsuky) – A North Korean threat actor employing LLMs to identify experts, think tanks, and organizations focused on defense issues in the Asia-Pacific region. It also utilized LLMs for basic scripting tasks and for drafting content that could be used in phishing campaigns.

  2. Crimson Sandstorm (aka Imperial Kitten) – An Iranian threat actor that used LLMs to generate code snippets related to app and web development, create phishing emails, and research common methods for malware evasion.

  3. Charcoal Typhoon (aka Aquatic Panda) – A Chinese threat actor utilizing LLMs for researching companies and vulnerabilities, generating scripts, creating content for potential use in phishing campaigns, and identifying techniques for post-compromise behavior.

  4. Salmon Typhoon (aka Maverick Panda) – Another Chinese threat actor leveraging LLMs for translating technical papers, retrieving publicly available information on intelligence agencies and regional threat actors, resolving coding errors, and finding tactics to evade detection.

Microsoft also announced a set of principles aimed at mitigating the risks posed by the malicious use of its AI tools and APIs by nation-state advanced persistent threats (APTs), advanced persistent manipulators (APMs), and cybercriminal syndicates. These principles include identifying and taking action against malicious threat actors, notifying other AI service providers, collaborating with other stakeholders, and maintaining transparency.