Microsoft and OpenAI say hackers are using ChatGPT to enhance cyber attacks

Microsoft and OpenAI revealed today that hackers are already using large language models (LLMs) like ChatGPT to refine and improve their existing cyber attacks. In newly published research, the two companies detail attempts by Russian, North Korean, Iranian, and Chinese state-backed groups to use tools like ChatGPT to research targets, improve scripts, and help develop social engineering techniques.

“Cybercrime groups, nation-state threat actors, and other adversaries are investigating and testing various AI technologies as they emerge in an effort to understand the potential value to their operations and the security controls they may need to circumvent,” Microsoft said in a blog post today.

The Strontium group, linked to Russian military intelligence, appears to be using LLMs “to understand satellite communication protocols, radar imaging technologies, and specific technical parameters.” The hacking group, also known as APT28 or Fancy Bear, has been active throughout Russia’s war in Ukraine and was previously involved in the attack on Hillary Clinton’s 2016 presidential campaign.

The group has also used LLMs to help with “basic scripting tasks, including file manipulation, data selection, regular expressions, and multiprocessing, to potentially automate or optimize technical operations,” according to Microsoft.
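To make that jargon concrete, here is a minimal Python sketch of what such a script might look like, combining the file manipulation, regular expressions, and multiprocessing Microsoft mentions. The file pattern and directory are hypothetical, and this is an illustration of the task category, not code attributed to the group.

```python
# A minimal, hypothetical sketch of a "basic scripting task" of the kind
# described in the report: regex-based file selection parallelized with
# multiprocessing. The pattern and directory are illustrative assumptions.
import re
from multiprocessing import Pool
from pathlib import Path

PATTERN = re.compile(r"\.(docx|pdf|xlsx)$", re.IGNORECASE)  # hypothetical filter

def matches(path: Path) -> bool:
    """Return True if the file name matches the target pattern."""
    return bool(PATTERN.search(path.name))

def collect(root: str) -> list[Path]:
    """Walk a directory tree and filter files in a worker pool."""
    files = [p for p in Path(root).rglob("*") if p.is_file()]
    with Pool() as pool:
        hits = pool.map(matches, files)
    return [f for f, hit in zip(files, hits) if hit]

if __name__ == "__main__":
    for f in collect("."):  # scan the current directory as an example
        print(f)
```

Nothing here is sophisticated, which underlines the report’s point: LLMs make routine automation like this cheaper and faster to produce.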

A North Korean hacking group known as Thallium has been using LLMs to research publicly reported vulnerabilities and target organizations, to help with basic scripting tasks, and to draft content for phishing campaigns. Microsoft says Curium, an Iranian group, has also been using LLMs to generate phishing emails and even code for avoiding detection by antivirus applications. Chinese state-affiliated hackers are also using LLMs for research, scripting, translations, and to refine their existing tools.

There are fears about the use of AI in cyber attacks, especially now that AI tools such as WormGPT and FraudGPT have emerged to assist in the creation of malicious emails and cracking tools. A senior National Security Agency official also warned last month that hackers are using AI to make their phishing emails seem more convincing.

Microsoft and OpenAI have not yet detected any “significant attacks” using LLMs, but the companies have shut down all accounts and assets associated with these hacking groups. “At the same time, we believe this is important research to publish to uncover early steps that we see known threat actors attempting, and to share information about how we block and counter them with the defender community,” says Microsoft.

While the use of AI in cyber attacks appears limited at this time, Microsoft warns of future use cases such as voice impersonation. “AI-powered fraud is another critical concern. Voice synthesis is an example of this, where a three-second voice sample can train a model to sound like anyone,” says Microsoft. “Even something as innocuous as your voicemail greeting can be used to get enough samples.”

Naturally, Microsoft is pitching the use of AI to respond to AI attacks. “AI can help attackers add sophistication to their attacks, and they have the tools to do that,” said Homa Hayatyfar, principal detection analytics manager at Microsoft. “We’ve seen this across the more than 300 threat actors that Microsoft tracks, and we use AI to protect, detect, and respond.”

Microsoft is building Security Copilot, a new AI assistant designed for cybersecurity professionals to identify breaches and make better sense of the vast amounts of signals and data generated by cybersecurity tools every day. The software giant is also overhauling its software security following major Azure cloud attacks and even Russian hackers spying on Microsoft executives.
