Nation-state threat actors are using generative AI (GenAI) tools to refine their attack techniques, but they aren’t yet using the technology to create new attack vectors, according to a presentation at this week’s RSAC Conference that offered insight into how hackers are putting GenAI to work.
“Our analysis shows that while AI is a useful tool for common tasks, we haven’t yet seen indications of adversaries developing any fundamentally new attack vectors with these models,” Sandra Joyce, VP for Google Threat Intelligence, told the RSAC 2025 Conference. “Ultimately attackers are using GenAI the way many of us are, as a productivity tool. They help to brainstorm, to refine their work, that sort of thing.”
The role of AI in cybersecurity was a key topic in well over 100 sessions at the annual RSAC Conference, which became independent from security vendor RSA in 2022 and rebranded as RSAC this year.
Iran, China and North Korea Threat Groups Are Biggest GenAI Users
Joyce said advanced persistent threat (APT) groups from more than 20 countries accessed Google’s public Gemini GenAI services. Iranian threat actors were the heaviest users, and Google also saw “notable activity” from China- and North Korea-linked threat actors.
Guardrails and security measures restricted adversarial capabilities, Joyce said, and the more overtly malicious requests triggered safety responses from Gemini.
Threat actors are using Gemini’s GenAI capabilities in four attack phases in particular, she said:
- Reconnaissance
- Vulnerability research
- Malicious scripting
- Evasion techniques
“These are existing attack phases being made more efficient, not fundamentally new AI-driven attacks,” she said.
Joyce didn’t say how Google was able to correlate Gemini use with specific threat groups, but she gave several examples of how nation-state threat actors are using GenAI tools.
Iranian APT groups used Gemini to research “specific defense systems,” seeking information on topics such as unmanned aerial vehicles, jamming F-35 fighter jets, anti-drone systems, and Israel’s missile defense systems.
North Korean APT actors researched nuclear technology and power plants in South Korea, including location and information on the security status of specific plants.
Threat actors are also using GenAI for help with malware development. A North Korean APT group used Gemini for assistance developing code for sandbox evasion and for detecting virtual machine (VM) environments.
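For context, and not drawn from Google’s report, such checks typically look for publicly documented virtualization artifacts, such as hypervisor MAC address prefixes or CPU flags. The short Python sketch below is purely illustrative; the vendor prefixes and the Linux file path it inspects are assumptions about a typical host.

```python
# Illustrative only: the kind of (easily defeated) virtualization check that
# sandbox-evasion code relies on. Not taken from Google's report; the MAC
# prefixes below are examples of publicly documented hypervisor OUIs.
import platform
import uuid

VM_MAC_PREFIXES = (
    "08:00:27",                          # VirtualBox
    "00:05:69", "00:0c:29", "00:50:56",  # VMware
    "00:15:5d",                          # Hyper-V
)

def hypervisor_flag_set() -> bool:
    """On Linux x86 guests, /proc/cpuinfo usually lists a 'hypervisor' CPU flag."""
    if platform.system() != "Linux":
        return False
    try:
        with open("/proc/cpuinfo") as f:
            return "hypervisor" in f.read()
    except OSError:
        return False

def looks_like_vm() -> bool:
    """Return True if the host shows common VM artifacts (MAC OUI or CPU flag)."""
    mac = "%012x" % uuid.getnode()
    mac = ":".join(mac[i:i + 2] for i in range(0, 12, 2))
    return mac.startswith(VM_MAC_PREFIXES) or hypervisor_flag_set()

if __name__ == "__main__":
    print("Probably a VM or sandbox" if looks_like_vm() else "Probably bare metal")
```

Defenders rely on the same artifacts in reverse, tuning analysis sandboxes so that checks like these come back negative.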
Threat groups are also using GenAI to develop phishing lures and campaigns, including seeking help with translation and localization, such as requests for “fluent specific colloquial English,” Joyce said. Another APT use is developing personas to make phishing campaigns more convincing.
GenAI Helps Cybersecurity Defenders Too
Joyce said GenAI is also proving useful to security teams in a number of effective defensive use cases. She cited vulnerability detection, incident workflows, malware analysis and fuzzing as examples.
Also at the conference, Jeetu Patel, Cisco Executive Vice President and Chief Product Officer, unveiled the Foundation AI security model, an open-source alerting and workflow large language model (LLM) purpose-built for security.
The Foundation AI base model is currently available on Hugging Face, and a multi-step reasoning model will be released soon, Patel said.
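For readers who want to experiment with it, pulling an open model from Hugging Face typically looks like the sketch below using the transformers library; the repository ID shown is a placeholder, since the presentation summarized here did not spell one out.

```python
# Minimal sketch of loading an open model from Hugging Face with the
# `transformers` library. The repo ID is a placeholder, not the actual
# Foundation AI model name.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "example-org/security-base-model"  # placeholder repository ID

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID)

prompt = "Summarize the risk of this alert: suspicious PowerShell spawned by winword.exe"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```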