The Register: Google to Iran: Yes, we see you using Gemini for phishing and scripting. We’re onto you

Source URL: https://www.theregister.com/2025/01/31/state_spies_google_gemini/
Source: The Register
Title: Google to Iran: Yes, we see you using Gemini for phishing and scripting. We’re onto you

Feedly Summary: And you, China, Russia, North Korea … Guardrails block malware generation
Google says it’s spotted Chinese, Russian, Iranian, and North Korean government agents using its Gemini AI for nefarious purposes, with Tehran by far the most frequent naughty user out of the four.…

AI Summary and Description: Yes

**Short Summary with Insight:**
Google has reported increased use of its Gemini AI by state-sponsored agents from China, Russia, Iran, and North Korea for various cyber activities, such as conducting reconnaissance, crafting phishing attempts, and researching vulnerabilities. Despite these alarming uses, Google asserts that their guardrails are effectively preventing the AI from generating malware or providing sensitive information. This highlights both the risks associated with generative AI technology and the importance of robust security measures in combating potential misuse by threat actors.

**Detailed Description:**
The document outlines a report from Google’s Threat Intelligence Group (TIG) detailing misuse of their Gemini AI by government agents from four nations: Iran, China, North Korea, and Russia. Below are the key points elaborated in the report:

– **Usage by State-Sponsored Agents:**
– Iran-led actors accounted for 75% of the reported use, employing Gemini for activities including:
– Researching Android security vulnerabilities.
– Developing phishing content.
– Crafting local personas for cyber operations.
– Chinese agents, represented by 20 identified groups, utilized Gemini primarily for:
– Researching U.S. government institutions.
– Assistance with Microsoft-related systems.
– Basic content creation, including translations.
– North Korean operatives were noted attempting to infiltrate Western companies by writing job applications and seeking sensitive information on military technology.
– Minimal activity from Russian agents, possibly indicative of their preference for domestic LLMs or to evade monitoring.

– **Effectiveness of AI Guardrails:**
– Google emphasizes the protective measures it has implemented in Gemini to prevent malicious applications, revealing:
– The AI’s ability to counter attempts to generate harmful code or obtain personal information.
– A reported case where Gemini processed a benign request but blocked potentially harmful queries, illustrating the robustness of its safety mechanisms.
– The detection of users attempting to bypass these guardrails through “jailbreak prompts” has increased, but such attempts have been largely ineffectual.

– **Research and Development for Future Protections:**
– Google’s DeepMind division is focused on enhancing defenses against potential abuses of AI technology, including:
– Developing threat models specifically for generative AI.
– Creating evaluation techniques to address vulnerability and mitigate misuse.
– Deploying measurement and monitoring tools, including a framework for assessing AI systems against indirect prompt injection attacks.

Overall, the insights from this report are significant for security and compliance professionals, as they underlie ongoing challenges posed by the misuse of advanced AI systems in cyber operations. It underscores the importance of constant vigilance, continuous improvement of security frameworks for AI technologies, and the need for compliance with emerging regulations surrounding AI security practices.