Tag: arbitrary command execution

  • Embrace The Red: Wrap Up: The Month of AI Bugs

    Source URL: https://embracethered.com/blog/posts/2025/wrapping-up-month-of-ai-bugs/ Source: Embrace The Red Title: Wrap Up: The Month of AI Bugs Feedly Summary: That’s it. The Month of AI Bugs is done. There won’t be a post tomorrow, because I will be at PAX West. Overview of Posts ChatGPT: Exfiltrating Your Chat History and Memories With Prompt Injection | Video ChatGPT…

  • Embrace The Red: AWS Kiro: Arbitrary Code Execution via Indirect Prompt Injection

    Source URL: https://embracethered.com/blog/posts/2025/aws-kiro-aribtrary-command-execution-with-indirect-prompt-injection/ Source: Embrace The Red Title: AWS Kiro: Arbitrary Code Execution via Indirect Prompt Injection Feedly Summary: On the day AWS Kiro was released, I couldn’t resist putting it through some of my Month of AI Bugs security tests for coding agents. AWS Kiro was vulnerable to arbitrary command execution via indirect prompt…

  • Embrace The Red: Amazon Q Developer: Remote Code Execution with Prompt Injection

    Source URL: https://embracethered.com/blog/posts/2025/amazon-q-developer-remote-code-execution/ Source: Embrace The Red Title: Amazon Q Developer: Remote Code Execution with Prompt Injection Feedly Summary: The Amazon Q Developer VS Code Extension (Amazon Q) is a popular coding agent, with over 1 million downloads. The extension is vulnerable to indirect prompt injection, and in this post we discuss a vulnerability that…

  • Embrace The Red: Google Jules is Vulnerable To Invisible Prompt Injection

    Source URL: https://embracethered.com/blog/posts/2025/google-jules-invisible-prompt-injection/ Source: Embrace The Red Title: Google Jules is Vulnerable To Invisible Prompt Injection Feedly Summary: The latest Gemini models quite reliably interpret hidden Unicode Tag characters as instructions. This vulnerability, first reported to Google over a year ago, has not been mitigated at the model or API level, hence now affects all…

  • Embrace The Red: Amp Code: Arbitrary Command Execution via Prompt Injection Fixed

    Source URL: https://embracethered.com/blog/posts/2025/amp-agents-that-modify-system-configuration-and-escape/ Source: Embrace The Red Title: Amp Code: Arbitrary Command Execution via Prompt Injection Fixed Feedly Summary: Sandbox-escape-style attacks can happen when an AI is able to modify its own configuration settings, such as by writing to configuration files. That was the case with Amp, an agentic coding tool built by Sourcegraph. The…