Tag: indirect prompt injection
-
Embrace The Red: Cross-Agent Privilege Escalation: When Agents Free Each Other
Source URL: https://embracethered.com/blog/posts/2025/cross-agent-privilege-escalation-agents-that-free-each-other/ Source: Embrace The Red Title: Cross-Agent Privilege Escalation: When Agents Free Each Other Feedly Summary: During the Month of AI Bugs, I described an emerging vulnerability pattern that shows how commonly agentic systems have a design flaw that allows an agent to overwrite its own configuration and security settings. This allows the…
-
Unit 42: The Risks of Code Assistant LLMs: Harmful Content, Misuse and Deception
Source URL: https://unit42.paloaltonetworks.com/code-assistant-llms/ Source: Unit 42 Title: The Risks of Code Assistant LLMs: Harmful Content, Misuse and Deception Feedly Summary: We examine security weaknesses in LLM code assistants. Issues like indirect prompt injection and model misuse are prevalent across platforms. The post The Risks of Code Assistant LLMs: Harmful Content, Misuse and Deception appeared first…
-
Embrace The Red: Wrap Up: The Month of AI Bugs
Source URL: https://embracethered.com/blog/posts/2025/wrapping-up-month-of-ai-bugs/ Source: Embrace The Red Title: Wrap Up: The Month of AI Bugs Feedly Summary: That’s it. The Month of AI Bugs is done. There won’t be a post tomorrow, because I will be at PAX West. Overview of Posts ChatGPT: Exfiltrating Your Chat History and Memories With Prompt Injection | Video ChatGPT…
-
Embrace The Red: AgentHopper: An AI Virus Research Project
Source URL: https://embracethered.com/blog/posts/2025/agenthopper-a-poc-ai-virus/ Source: Embrace The Red Title: AgentHopper: An AI Virus Research Project Feedly Summary: As part of the Month of AI Bugs, serious vulnerabilities that allow remote code execution via indirect prompt injection were discovered. There was a period of a few weeks where multiple arbitrary code execution vulnerabilities existed in popular agents,…
-
Schneier on Security: We Are Still Unable to Secure LLMs from Malicious Inputs
Source URL: https://www.schneier.com/blog/archives/2025/08/we-are-still-unable-to-secure-llms-from-malicious-inputs.html Source: Schneier on Security Title: We Are Still Unable to Secure LLMs from Malicious Inputs Feedly Summary: Nice indirect prompt injection attack: Bargury’s attack starts with a poisoned document, which is shared to a potential victim’s Google Drive. (Bargury says a victim could have also uploaded a compromised file to their own…
-
Slashdot: Perplexity’s AI Browser Comet Vulnerable To Prompt Injection Attacks That Hijack User Accounts
Source URL: https://it.slashdot.org/story/25/08/25/1654220/perplexitys-ai-browser-comet-vulnerable-to-prompt-injection-attacks-that-hijack-user-accounts?utm_source=rss1.0mainlinkanon&utm_medium=feed Source: Slashdot Title: Perplexity’s AI Browser Comet Vulnerable To Prompt Injection Attacks That Hijack User Accounts Feedly Summary: AI Summary and Description: Yes Summary: The text highlights significant vulnerabilities in Perplexity’s Comet browser linked to its AI summarization functionalities. These vulnerabilities allow attackers to hijack user accounts and execute malicious commands, posing…
-
Embrace The Red: How Prompt Injection Exposes Manus’ VS Code Server to the Internet
Source URL: https://embracethered.com/blog/posts/2025/manus-ai-kill-chain-expose-port-vs-code-server-on-internet/ Source: Embrace The Red Title: How Prompt Injection Exposes Manus’ VS Code Server to the Internet Feedly Summary: Today we will cover a powerful, easy to use, autonomous agent called Manus. Manus is developed by the Chinese startup Monica, based in Singapore. This post demonstrates an end-to-end indirect prompt injection attack leading…