Tag: prompt injection attacks

  • The Register: Infosec hounds spot prompt injection vuln in Google Gemini apps

    Source URL: https://www.theregister.com/2025/08/08/infosec_hounds_spot_prompt_injection/ Source: The Register Title: Infosec hounds spot prompt injection vuln in Google Gemini apps Feedly Summary: Not a very smart home: crims could hijack smart-home boiler, open and close powered windows and more. Now fixed Black hat A trio of researchers has disclosed a major prompt injection vulnerability in Google’s Gemini large…

  • Embrace The Red: Amp Code: Arbitrary Command Execution via Prompt Injection Fixed

    Source URL: https://embracethered.com/blog/posts/2025/amp-agents-that-modify-system-configuration-and-escape/ Source: Embrace The Red Title: Amp Code: Arbitrary Command Execution via Prompt Injection Fixed Feedly Summary: Sandbox-escape-style attacks can happen when an AI is able to modify its own configuration settings, such as by writing to configuration files. That was the case with Amp, an agentic coding tool built by Sourcegraph. The…

  • Embrace The Red: Turning ChatGPT Codex Into A ZombAI Agent

    Source URL: https://embracethered.com/blog/posts/2025/chatgpt-codex-remote-control-zombai/ Source: Embrace The Red Title: Turning ChatGPT Codex Into A ZombAI Agent Feedly Summary: Today we cover ChatGPT Codex as part of the Month of AI Bugs series. ChatGPT Codex is a cloud-based software engineering agent that answers codebase questions, executes code, and drafts pull requests. In particular, this post will demonstrate…

  • Cisco Talos Blog: Using LLMs as a reverse engineering sidekick

    Source URL: https://blog.talosintelligence.com/using-llm-as-a-reverse-engineering-sidekick/ Source: Cisco Talos Blog Title: Using LLMs as a reverse engineering sidekick Feedly Summary: LLMs may serve as powerful assistants to malware analysts to streamline workflows, enhance efficiency, and provide actionable insights during malware analysis.  AI Summary and Description: Yes **Summary:** The text provides an in-depth analysis of using Large Language Models…

  • Wired: Hackers Are Finding New Ways to Hide Malware in DNS Records

    Source URL: https://arstechnica.com/security/2025/07/hackers-exploit-a-blind-spot-by-hiding-malware-inside-dns-records/ Source: Wired Title: Hackers Are Finding New Ways to Hide Malware in DNS Records Feedly Summary: Newly published research shows that the domain name system—a fundamental part of the web—can be exploited to hide malicious code and prompt injection attacks against chatbots. AI Summary and Description: Yes Summary: The text discusses the…

  • CSA: Copilot Studio: AIjacking Leads to Data Exfiltration

    Source URL: https://cloudsecurityalliance.org/articles/a-copilot-studio-story-2-when-aijacking-leads-to-full-data-exfiltration Source: CSA Title: Copilot Studio: AIjacking Leads to Data Exfiltration Feedly Summary: AI Summary and Description: Yes Summary: The text discusses significant vulnerabilities in AI agents, particularly focusing on prompt injection attacks that led to unauthorized access and exfiltration of sensitive data. It provides a case study involving a customer service agent…

  • The Register: Scholars sneaking phrases into papers to fool AI reviewers

    Source URL: https://www.theregister.com/2025/07/07/scholars_try_to_fool_llm_reviewers/ Source: The Register Title: Scholars sneaking phrases into papers to fool AI reviewers Feedly Summary: Using prompt injections to play a Jedi mind trick on LLMs A handful of international computer science researchers appear to be trying to influence AI reviews with a new class of prompt injection attack.… AI Summary and…