Tag: prompt injection attacks
-
Embrace The Red: AI Domination: Remote Controlling ChatGPT ZombAI Instances
Source URL: https://embracethered.com/blog/posts/2025/spaiware-and-chatgpt-command-and-control-via-prompt-injection-zombai/ Source: Embrace The Red Title: AI Domination: Remote Controlling ChatGPT ZombAI Instances Feedly Summary: At Black Hat Europe I did a fun presentation titled SpAIware and More: Advanced Prompt Injection Exploits. Without diving into the details of the entire talk, the key point I was making is that prompt injection can impact…
-
Simon Willison’s Weblog: Quoting Johann Rehberger
Source URL: https://simonwillison.net/2024/Dec/17/johann-rehberger/ Source: Simon Willison’s Weblog Title: Quoting Johann Rehberger Feedly Summary: Happy to share that Anthropic fixed a data leakage issue in the iOS app of Claude that I responsibly disclosed. 🙌 👉 Image URL rendering as avenue to leak data in LLM apps often exists in mobile apps as well — typically…
-
Simon Willison’s Weblog: Security ProbLLMs in xAI’s Grok: A Deep Dive
Source URL: https://simonwillison.net/2024/Dec/16/security-probllms-in-xais-grok/#atom-everything Source: Simon Willison’s Weblog Title: Security ProbLLMs in xAI’s Grok: A Deep Dive Feedly Summary: Security ProbLLMs in xAI’s Grok: A Deep Dive Adding xAI to the growing list of AI labs that shipped feature vulnerable to data exfiltration prompt injection attacks, but with the unfortunate addendum that they don’t seem to…
-
The Register: Microsoft dangles $10K for hackers to hijack LLM email service
Source URL: https://www.theregister.com/2024/12/09/microsoft_llm_prompt_injection_challenge/ Source: The Register Title: Microsoft dangles $10K for hackers to hijack LLM email service Feedly Summary: Outsmart an AI, win a little Christmas cash Microsoft and friends have challenged AI hackers to break a simulated LLM-integrated email client with a prompt injection attack – and the winning teams will share a $10,000…
-
Hacker News: Garak, LLM Vulnerability Scanner
Source URL: https://github.com/NVIDIA/garak Source: Hacker News Title: Garak, LLM Vulnerability Scanner Feedly Summary: Comments AI Summary and Description: Yes **Summary:** The text describes “garak,” a command-line vulnerability scanner specifically designed for large language models (LLMs). This tool aims to uncover various weaknesses in LLMs, such as hallucination, prompt injection attacks, and data leakage. Its development…
-
Simon Willison’s Weblog: Quoting Question for Department for Science, Innovation and Technology
Source URL: https://simonwillison.net/2024/Nov/1/prompt-injection/#atom-everything Source: Simon Willison’s Weblog Title: Quoting Question for Department for Science, Innovation and Technology Feedly Summary: Lord Clement-Jones: To ask His Majesty’s Government what assessment they have made of the cybersecurity risks posed by prompt injection attacks to the processing by generative artificial intelligence of material provided from outside government, and whether…
-
The Register: Google reportedly developing an AI agent that can control your browser
Source URL: https://www.theregister.com/2024/10/28/google_ai_web_agent/ Source: The Register Title: Google reportedly developing an AI agent that can control your browser Feedly Summary: Project Jarvis will apparently conduct research, purchase products, and even book a flight on your behalf Google is reportedly looking to sidestep the complexity of AI-driven automation by letting its multimodal large language models (LLMs)…