Tag: prompt injection attacks
-
The Register: Anthropic’s latest Claude model can interact with computers – what could go wrong?
Source URL: https://www.theregister.com/2024/10/24/anthropic_claude_model_can_use_computers/ Source: The Register Title: Anthropic’s latest Claude model can interact with computers – what could go wrong? Feedly Summary: For starters, it could launch a prompt injection attack on itself… The latest version of AI startup Anthropic’s Claude 3.5 Sonnet model can use computers – and the developer makes it sound like…
-
Simon Willison’s Weblog: Quoting Model Card Addendum: Claude 3.5 Haiku and Upgraded Sonnet
Source URL: https://simonwillison.net/2024/Oct/23/model-card/#atom-everything Source: Simon Willison’s Weblog Title: Quoting Model Card Addendum: Claude 3.5 Haiku and Upgraded Sonnet Feedly Summary: We enhanced the ability of the upgraded Claude 3.5 Sonnet and Claude 3.5 Haiku to recognize and resist prompt injection attempts. Prompt injection is an attack where a malicious user feeds instructions to a model…
-
Simon Willison’s Weblog: This prompt can make an AI chatbot identify and extract personal details from your chats
Source URL: https://simonwillison.net/2024/Oct/22/imprompter/#atom-everything Source: Simon Willison’s Weblog Title: This prompt can make an AI chatbot identify and extract personal details from your chats Feedly Summary: This prompt can make an AI chatbot identify and extract personal details from your chats Matt Burgess in Wired magazine writes about a new prompt injection / Markdown exfiltration variant…
-
Hacker News: Hacker plants false memories in ChatGPT to steal user data in perpetuity
Source URL: https://arstechnica.com/security/2024/09/false-memories-planted-in-chatgpt-give-hacker-persistent-exfiltration-channel/ Source: Hacker News Title: Hacker plants false memories in ChatGPT to steal user data in perpetuity Feedly Summary: Comments AI Summary and Description: Yes Summary: The text discusses a vulnerability discovered in ChatGPT that allowed for malicious manipulation of its long-term memory feature through prompt injection. While OpenAI has released a partial…