Tag: injections
-
The Register: AI frenzy continues as Macquarie commits up to $5B for Applied Digital datacenters
Source URL: https://www.theregister.com/2025/01/15/ai_macquarie_applied_digital/ Source: The Register Title: AI frenzy continues as Macquarie commits up to $5B for Applied Digital datacenters Feedly Summary: Bubble? What bubble? Fears of an AI bubble have yet to scare off venture capitalists and private equity firms from pumping billions of dollars into the GPU-packed datacenters at the heart of the…
-
The Register: Microsoft dangles $10K for hackers to hijack LLM email service
Source URL: https://www.theregister.com/2024/12/09/microsoft_llm_prompt_injection_challenge/ Source: The Register Title: Microsoft dangles $10K for hackers to hijack LLM email service Feedly Summary: Outsmart an AI, win a little Christmas cash Microsoft and friends have challenged AI hackers to break a simulated LLM-integrated email client with a prompt injection attack – and the winning teams will share a $10,000…
-
Embrace The Red: Terminal DiLLMa: LLM-powered Apps Can Hijack Your Terminal Via Prompt Injection
Source URL: https://embracethered.com/blog/posts/2024/terminal-dillmas-prompt-injection-ansi-sequences/ Source: Embrace The Red Title: Terminal DiLLMa: LLM-powered Apps Can Hijack Your Terminal Via Prompt Injection Feedly Summary: Last week Leon Derczynski described how LLMs can output ANSI escape codes. These codes, also known as control characters, are interpreted by terminal emulators and modify behavior. This discovery resonates with areas I had…
-
Simon Willison’s Weblog: 0xfreysa/agent
Source URL: https://simonwillison.net/2024/Nov/29/0xfreysaagent/#atom-everything Source: Simon Willison’s Weblog Title: 0xfreysa/agent Feedly Summary: 0xfreysa/agent Freysa describes itself as “the world’s first adversarial agent game". On 22nd November they released an LLM-driven application which people could pay to message (using Ethereum), with access to tools that could transfer a prize pool to the message sender, ending the game.…
-
Blog | 0din.ai: Inyección de Prompts, el Camino a una Shell: Entorno de Contenedores de ChatGPT de OpenAI
Source URL: https://0din.ai/blog/inyeccion-de-prompts-el-camino-a-una-shell-entorno-de-contenedores-de-chatgpt-de-openai Source: Blog | 0din.ai Title: Inyección de Prompts, el Camino a una Shell: Entorno de Contenedores de ChatGPT de OpenAI Feedly Summary: AI Summary and Description: Yes **Summary:** The text discusses a blog exploring the boundaries of OpenAI’s ChatGPT container environment. It reveals unexpected capabilities allowing users to interact with the model’s…
-
Hacker News: The Beginner’s Guide to Visual Prompt Injections
Source URL: https://www.lakera.ai/blog/visual-prompt-injections Source: Hacker News Title: The Beginner’s Guide to Visual Prompt Injections Feedly Summary: Comments AI Summary and Description: Yes Summary: The text discusses security vulnerabilities inherent in Large Language Models (LLMs), particularly focusing on visual prompt injections. As the reliance on models like GPT-4 increases for various tasks, concerns regarding the potential…
-
Schneier on Security: Prompt Injection Defenses Against LLM Cyberattacks
Source URL: https://www.schneier.com/blog/archives/2024/11/prompt-injection-defenses-against-llm-cyberattacks.html Source: Schneier on Security Title: Prompt Injection Defenses Against LLM Cyberattacks Feedly Summary: Interesting research: “Hacking Back the AI-Hacker: Prompt Injection as a Defense Against LLM-driven Cyberattacks“: Large language models (LLMs) are increasingly being harnessed to automate cyberattacks, making sophisticated exploits more accessible and scalable. In response, we propose a new defense…