Tag: adversarial inputs

  • Schneier on Security: Jailbreaking LLM-Controlled Robots

    Source URL: https://www.schneier.com/blog/archives/2024/12/jailbreaking-llm-controlled-robots.html
    Source: Schneier on Security
    Feedly Summary: Surprising no one, it’s easy to trick an LLM-controlled robot into ignoring its safety instructions.
    AI Summary: The text highlights a significant vulnerability in LLM-controlled robots, revealing that they can be manipulated to bypass their safety protocols. This…

  • Simon Willison’s Weblog: 0xfreysa/agent

    Source URL: https://simonwillison.net/2024/Nov/29/0xfreysaagent/#atom-everything
    Source: Simon Willison’s Weblog
    Feedly Summary: Freysa describes itself as “the world’s first adversarial agent game”. On 22nd November they released an LLM-driven application which people could pay to message (using Ethereum), with access to tools that could transfer a prize pool to the message sender, ending the game.…

  • Hacker News: SmoothLLM: Defending Large Language Models Against Jailbreaking Attacks

    Source URL: https://arxiv.org/abs/2310.03684
    Source: Hacker News
    AI Summary: This text presents “SmoothLLM,” an innovative algorithm designed to enhance the security of Large Language Models (LLMs) against jailbreaking attacks, which manipulate models into producing undesirable content. The proposal highlights a…
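    The core idea behind SmoothLLM is randomized smoothing over prompts: query the model on several randomly perturbed copies of the input and aggregate the results, so that a brittle adversarial suffix is unlikely to survive in most copies. A minimal sketch of that idea (the `llm` and `is_jailbroken` callables, parameter defaults, and refusal message are illustrative assumptions, not the paper’s implementation):

    ```python
    import random
    import string

    def perturb(prompt: str, q: float, rng: random.Random) -> str:
        """Randomly replace a fraction q of the prompt's characters
        (a swap-style perturbation; assumed for illustration)."""
        chars = list(prompt)
        n_swap = max(1, int(len(chars) * q))
        for i in rng.sample(range(len(chars)), n_swap):
            chars[i] = rng.choice(string.printable)
        return "".join(chars)

    def smoothllm(prompt, llm, is_jailbroken, n_copies=10, q=0.1, seed=0):
        """Query the LLM on n_copies perturbed prompts and take a
        majority vote: answer only if most copies were judged safe."""
        rng = random.Random(seed)
        responses = [llm(perturb(prompt, q, rng)) for _ in range(n_copies)]
        safe = [r for r, jb in zip(responses, is_jailbroken(r) for r in responses) if not jb] if False else \
               [r for r in responses if not is_jailbroken(r)]
        if len(safe) > n_copies // 2:
            return rng.choice(safe)  # return a response from the safe majority
        return "Request refused by SmoothLLM defense."  # hypothetical refusal string
    ```

    The vote makes the defense robust to character-level jailbreak suffixes: a perturbation that breaks the suffix in even half of the copies flips the aggregate verdict back to safe behavior.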