Tag: jailbreaking
-
Slashdot: Dire Predictions for 2025 Include ‘Largest Cyberattack in History’
Source URL: https://it.slashdot.org/story/25/01/04/1839246/dire-predictions-for-2025-include-largest-cyberattack-in-history?utm_source=rss1.0mainlinkanon&utm_medium=feed Source: Slashdot Title: Dire Predictions for 2025 Include ‘Largest Cyberattack in History’ Feedly Summary: AI Summary and Description: Yes **Summary:** The text discusses potential “Black Swan” events for 2025, particularly highlighting the anticipated risks associated with cyberattacks bolstered by generative AI and large language models. This insight is crucial for security professionals,…
-
Unit 42: Bad Likert Judge: A Novel Multi-Turn Technique to Jailbreak LLMs by Misusing Their Evaluation Capability
Source URL: https://unit42.paloaltonetworks.com/?p=138017 Source: Unit 42 Title: Bad Likert Judge: A Novel Multi-Turn Technique to Jailbreak LLMs by Misusing Their Evaluation Capability Feedly Summary: The jailbreak technique “Bad Likert Judge" manipulates LLMs to generate harmful content using Likert scales, exposing safety gaps in LLM guardrails. The post Bad Likert Judge: A Novel Multi-Turn Technique to…
-
Schneier on Security: Jailbreaking LLM-Controlled Robots
Source URL: https://www.schneier.com/blog/archives/2024/12/jailbreaking-llm-controlled-robots.html Source: Schneier on Security Title: Jailbreaking LLM-Controlled Robots Feedly Summary: Surprising no one, it’s easy to trick an LLM-controlled robot into ignoring its safety instructions. AI Summary and Description: Yes Summary: The text highlights a significant vulnerability in LLM-controlled robots, revealing that they can be manipulated to bypass their safety protocols. This…
-
Wired: AI-Powered Robots Can Be Tricked Into Acts of Violence
Source URL: https://www.wired.com/story/researchers-llm-ai-robot-violence/ Source: Wired Title: AI-Powered Robots Can Be Tricked Into Acts of Violence Feedly Summary: Researchers hacked several robots infused with large language models, getting them to behave dangerously—and pointing to a bigger problem ahead. AI Summary and Description: Yes Summary: The text delves into the vulnerabilities associated with large language models (LLMs)…
-
Simon Willison’s Weblog: LLM Flowbreaking
Source URL: https://simonwillison.net/2024/Nov/29/llm-flowbreaking/#atom-everything Source: Simon Willison’s Weblog Title: LLM Flowbreaking Feedly Summary: LLM Flowbreaking Gadi Evron from Knostic: We propose that LLM Flowbreaking, following jailbreaking and prompt injection, joins as the third on the growing list of LLM attack types. Flowbreaking is less about whether prompt or response guardrails can be bypassed, and more about…
-
Hacker News: Robot Jailbreak: Researchers Trick Bots into Dangerous Tasks
Source URL: https://spectrum.ieee.org/jailbreak-llm Source: Hacker News Title: Robot Jailbreak: Researchers Trick Bots into Dangerous Tasks Feedly Summary: Comments AI Summary and Description: Yes **Summary:** The text discusses significant security vulnerabilities associated with large language models (LLMs) used in robotic systems, revealing how easily these systems can be “jailbroken” to perform harmful actions. This raises pressing…
-
Slashdot: ‘It’s Surprisingly Easy To Jailbreak LLM-Driven Robots’
Source URL: https://hardware.slashdot.org/story/24/11/23/0513211/its-surprisingly-easy-to-jailbreak-llm-driven-robots?utm_source=rss1.0mainlinkanon&utm_medium=feed Source: Slashdot Title: ‘It’s Surprisingly Easy To Jailbreak LLM-Driven Robots’ Feedly Summary: AI Summary and Description: Yes Summary: The text discusses a new study revealing a method to exploit LLM-driven robots, achieving a 100% success rate in bypassing safety mechanisms. The researchers introduced RoboPAIR, an algorithm that allows attackers to manipulate self-driving…
-
Hacker News: SmoothLLM: Defending Large Language Models Against Jailbreaking Attacks
Source URL: https://arxiv.org/abs/2310.03684 Source: Hacker News Title: SmoothLLM: Defending Large Language Models Against Jailbreaking Attacks Feedly Summary: Comments AI Summary and Description: Yes Summary: This text presents “SmoothLLM,” an innovative algorithm designed to enhance the security of Large Language Models (LLMs) against jailbreaking attacks, which manipulate models into producing undesirable content. The proposal highlights a…
-
The Register: Letting chatbots run robots ends as badly as you’d expect
Source URL: https://www.theregister.com/2024/11/16/chatbots_run_robots/ Source: The Register Title: Letting chatbots run robots ends as badly as you’d expect Feedly Summary: LLM-controlled droids easily jailbroken to perform mayhem, researchers warn Science fiction author Isaac Asimov proposed three laws of robotics, and you’d never know it from the behavior of today’s robots or those making them.… AI Summary…