Tag: jailbreaking

  • Unit 42: Investigating LLM Jailbreaking of Popular Generative AI Web Products

    Source URL: https://unit42.paloaltonetworks.com/jailbreaking-generative-ai-web-products/ Source: Unit 42 Title: Investigating LLM Jailbreaking of Popular Generative AI Web Products Feedly Summary: We discuss vulnerabilities in popular GenAI web products to LLM jailbreaks. Single-turn strategies remain effective, but multi-turn approaches show greater success. The post Investigating LLM Jailbreaking of Popular Generative AI Web Products appeared first on Unit 42.…

  • Slashdot: Anthropic Makes ‘Jailbreak’ Advance To Stop AI Models Producing Harmful Results

    Source URL: https://slashdot.org/story/25/02/03/1810255/anthropic-makes-jailbreak-advance-to-stop-ai-models-producing-harmful-results?utm_source=rss1.0mainlinkanon&utm_medium=feed Source: Slashdot Title: Anthropic Makes ‘Jailbreak’ Advance To Stop AI Models Producing Harmful Results Feedly Summary: AI Summary and Description: Yes Summary: Anthropic has introduced a new technique called “constitutional classifiers” designed to enhance the security of large language models (LLMs) like its Claude chatbot. This system aims to mitigate risks associated…

  • Simon Willison’s Weblog: Constitutional Classifiers: Defending against universal jailbreaks

    Source URL: https://simonwillison.net/2025/Feb/3/constitutional-classifiers/ Source: Simon Willison’s Weblog Title: Constitutional Classifiers: Defending against universal jailbreaks Feedly Summary: Constitutional Classifiers: Defending against universal jailbreaks Interesting new research from Anthropic, resulting in the paper Constitutional Classifiers: Defending against Universal Jailbreaks across Thousands of Hours of Red Teaming. From the paper: In particular, we introduce Constitutional Classifiers, a framework…

  • Hacker News: Constitutional Classifiers: Defending against universal jailbreaks

    Source URL: https://www.anthropic.com/research/constitutional-classifiers Source: Hacker News Title: Constitutional Classifiers: Defending against universal jailbreaks Feedly Summary: Comments AI Summary and Description: Yes Summary: The text discusses a novel approach by the Anthropic Safeguards Research Team to defend AI models against jailbreaks through the use of Constitutional Classifiers. This system demonstrates robustness against various jailbreak techniques while…

  • Unit 42: Recent Jailbreaks Demonstrate Emerging Threat to DeepSeek

    Source URL: https://unit42.paloaltonetworks.com/?p=138180 Source: Unit 42 Title: Recent Jailbreaks Demonstrate Emerging Threat to DeepSeek Feedly Summary: Evaluation of three jailbreaking techniques on DeepSeek shows risks of generating prohibited content. The post Recent Jailbreaks Demonstrate Emerging Threat to DeepSeek appeared first on Unit 42. AI Summary and Description: Yes Summary: The text outlines the research conducted…

  • Simon Willison’s Weblog: How we estimate the risk from prompt injection attacks on AI systems

    Source URL: https://simonwillison.net/2025/Jan/29/prompt-injection-attacks-on-ai-systems/ Source: Simon Willison’s Weblog Title: How we estimate the risk from prompt injection attacks on AI systems Feedly Summary: How we estimate the risk from prompt injection attacks on AI systems The “Agentic AI Security Team" at Google DeepMind share some details on how they are researching indirect prompt injection attacks. They…

  • Hacker News: 1,156 Questions Censored by DeepSeek

    Source URL: https://www.promptfoo.dev/blog/deepseek-censorship/ Source: Hacker News Title: 1,156 Questions Censored by DeepSeek Feedly Summary: Comments AI Summary and Description: Yes **Summary**: The text discusses the DeepSeek-R1 model, highlighting its prominence and the associated concerns regarding censorship driven by CCP policies. It emphasizes the model’s high refusal rate on sensitive topics in China, the methods to…

  • Simon Willison’s Weblog: ChatGPT Operator system prompt

    Source URL: https://simonwillison.net/2025/Jan/26/chatgpt-operator-system-prompt/#atom-everything Source: Simon Willison’s Weblog Title: ChatGPT Operator system prompt Feedly Summary: ChatGPT Operator system prompt Johann Rehberger snagged a copy of the ChatGPT Operator system prompt. As usual, the system prompt doubles as better written documentation than any of the official sources. It asks users for confirmation a lot: ## Confirmations Ask…

  • Slashdot: Dire Predictions for 2025 Include ‘Largest Cyberattack in History’

    Source URL: https://it.slashdot.org/story/25/01/04/1839246/dire-predictions-for-2025-include-largest-cyberattack-in-history?utm_source=rss1.0mainlinkanon&utm_medium=feed Source: Slashdot Title: Dire Predictions for 2025 Include ‘Largest Cyberattack in History’ Feedly Summary: AI Summary and Description: Yes **Summary:** The text discusses potential “Black Swan” events for 2025, particularly highlighting the anticipated risks associated with cyberattacks bolstered by generative AI and large language models. This insight is crucial for security professionals,…