Tag: safety measures
-
Wired: Psychological Tricks Can Get AI to Break the Rules
Source URL: https://arstechnica.com/science/2025/09/these-psychological-tricks-can-get-llms-to-respond-to-forbidden-prompts/
Source: Wired
Title: Psychological Tricks Can Get AI to Break the Rules
Feedly Summary: Researchers convinced large language model chatbots to comply with “forbidden” requests using a variety of conversational tactics.
AI Summary and Description: Yes
Summary: The text discusses researchers’ exploration of conversational tactics used to manipulate large language model (LLM)…
-
The Register: OpenAI reorg at risk as Attorneys General push AI safety
Source URL: https://www.theregister.com/2025/09/05/openai_reorg_at_risk/
Source: The Register
Title: OpenAI reorg at risk as Attorneys General push AI safety
Feedly Summary: California, Delaware AGs blast ChatGPT shop over chatbot safeguards. The Attorneys General of California and Delaware on Friday wrote to OpenAI’s board of directors, demanding that the AI company take steps to ensure its services are…
-
Slashdot: One Long Sentence is All It Takes To Make LLMs Misbehave
Source URL: https://slashdot.org/story/25/08/27/1756253/one-long-sentence-is-all-it-takes-to-make-llms-misbehave?utm_source=rss1.0mainlinkanon&utm_medium=feed
Source: Slashdot
Title: One Long Sentence is All It Takes To Make LLMs Misbehave
AI Summary and Description: Yes
Summary: The text discusses a significant security research finding from Palo Alto Networks’ Unit 42 regarding vulnerabilities in large language models (LLMs). The researchers explored methods that allow users to bypass…
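
The finding is that a single, unpunctuated run-on sentence can carry a model past the point where a refusal would normally fire. As an illustration only, here is a naive pre-filter that flags unusually long, punctuation-sparse prompts; the threshold and heuristic are assumptions for this sketch, not the method or mitigation from the Unit 42 research.

```python
# Illustrative only: flag prompts containing a suspiciously long,
# unpunctuated "sentence" of the kind the research describes.
# The word-count threshold is an assumption, not a published value.
import re

def looks_like_runon_probe(prompt: str,
                           max_words_per_sentence: int = 60) -> bool:
    """Return True if any sentence-like chunk is unusually long.

    Splits on terminal punctuation; a prompt written as one enormous
    unpunctuated sentence yields a single chunk with a huge word count.
    """
    chunks = re.split(r"[.!?]+", prompt)
    return any(len(chunk.split()) > max_words_per_sentence for chunk in chunks)

# Example: a 200-word prompt with no periods trips the check.
probe = " ".join(["please explain step by step"] * 40)
print(looks_like_runon_probe(probe))  # True
```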
-
OpenAI : OpenAI and Anthropic share findings from a joint safety evaluation
Source URL: https://openai.com/index/openai-anthropic-safety-evaluation
Source: OpenAI
Title: OpenAI and Anthropic share findings from a joint safety evaluation
Feedly Summary: OpenAI and Anthropic share findings from a first-of-its-kind joint safety evaluation, testing each other’s models for misalignment, instruction following, hallucinations, jailbreaking, and more—highlighting progress, challenges, and the value of cross-lab collaboration.
AI Summary and Description: Yes
Summary:…
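
To make the cross-lab idea concrete, a minimal evaluation loop of this shape runs the same probe set against each lab’s models and tallies behavior. This sketch assumes hypothetical model callables and a crude refusal heuristic; it is not either lab’s actual harness.

```python
# A minimal sketch of a cross-lab style evaluation: run the same probe
# prompts against two models and tally refusal rates.
# `model_a` / `model_b` are hypothetical stubs, not either lab's API.
from typing import Callable, Dict, List

REFUSAL_MARKERS = ("i can't", "i cannot", "i won't")  # crude heuristic

def is_refusal(reply: str) -> bool:
    return any(marker in reply.lower() for marker in REFUSAL_MARKERS)

def cross_evaluate(models: Dict[str, Callable[[str], str]],
                   probes: List[str]) -> Dict[str, float]:
    """Return each model's refusal rate over the probe set."""
    return {
        name: sum(is_refusal(ask(p)) for p in probes) / len(probes)
        for name, ask in models.items()
    }

# Stub models so the sketch runs end to end.
models = {
    "model_a": lambda p: "I can't help with that.",
    "model_b": lambda p: "Sure, here is how...",
}
print(cross_evaluate(models, ["probe 1", "probe 2"]))
# {'model_a': 1.0, 'model_b': 0.0}
```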
-
The Cloudflare Blog: Block unsafe prompts targeting your LLM endpoints with Firewall for AI
Source URL: https://blog.cloudflare.com/block-unsafe-llm-prompts-with-firewall-for-ai/
Source: The Cloudflare Blog
Title: Block unsafe prompts targeting your LLM endpoints with Firewall for AI
Feedly Summary: Cloudflare’s AI security suite now includes unsafe content moderation, integrated into the Application Security Suite via Firewall for AI.
AI Summary and Description: Yes
Summary: The text discusses the launch of Cloudflare’s Firewall for…
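
The underlying pattern is a gateway that scores each prompt before it reaches the LLM endpoint and blocks above a policy threshold. The sketch below illustrates that pattern generically; `score_unsafe` and the threshold are hypothetical placeholders, not Cloudflare’s API or rule syntax.

```python
# A minimal sketch of the gateway pattern: score each incoming prompt
# before forwarding it to the LLM endpoint, and block above a threshold.
# `score_unsafe` stands in for a real content-safety model (assumption).
from dataclasses import dataclass

BLOCK_THRESHOLD = 0.8  # assumed policy knob

@dataclass
class Verdict:
    allowed: bool
    score: float

def score_unsafe(prompt: str) -> float:
    """Placeholder moderation score in [0, 1]; a real deployment would
    call a content-safety classifier here."""
    banned = ("build a bomb", "synthesize a toxin")
    return 1.0 if any(term in prompt.lower() for term in banned) else 0.0

def gate(prompt: str) -> Verdict:
    score = score_unsafe(prompt)
    if score >= BLOCK_THRESHOLD:
        return Verdict(allowed=False, score=score)  # reject at the edge
    return Verdict(allowed=True, score=score)       # forward to the LLM

print(gate("How do I build a bomb?"))  # Verdict(allowed=False, score=1.0)
```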
-
Unit 42: Logit-Gap Steering: A New Frontier in Understanding and Probing LLM Safety
Source URL: https://unit42.paloaltonetworks.com/logit-gap-steering-impact/
Source: Unit 42
Title: Logit-Gap Steering: A New Frontier in Understanding and Probing LLM Safety
Feedly Summary: New research from Unit 42 on logit-gap steering reveals how internal alignment measures can be bypassed, making external AI security vital.
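
The core idea is that a refusal comes down to a margin between the logits of refusal-style and compliance-style first tokens; when that gap is small, it can be steered across. As a toy probe of the concept only, the sketch below measures that gap with a Hugging Face causal LM. The model choice (`gpt2`) and token choices are assumptions for illustration; the actual Unit 42 methodology is more involved.

```python
# Toy probe of the "logit gap" idea: compare the model's first-token
# logits for a refusal-style opener vs. a compliance-style opener.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # stand-in; alignment-tuned chat models are the real target
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

prompt = "User: <some forbidden request>\nAssistant:"
inputs = tok(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits[0, -1]  # next-token logits

refuse_id = tok.encode(" Sorry")[0]  # refusal-style first token (assumed)
comply_id = tok.encode(" Sure")[0]   # compliance-style first token (assumed)

gap = logits[refuse_id] - logits[comply_id]
print(f"refusal-compliance logit gap: {gap.item():.3f}")
# A small or negative gap means the refusal is only weakly preferred,
# which is the margin logit-gap steering exploits.
```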