Tag: model safety

  • OpenAI : OpenAI and Anthropic share findings from a joint safety evaluation

    Source URL: https://openai.com/index/openai-anthropic-safety-evaluation
    Summary: OpenAI and Anthropic share findings from a first-of-its-kind joint safety evaluation, testing each other’s models for misalignment, instruction following, hallucinations, jailbreaking, and more, highlighting progress, challenges, and the value of cross-lab collaboration.

  • Slashdot: Red Teams Jailbreak GPT-5 With Ease, Warn It’s ‘Nearly Unusable’ For Enterprise

    Source URL: https://it.slashdot.org/story/25/08/08/2113251/red-teams-jailbreak-gpt-5-with-ease-warn-its-nearly-unusable-for-enterprise?utm_source=rss1.0mainlinkanon&utm_medium=feed
    Summary: The text highlights significant security vulnerabilities in the newly released GPT-5 model, noting that it was easily jailbroken within a short timeframe. The results from different red teaming efforts…

  • CSA: DeepSeek: Behind the Hype and Headlines

    Source URL: https://cloudsecurityalliance.org/blog/2025/03/25/deepseek-behind-the-hype-and-headlines
    Summary: The emergence of DeepSeek, a Chinese AI company claiming to rival industry giants like OpenAI and Google, has sparked dramatic market reactions and raised critical discussions around AI safety, intellectual property, and geopolitical implications. Despite…

  • The Register: Does terrible code drive you mad? Wait until you see what it does to OpenAI’s GPT-4o

    Source URL: https://www.theregister.com/2025/02/27/llm_emergent_misalignment_study/
    Summary: The model was fine-tuned to write vulnerable software, then suggested enslaving humanity. Computer scientists have found that fine-tuning notionally safe large language models to do one thing badly can negatively…

  • Hacker News: Narrow finetuning can produce broadly misaligned LLM [pdf]

    Source URL: https://martins1612.github.io/emergent_misalignment_betley.pdf
    Summary: The document presents findings on the phenomenon of “emergent misalignment” in large language models (LLMs) like GPT-4o when finetuned on specific narrow tasks, particularly the creation of insecure code. The results…

  • Unit 42: Investigating LLM Jailbreaking of Popular Generative AI Web Products

    Source URL: https://unit42.paloaltonetworks.com/jailbreaking-generative-ai-web-products/
    Summary: We discuss vulnerabilities in popular GenAI web products to LLM jailbreaks. Single-turn strategies remain effective, but multi-turn approaches show greater success.…

  • Cloud Blog: Operationalizing generative AI apps with Apigee

    Source URL: https://cloud.google.com/blog/products/api-management/using-apigee-api-management-for-ai/
    Summary: Generative AI is now well beyond the hype and into the realm of practical application. But while organizations are eager to build enterprise-ready gen AI solutions on top of large language models (LLMs), they face challenges in managing, securing, and…

  • The Register: Voice-enabled AI agents can automate everything, even your phone scams

    Source URL: https://www.theregister.com/2024/10/24/openai_realtime_api_phone_scam/
    Summary: Scammers, rejoice: OpenAI’s real-time voice API can be used to build AI agents capable of conducting successful phone call scams for less than a dollar, all for the low, low price of a mere buck.…