Tag: AI safety

  • OpenAI : Statement on OpenAI’s Nonprofit and PBC

    Source URL: https://openai.com/index/statement-on-openai-nonprofit-and-pbc Source: OpenAI Title: Statement on OpenAI’s Nonprofit and PBC Feedly Summary: OpenAI reaffirms its nonprofit leadership with a new structure granting equity in its PBC, enabling over $100B in resources to advance safe, beneficial AI for humanity. AI Summary and Description: Yes Summary: OpenAI is evolving its structure by granting equity in…

  • OpenAI : A joint statement from OpenAI and Microsoft

    Source URL: https://openai.com/index/joint-statement-from-openai-and-microsoft Source: OpenAI Title: A joint statement from OpenAI and Microsoft Feedly Summary: OpenAI and Microsoft sign a new MOU, reinforcing their partnership and shared commitment to AI safety and innovation. AI Summary and Description: Yes Summary: OpenAI and Microsoft’s new Memorandum of Understanding (MOU) underscores their ongoing collaboration focused on enhancing AI…

  • The Register: OpenAI reorg at risk as Attorneys General push AI safety

    Source URL: https://www.theregister.com/2025/09/05/openai_reorg_at_risk/ Source: The Register Title: OpenAI reorg at risk as Attorneys General push AI safety Feedly Summary: California, Delaware AGs blast ChatGPT shop over chatbot safeguards The Attorneys General of California and Delaware on Friday wrote to OpenAI’s board of directors, demanding that the AI company take steps to ensure its services are…

  • OpenAI : Why language models hallucinate

    Source URL: https://openai.com/index/why-language-models-hallucinate Source: OpenAI Title: Why language models hallucinate Feedly Summary: OpenAI’s new research explains why language models hallucinate. The findings show how improved evaluations can enhance AI reliability, honesty, and safety. AI Summary and Description: Yes Summary: The text discusses OpenAI’s research on the phenomenon of hallucination in language models, offering insights into…

  • OpenAI : GPT-5 bio bug bounty call

    Source URL: https://openai.com/gpt-5-bio-bug-bounty Source: OpenAI Title: GPT-5 bio bug bounty call Feedly Summary: OpenAI invites researchers to its Bio Bug Bounty. Test GPT-5’s safety with a universal jailbreak prompt and win up to $25,000. AI Summary and Description: Yes Summary: OpenAI’s initiative invites researchers to participate in its Bio Bug Bounty program, focusing on testing…

  • Slashdot: Meta Changes Teen AI Chatbot Responses as Senate Begins Probe Into ‘Romantic’ Conversations

    Source URL: https://tech.slashdot.org/story/25/08/29/2116246/meta-changes-teen-ai-chatbot-responses-as-senate-begins-probe-into-romantic-conversations Source: Slashdot Title: Meta Changes Teen AI Chatbot Responses as Senate Begins Probe Into ‘Romantic’ Conversations Feedly Summary: AI Summary and Description: Yes Summary: Meta is instituting temporary limitations on its AI chatbots for teenage users to safeguard them from engaging in inappropriate conversations. The adjustments aim to redirect conversations away from…

  • OpenAI : OpenAI and Anthropic share findings from a joint safety evaluation

    Source URL: https://openai.com/index/openai-anthropic-safety-evaluation Source: OpenAI Title: OpenAI and Anthropic share findings from a joint safety evaluation Feedly Summary: OpenAI and Anthropic share findings from a first-of-its-kind joint safety evaluation, testing each other’s models for misalignment, instruction following, hallucinations, jailbreaking, and more—highlighting progress, challenges, and the value of cross-lab collaboration. AI Summary and Description: Yes Summary:…

  • Slashdot: Parents Sue OpenAI Over ChatGPT’s Role In Son’s Suicide

    Source URL: https://yro.slashdot.org/story/25/08/26/1958256/parents-sue-openai-over-chatgpts-role-in-sons-suicide?utm_source=rss1.0mainlinkanon&utm_medium=feed Source: Slashdot Title: Parents Sue OpenAI Over ChatGPT’s Role In Son’s Suicide Feedly Summary: AI Summary and Description: Yes Summary: The text reports on a tragic event involving a teen’s suicide, raising critical concerns about the limitations of AI safety features in chatbots like ChatGPT. The incident highlights significant challenges in ensuring…

  • The Cloudflare Blog: Block unsafe prompts targeting your LLM endpoints with Firewall for AI

    Source URL: https://blog.cloudflare.com/block-unsafe-llm-prompts-with-firewall-for-ai/ Source: The Cloudflare Blog Title: Block unsafe prompts targeting your LLM endpoints with Firewall for AI Feedly Summary: Cloudflare’s AI security suite now includes unsafe content moderation, integrated into the Application Security Suite via Firewall for AI. AI Summary and Description: Yes Summary: The text discusses the launch of Cloudflare’s Firewall for…