Tag: safety evaluations

  • Slashdot: Anthropic Revokes OpenAI’s Access To Claude Over Terms of Service Violation

    Source URL: https://developers.slashdot.org/story/25/08/01/2237220/anthropic-revokes-openais-access-to-claude-over-terms-of-service-violation
    Source: Slashdot
    Feedly Summary:
    AI Summary and Description: Yes
    Summary: The text discusses Anthropic revoking OpenAI’s API access due to violations of its terms of service, emphasizing the competitive dynamics within AI development. This situation highlights the importance of compliance with…

  • OpenAI : Deep research System Card

    Source URL: https://openai.com/index/deep-research-system-card
    Source: OpenAI
    Feedly Summary: This report outlines the safety work carried out prior to releasing deep research, including external red teaming, frontier risk evaluations according to our Preparedness Framework, and an overview of the mitigations we built in to address key risk areas.
    AI Summary and Description:…

  • OpenAI : OpenAI o3-mini System Card

    Source URL: https://openai.com/index/o3-mini-system-card
    Source: OpenAI
    Feedly Summary: This report outlines the safety work carried out for the OpenAI o3-mini model, including safety evaluations, external red teaming, and Preparedness Framework evaluations.
    AI Summary and Description: Yes
    Summary: The text discusses safety work related to the OpenAI o3-mini model, emphasizing safety evaluations…

  • Hacker News: O3-mini System Card [pdf]

    Source URL: https://cdn.openai.com/o3-mini-system-card.pdf
    Source: Hacker News
    Feedly Summary: Comments
    AI Summary and Description: Yes
    Summary: The OpenAI o3-mini System Card details the advanced capabilities, safety evaluations, and risk classifications of the OpenAI o3-mini model. This document is particularly pertinent for professionals in AI security, as it outlines significant safety measures…

  • The Register: Microsoft catapults DeepSeek R1 into Azure AI Foundry, GitHub

    Source URL: https://www.theregister.com/2025/01/30/microsoft_deepseek_azure_github/
    Source: The Register
    Feedly Summary: A distilled version for Copilot+ PCs is on the way. Microsoft has added DeepSeek R1 to Azure AI Foundry and GitHub, showing that even a lumbering tech giant can be nimble when it needs to be.…
    AI…

  • Hacker News: DeepSeek R1 Is Now Available on Azure AI Foundry and GitHub

    Source URL: https://azure.microsoft.com/en-us/blog/deepseek-r1-is-now-available-on-azure-ai-foundry-and-github/
    Source: Hacker News
    Feedly Summary: Comments
    AI Summary and Description: Yes
    Summary: The text discusses the availability of DeepSeek R1 in the Azure AI Foundry model catalog, emphasizing the model’s integration into a trusted and scalable platform for businesses. It…

  • OpenAI : Operator System Card

    Source URL: https://openai.com/index/operator-system-card
    Source: OpenAI
    Feedly Summary: Drawing from OpenAI’s established safety frameworks, this document highlights our multi-layered approach, including the model and product mitigations we’ve implemented to protect against prompt engineering and jailbreaks and to protect privacy and security, as well as detailing our external red teaming efforts, safety evaluations, and ongoing work…