Tag: safety

  • Cloud Blog: SandboxAQ: Accelerating drug discovery through cloud integration

    Source URL: https://cloud.google.com/blog/products/infrastructure-modernization/sandboxaq-speeds-up-drug-discovery-with-the-cloud/
    Feedly Summary: The traditional drug discovery process involves massive capital investment, prolonged timelines, and daunting failure rates. From initial research to obtaining regulatory approval, bringing a new drug to market can take decades. During this time, many drug…

  • Business Wire: New Cloud Security Alliance Certification Program Equips Professionals With Skills to Ensure Responsible and Safe Development and Management of Artificial Intelligence (AI)

    Source URL: https://www.businesswire.com/news/home/20250428414583/en/New-Cloud-Security-Alliance-Certification-Program-Equips-Professionals-With-Skills-to-Ensure-Responsible-and-Safe-Development-and-Management-of-Artificial-Intelligence-AI
    Feedly Summary: New Cloud Security Alliance Certification Program Equips Professionals With Skills to Ensure Responsible and Safe Development and Management of Artificial Intelligence (AI)…

  • Cloud Blog: From insight to action: M-Trends, agentic AI, and how we’re boosting defenders at RSAC 2025

    Source URL: https://cloud.google.com/blog/products/identity-security/from-insight-to-action-m-trends-agentic-ai-and-how-were-boosting-defenders-at-rsac-2025/
    Feedly Summary: Cybersecurity is facing a unique moment: AI-enhanced threat intelligence, products, and services are poised to give defenders an advantage over the threats they face, an advantage that has proven elusive until now. …

  • CSA: What Is the New Trusted AI Safety Knowledge Certification?

    Source URL: https://cloudsecurityalliance.org/articles/why-we-re-launching-a-trusted-ai-safety-knowledge-certification-program
    Feedly Summary: The text discusses the introduction of the Trusted AI Safety Knowledge certification program developed by the Cloud Security Alliance and Northeastern University. It emphasizes the importance of AI safety and security…

  • CSA: Understanding Zero Trust Security Models

    Source URL: https://cloudsecurityalliance.org/articles/understanding-zero-trust-security-models-a-beginners-guide
    Feedly Summary: The text provides an in-depth exploration of Zero Trust Security Models, emphasizing their relevance in the contemporary cybersecurity landscape. As cyber threats evolve, adopting a Zero Trust approach becomes essential for organizations looking to safeguard their… (A minimal per-request verification sketch is included at the end of this digest.)

  • Microsoft Security Blog: New whitepaper outlines the taxonomy of failure modes in AI agents

    Source URL: https://www.microsoft.com/en-us/security/blog/2025/04/24/new-whitepaper-outlines-the-taxonomy-of-failure-modes-in-ai-agents/
    Feedly Summary: Read the new whitepaper from the Microsoft AI Red Team to better understand the taxonomy of failure modes in agentic AI. The post New whitepaper outlines the taxonomy of failure modes in AI agents…

  • Schneier on Security: Regulating AI Behavior with a Hypervisor

    Source URL: https://www.schneier.com/blog/archives/2025/04/regulating-ai-behavior-with-a-hypervisor.html
    Feedly Summary: Interesting research: “Guillotine: Hypervisors for Isolating Malicious AIs.” Abstract: As AI models become more embedded in critical sectors like finance, healthcare, and the military, their inscrutable behavior poses ever-greater risks to society. To mitigate this risk, we propose Guillotine, a… (An illustrative, OS-level isolation sketch is included at the end of this digest.)

  • The Register: Ex-NSA chief warns AI devs: Don’t repeat infosec’s early-day screwups

    Source URL: https://www.theregister.com/2025/04/23/exnsa_boss_ai/
    Feedly Summary: Bake in security now or pay later, says Mike Rogers. AI engineers should take a lesson from the early days of cybersecurity and bake safety and security into their models during development, rather than trying to…
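
  • Sketch: per-request verification in a Zero Trust model

    Referenced from the CSA "Understanding Zero Trust Security Models" entry above. This is a minimal, illustrative sketch of the "never trust, always verify" idea: deny by default and check identity, device posture, and a least-privilege policy on every request. The function names, policy fields, and resources below are assumptions made for this sketch, not details from the CSA article.

      # Illustrative zero-trust style access check: deny by default; verify
      # identity, device posture, and least-privilege policy on every request.
      # Names and fields are assumptions for this sketch only.
      from dataclasses import dataclass

      @dataclass
      class AccessRequest:
          user: str
          mfa_passed: bool          # identity verified with a second factor
          device_compliant: bool    # device posture (patched, managed, encrypted)
          resource: str
          action: str

      # Per-resource least-privilege policy: which users may perform which actions.
      POLICY = {
          "payroll-db": {"alice": {"read"}},
          "build-server": {"alice": {"read", "write"}, "bob": {"read"}},
      }

      def authorize(req: AccessRequest) -> bool:
          """Return True only if every check passes; otherwise deny (default-deny)."""
          if not req.mfa_passed or not req.device_compliant:
              return False
          allowed_actions = POLICY.get(req.resource, {}).get(req.user, set())
          return req.action in allowed_actions

      if __name__ == "__main__":
          ok = authorize(AccessRequest("alice", True, True, "payroll-db", "read"))
          blocked = authorize(AccessRequest("bob", True, False, "build-server", "read"))
          print(ok, blocked)  # True False: a posture failure blocks even an allowed user

    The point of the default-deny structure is that a request succeeds only when every check passes; nothing is trusted because of network location or prior sessions.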
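  • Sketch: isolating an untrusted AI workload

    Referenced from the Schneier on Security "Regulating AI Behavior with a Hypervisor" entry above. The Guillotine paper proposes isolation at the hypervisor level; the sketch below does not implement that design. It is only a weaker, OS-level analogue of the same idea: run an untrusted workload in a separate process with hard CPU and memory limits and a wall-clock kill switch (POSIX only). All names and limit values are illustrative assumptions.

      # OS-level analogue of the isolation idea: run an untrusted AI workload in a
      # separate process with hard CPU/memory limits and a timeout "kill switch".
      # This sketches the concept only; it is not the hypervisor-level Guillotine design.
      import resource
      import subprocess
      import sys

      def _apply_limits() -> None:
          # Runs in the child process just before exec (POSIX only).
          resource.setrlimit(resource.RLIMIT_CPU, (5, 5))                     # 5 s of CPU time
          resource.setrlimit(resource.RLIMIT_AS, (512 * 2**20, 512 * 2**20))  # 512 MiB address space

      def run_untrusted(script: str, wall_clock_timeout: float = 10.0) -> subprocess.CompletedProcess:
          """Execute `script` in a fresh interpreter under resource limits."""
          return subprocess.run(
              [sys.executable, "-c", script],
              preexec_fn=_apply_limits,     # apply rlimits inside the child
              capture_output=True,
              text=True,
              timeout=wall_clock_timeout,   # hard wall-clock cutoff
          )

      if __name__ == "__main__":
          result = run_untrusted("print('model step finished')")
          print(result.returncode, result.stdout.strip())

    The limits are applied in the child via preexec_fn so the parent (the "supervisor") is never constrained; exceeding the CPU or memory limit kills only the untrusted process.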