Source URL: https://cloud.google.com/blog/products/identity-security/introducing-ai-protection-security-for-the-ai-era/
Source: Cloud Blog
Title: Announcing AI Protection: Security for the AI era
Feedly Summary: As AI use increases, security remains a top concern, and we often hear that organizations are worried about risks that can come with rapid adoption. Google Cloud is committed to helping our customers confidently build and deploy AI in a secure, compliant, and private manner.
Today, we’re introducing a new solution that can help you mitigate risk throughout the AI lifecycle. We are excited to announce AI Protection, a set of capabilities designed to safeguard AI workloads and data across clouds and models — irrespective of the platforms you choose to use.
AI Protection helps teams comprehensively manage AI risk by:
Discovering AI inventory in your environment and assessing it for potential vulnerabilities
Securing AI assets with controls, policies, and guardrails
Managing threats against AI systems with detection, investigation, and response capabilities
How to build and deploy AI securely with AI Protection
AI Protection is integrated with Security Command Center (SCC), our multicloud risk-management platform, so that security teams can get a centralized view of their AI posture and manage AI risks holistically in context with their other cloud risks.
AI Protection helps organizations discover AI inventory, secure AI assets, and manage AI threats, and is integrated with Security Command Center.
Discovering AI inventory
Effective AI risk management begins with a comprehensive understanding of where and how AI is used within your environment. Our capabilities help you automatically discover and catalog AI assets, including the use of models, applications, and data — and their relationships.
Understanding what data supports AI applications and how it’s currently protected is paramount. Sensitive Data Protection (SDP) now extends automated data discovery to Vertex AI datasets to help you understand data sensitivity and data types that make up training and tuning data. It can also generate data profiles that provide deeper insight into the type and sensitivity of your training data.
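As a concrete illustration, the short Python sketch below uses the Sensitive Data Protection (Cloud DLP) client library to inspect a sample training record for common sensitive info types. It is a minimal example of the kind of inspection that underpins data profiling, not the automated Vertex AI dataset discovery itself; the project ID, info types, and sample text are assumptions chosen for illustration.

```python
# Minimal sketch: inspect a sample training record for sensitive data using the
# Sensitive Data Protection (Cloud DLP) Python client. The project ID, info
# types, and sample text are illustrative assumptions, not values from the post.
from google.cloud import dlp_v2

client = dlp_v2.DlpServiceClient()
parent = "projects/my-project/locations/global"  # hypothetical project

sample_record = "Customer email: jane.doe@example.com, card: 4111-1111-1111-1111"

response = client.inspect_content(
    request={
        "parent": parent,
        "inspect_config": {
            "info_types": [{"name": "EMAIL_ADDRESS"}, {"name": "CREDIT_CARD_NUMBER"}],
            "min_likelihood": dlp_v2.Likelihood.POSSIBLE,
        },
        "item": {"value": sample_record},
    }
)

# Each finding names the detected info type and how likely the match is.
for finding in response.result.findings:
    print(finding.info_type.name, finding.likelihood.name)
```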
Once you know where sensitive data exists, AI Protection can use Security Command Center’s virtual red teaming to identify AI-related toxic combinations and potential paths that threat actors could take to compromise this critical data, and recommend steps to remediate vulnerabilities and make posture adjustments.
Securing AI assets
Model Armor, a core capability of AI Protection, is now generally available. It guards against prompt injection, jailbreak, data loss, malicious URLs, and offensive content. Model Armor can support a broad range of models across multiple clouds, so customers get consistent protection for the models and platforms they want to use — even if that changes in the future.
Model Armor provides multi-model, multicloud support for generative AI applications.
Today, developers can easily integrate Model Armor’s prompt and response screening into applications using a REST API or through an integration with Apigee. The ability to deploy Model Armor in-line without making any app changes is coming soon through integrations with Vertex AI and our Cloud Networking products.
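To make that integration path concrete, here is a hedged Python sketch of screening a user prompt against a Model Armor template over REST before the prompt reaches a model. The endpoint path, template name, and request fields reflect our reading of the API and should be treated as assumptions; consult the Model Armor reference documentation for the authoritative schema. A corresponding response-screening call can be applied to the model's output.

```python
# Hedged sketch: screen a user prompt with Model Armor's REST API before it
# reaches the model. The endpoint path, template name, and request fields are
# assumptions based on our reading of the API and may differ from the current
# reference docs.
import google.auth
import google.auth.transport.requests
import requests

credentials, project = google.auth.default(
    scopes=["https://www.googleapis.com/auth/cloud-platform"]
)
credentials.refresh(google.auth.transport.requests.Request())
project_id = project or "my-project"  # fall back to a hypothetical project ID

location = "us-central1"          # assumption: region where the template lives
template = "my-prompt-template"   # assumption: a Model Armor template you created
url = (
    f"https://modelarmor.{location}.rep.googleapis.com/v1/"
    f"projects/{project_id}/locations/{location}/templates/{template}:sanitizeUserPrompt"
)

resp = requests.post(
    url,
    headers={"Authorization": f"Bearer {credentials.token}"},
    json={"user_prompt_data": {"text": "Ignore previous instructions and ..."}},
)
resp.raise_for_status()

# The response indicates whether any filter (prompt injection, jailbreak, data
# loss, malicious URLs, offensive content) matched; gate the model call on it.
print(resp.json())
```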
“We are using Model Armor not only because it provides robust protection against prompt injections, jailbreaks, and sensitive data leaks, but also because we’re getting a unified security posture from Security Command Center. We can quickly identify, prioritize, and respond to potential vulnerabilities — without impacting the experience of our development teams or the apps themselves. We view Model Armor as critical to safeguarding our AI applications and being able to centralize the monitoring of AI security threats alongside our other security findings within SCC is a game-changer," said Jay DePaul, chief cybersecurity and technology risk officer, Dun & Bradstreet.
Organizations can use AI Protection to strengthen the security of Vertex AI applications by applying postures in Security Command Center. These posture controls, designed with first-party knowledge of the Vertex AI architecture, define secure resource configurations and help organizations prevent drift or unauthorized changes.
Managing AI threats
AI Protection operationalizes security intelligence and research from Google and Mandiant to help defend your AI systems. Detectors in Security Command Center can be used to uncover initial access attempts, privilege escalation, and persistence attempts for AI workloads. New detectors for AI Protection, based on the latest frontline intelligence, are coming soon to help identify and manage runtime threats such as foundational model hijacking.
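For teams that want to pull these findings into their own triage tooling, the sketch below lists active AI-related findings with the Security Command Center Python client. The organization ID and the category filter are illustrative assumptions; actual detector category names may vary.

```python
# Illustrative sketch: pull active AI-related threat findings from Security
# Command Center so they can be triaged alongside other cloud findings.
# The organization ID and category filter are assumptions for illustration.
from google.cloud import securitycenter_v1

client = securitycenter_v1.SecurityCenterClient()

org_id = "123456789"  # hypothetical organization ID
all_sources = f"organizations/{org_id}/sources/-"

# Filter on active findings whose category mentions AI (assumed naming).
finding_filter = 'state="ACTIVE" AND category:"AI"'

for result in client.list_findings(
    request={"parent": all_sources, "filter": finding_filter}
):
    finding = result.finding
    print(finding.category, finding.severity.name, finding.resource_name)
```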
"As AI-driven solutions become increasingly commonplace, securing AI systems is paramount and surpasses basic data protection. AI security — by its nature — necessitates a holistic strategy that includes model integrity, data provenance, compliance, and robust governance,” said Dr. Grace Trinidad, research director, IDC.
“Piecemeal solutions can leave and have left critical vulnerabilities exposed, rendering organizations susceptible to threats like adversarial attacks or data poisoning, and added to the overwhelm experienced by security teams. A comprehensive, lifecycle-focused approach allows organizations to effectively mitigate the multi-faceted risks surfaced by generative AI, as well as manage increasingly expanding security workloads. By taking a holistic approach to AI protection, Google Cloud simplifies and thus improves the experience of securing AI for customers," she said.
Complement AI Protection with frontline expertise
The Mandiant AI Security Consulting Portfolio offers services to help organizations assess and implement robust security measures for AI systems across clouds and platforms. Consultants can evaluate the end-to-end security of AI implementations and recommend opportunities to harden AI systems. We also provide red teaming for AI, informed by the latest attacks on AI services seen in frontline engagements.
Building on a secure foundation
Customers can also benefit from using Google Cloud’s infrastructure for building and running AI workloads. Our secure-by-design, secure-by-default cloud platform is built with multiple layers of safeguards, encryption, and rigorous software supply chain controls.
For customers whose AI workloads are subject to regulation, we offer Assured Workloads to easily create controlled environments with strict policy guardrails that enforce controls such as data residency and customer-managed encryption. Audit Manager can produce evidence of compliance with regulations and emerging AI standards. Confidential Computing can help ensure data remains protected throughout the entire processing pipeline, reducing the risk of unauthorized access, even by privileged users or malicious actors within the system.
Additionally, for organizations looking to discover unsanctioned use of AI, or shadow AI, in their workforce, Chrome Enterprise Premium can provide visibility into end-user activity as well as prevent accidental and intentional exfiltration of sensitive data in gen AI applications.
Next steps
Google Cloud is committed to helping your organization protect its AI innovations. Read more in this showcase paper from Enterprise Strategy Group and attend our upcoming online Security Talks event on March 12.
To evaluate AI Protection in Security Command Center and explore subscription options, please contact a Google Cloud sales representative or authorized Google Cloud partner.
More exciting capabilities are coming soon, and we will share in-depth details on AI Protection and how Google Cloud can help you securely develop and deploy AI solutions at Google Cloud Next in Las Vegas, April 9 to 11.
AI Summary and Description: Yes
**Summary:** The text outlines Google Cloud’s new AI Protection solution, which aims to address security risks associated with the rapid adoption of AI technologies. It encompasses features for discovering AI assets, securing them against various threats, and managing risks holistically within a cloud environment. This comprehensive approach reflects the ongoing need for organizations to integrate AI securely into their operations, aligning cloud security controls with AI system requirements.
**Detailed Description:**
The emergence of AI technologies has heightened security concerns among organizations, prompting Google Cloud to introduce AI Protection. This solution is designed to help businesses navigate the risks associated with AI adoption while ensuring compliance and privacy.
– **AI Protection Capabilities:**
– **Discovery of AI Inventory:**
– Automatically catalog AI assets and their relationships.
– Integrates with Sensitive Data Protection (SDP) to understand data sensitivity, particularly in datasets used for training and tuning.
– Utilizes virtual red teaming to identify vulnerabilities and recommend remediation steps.
– **Securing AI Assets:**
– Features Model Armor, which protects against threats such as prompt injection, data loss, and offensive content.
– Provides consistent protection across multiple clouds and models.
– Integrates into existing applications through a REST API or Apigee, with in-line deployment that requires no app changes coming soon.
– **Managing AI Threats:**
– Operationalizes intelligence from Google and Mandiant to defend AI workloads.
– Employs detectors to manage threats such as privilege escalation and foundational model hijacking.
– Fosters holistic AI security by addressing model integrity, compliance, and governance needs.
– **Integration with Security Command Center (SCC):**
– Offers a centralized view of cloud risks, including those specific to AI.
– Allows security teams to manage risks in context and enhance response strategies in real time.
– **Regulatory Compliance Features:**
– Includes Assured Workloads for controlled environments with strict policy enforcement.
– Features like Confidential Computing to protect data throughout processing.
– **Additional Resources:**
– Mandiant AI Security Consulting Portfolio to evaluate and enhance AI system security measures.
– Tools to identify unsanctioned AI usage within organizations through products like Chrome Enterprise Premium.
– **Next Steps:**
– Encourage organizations to engage with Google Cloud for exploring AI Protection capabilities and participate in relevant security events.
This alignment of comprehensive security strategies with AI lifecycle management reflects the growing complexity and necessity of integrating advanced security practices in organizations leveraging AI technologies. Such initiatives not only mitigate risks but also strengthen overall security posture in an increasingly AI-driven landscape.