Cloud Blog: Audit smarter: Introducing Google Cloud’s Recommended AI Controls framework

Source URL: https://cloud.google.com/blog/products/identity-security/audit-smarter-introducing-our-recommended-ai-controls-framework/
Source: Cloud Blog
Title: Audit smarter: Introducing Google Cloud’s Recommended AI Controls framework

Feedly Summary: As organizations build new generative AI applications and AI agents to automate business workflows, security and risk management leaders face a new set of governance challenges. The complex, often opaque nature of AI models and agents, coupled with their reliance on vast datasets and potential for autonomous action, creates an urgent need to apply better governance, risk, and compliance (GRC) controls.
Today’s standard compliance practices struggle to keep pace with AI and leave critical questions unanswered, including:

How do we prove our AI systems operate in line with internal policies and evolving regulations?

How can we verify that data access controls are consistently enforced across the entire AI lifecycle, from training to inference to large-scale production?

What is the mechanism for demonstrating the integrity of our models and the sensitive data they handle?

We need more than manual checks to answer these questions, which is why Google Cloud has developed an automated approach that is scalable and evidence-based: the Recommended AI Controls framework, available now as a standalone service and as part of Security Command Center.
Developed by Google Cloud security experts and validated by our Office of the CISO, this prebuilt framework incorporates best practices for securing AI systems, and uses industry standards including the NIST AI Risk Management Framework and the Cyber Risk Institute (CRI) profile as baselines. Our framework provides a direct path for organizations to assess, monitor, and audit the cloud-native security and compliance posture of their generative AI workloads on Google Cloud.


The challenge of auditing a modern AI workload
A typical generative AI workload is a complex ecosystem. It integrates AI-specific platforms like Vertex AI with foundational platform services that include Cloud Storage, Identity and Access Management (IAM), Secret Manager, Cloud Logging, and VPC Networks. 
Google Cloud’s AI Protection provides full lifecycle safety and security capabilities for AI workloads, from development and training to runtime and large-scale production. Securing AI workloads is not enough, however: it is also paramount to audit whether they adhere to compliance requirements, to define controls for AI assets, and to monitor for drift. To that end, Google Cloud has taken a more holistic approach to defining best practices for platform components.
Below is an example of an AI workload:

Foundation components of AI workloads.

How the Recommended AI Controls framework can help audit AI workloads
Audit Manager helps you identify compliance issues earlier in your AI compliance and audit process by integrating auditing directly into your operational workflows. Here’s how you can move from manual checklists to automated assurance for your generative AI workloads:

Establish your security controls baseline. Audit Manager provides a baseline for auditing your generative AI workloads. This baseline is built on industry best practices and frameworks to give you a clear, traceable directive for your audit.

Understand control responsibilities. Aligned with Google’s shared fate approach, the framework can help you understand the responsibility for each control — what you manage versus what the cloud platform provides — so you can focus your efforts effectively.

Run the audit with automated evidence collection. Evaluate your generative AI workloads against industry-standard technical controls in a simple, automated manner. Audit Manager can reduce manual audit preparation by automatically collecting evidence relevant to the defined controls for your Vertex AI usage and supporting services.

Assess findings and remediate. The audit report will highlight control violations and deviations from recommended best practices. This can help your teams perform timely remediation before minor issues escalate into significant risks.

Create and share reports. Generate and share comprehensive, evidence-backed reports with a single click, which can support continuous compliance monitoring efforts with internal stakeholders and external auditors.

Enable continuous monitoring. Move beyond point-in-time snapshots. Establish a consistent methodology for ongoing compliance by scheduling regular assessments. This allows you to continuously monitor AI model usage, permissions, and configurations against best practices, and can help maintain a strong GRC posture over time (a monitoring sketch follows below).
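As an illustrative complement to scheduled assessments, here is a minimal sketch that uses the Security Command Center Python client to list active findings in an organization. This is one assumed way to wire up ongoing monitoring, not a feature of Audit Manager itself; the organization ID is a placeholder.

```python
# Hedged sketch: poll Security Command Center for active findings in the
# organization that hosts your generative AI workloads.
# Requires: pip install google-cloud-securitycenter
from google.cloud import securitycenter_v1

ORG_ID = "123456789012"  # placeholder organization ID

client = securitycenter_v1.SecurityCenterClient()

# "sources/-" aggregates findings across all sources in the organization.
parent = f"organizations/{ORG_ID}/sources/-"

response = client.list_findings(
    request={
        "parent": parent,
        "filter": 'state="ACTIVE"',  # SCC finding filter syntax
    }
)

for result in response:
    finding = result.finding
    print(finding.category, finding.resource_name, finding.event_time)
```

Running a script like this on a schedule (for example, from Cloud Scheduler) gives you a point of comparison between successive assessment runs.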

Inside the Recommended AI Controls framework
The framework provides controls specifically designed for generative AI workloads, mapped across critical security domains. Crucially, these high-level principles are backed by auditable, technical checks linked directly to data sources from Vertex AI and its supporting Google Cloud services.
Here are a few examples of the controls included:

Access control:

Disable automatic IAM grants for default service accounts: This control prevents default service accounts from automatically receiving excessive IAM permissions when they are created.

Disable root access on new Vertex AI Workbench user-managed notebooks and instances: This boolean constraint, when enforced, prevents newly created Vertex AI Workbench user-managed notebooks and instances from enabling root access. By default, root access is enabled. (A sketch of enforcing both constraints programmatically follows this list.)
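The sketch below is a minimal, hedged example of enforcing these two boolean constraints with the Org Policy v2 Python client; the organization ID is a placeholder, and the constraint names follow Google Cloud’s public organization policy constraint catalog.

```python
# Sketch: enforce two boolean organization policy constraints.
# Requires: pip install google-cloud-org-policy
from google.cloud import orgpolicy_v2

ORG_ID = "123456789012"  # placeholder organization ID

client = orgpolicy_v2.OrgPolicyClient()

for constraint in (
    "iam.automaticIamGrantsForDefaultServiceAccounts",
    "ainotebooks.disableRootAccess",
):
    policy = orgpolicy_v2.Policy(
        name=f"organizations/{ORG_ID}/policies/{constraint}",
        spec=orgpolicy_v2.PolicySpec(
            rules=[orgpolicy_v2.PolicySpec.PolicyRule(enforce=True)]
        ),
    )
    # create_policy fails if a policy already exists for the constraint;
    # use update_policy in that case.
    client.create_policy(
        request={"parent": f"organizations/{ORG_ID}", "policy": policy}
    )
```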

Data controls:

Customer-Managed Encryption Keys (CMEK): Google Cloud offers organization policy constraints to help ensure CMEK usage across an organization. Using Cloud KMS CMEK gives you ownership and control of the keys that protect your data at rest in Google Cloud (see the sketch after this list).

Configure data access control lists: Apply data access control lists, also known as access permissions, to local and remote file systems, databases, and applications, and customize them based on a user’s need to know.
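As a hedged illustration of the CMEK control, the following sketch sets a Cloud KMS key as the default encryption key on a Cloud Storage bucket; the project, bucket, and key resource names are placeholders, and the Cloud Storage service agent needs the Encrypter/Decrypter role on the key.

```python
# Sketch: protect a bucket's objects at rest with a customer-managed key.
# Requires: pip install google-cloud-storage
from google.cloud import storage

PROJECT_ID = "my-project"       # placeholder
BUCKET_NAME = "my-ai-datasets"  # placeholder
KMS_KEY = (  # placeholder Cloud KMS key resource name
    "projects/my-project/locations/us/keyRings/ai-keys/cryptoKeys/dataset-key"
)

client = storage.Client(project=PROJECT_ID)
bucket = client.get_bucket(BUCKET_NAME)

# New objects written without an explicit key will use this CMEK.
bucket.default_kms_key_name = KMS_KEY
bucket.patch()

print(f"Default CMEK for {BUCKET_NAME}: {bucket.default_kms_key_name}")
```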

System and information integrity:

Vulnerability scanning: Our Artifact Analysis service scans for vulnerabilities in images and packages in Artifact Registry (see the sketch below).
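As one way to consume scan results programmatically, this sketch lists vulnerability occurrences for a project through the Container Analysis client’s Grafeas surface; the project ID is a placeholder.

```python
# Sketch: list vulnerability occurrences produced by Artifact Analysis scans.
# Requires: pip install google-cloud-containeranalysis
from google.cloud.devtools import containeranalysis_v1

PROJECT_ID = "my-project"  # placeholder

ca_client = containeranalysis_v1.ContainerAnalysisClient()
# The Grafeas client exposes the notes/occurrences API surface.
grafeas_client = ca_client.get_grafeas_client()

occurrences = grafeas_client.list_occurrences(
    request={
        "parent": f"projects/{PROJECT_ID}",
        "filter": 'kind="VULNERABILITY"',
    }
)

for occ in occurrences:
    vuln = occ.vulnerability
    print(occ.resource_uri, vuln.severity, vuln.short_description)
```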

Audit and accountability:

Audit and accountability policy and procedures requirements: Google Cloud services write audit log entries that track who did what, where, and when with Google Cloud resources (see the sketch below).
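A minimal sketch of reading these entries with the Cloud Logging Python client follows; the project ID is a placeholder, and scoping the filter to aiplatform.googleapis.com is an assumption about which service’s activity you want to review.

```python
# Sketch: read recent Admin Activity audit log entries for Vertex AI.
# Requires: pip install google-cloud-logging
from google.cloud import logging

PROJECT_ID = "my-project"  # placeholder

client = logging.Client(project=PROJECT_ID)

# Admin Activity audit logs record who did what, where, and when.
log_filter = (
    f'logName="projects/{PROJECT_ID}/logs/cloudaudit.googleapis.com%2Factivity"'
    ' AND protoPayload.serviceName="aiplatform.googleapis.com"'
)

for entry in client.list_entries(filter_=log_filter, max_results=20):
    # For audit logs, the payload is a dict-like view of the AuditLog proto.
    payload = entry.payload or {}
    print(
        entry.timestamp,
        payload.get("methodName"),
        payload.get("authenticationInfo", {}).get("principalEmail"),
    )
```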

Configuration management:

Restrict resource service usage: This constraint helps ensure that only customer-approved Google Cloud services are used in the right places. For example, production and highly sensitive folders can have a list of Google Cloud services approved to store data, while a sandbox folder may have a more permissive list of services, with accompanying data security controls to prevent data exfiltration in the event of a breach (see the sketch below).
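A minimal sketch of applying this list constraint to a folder with the Org Policy v2 client follows; the folder ID and the allow-listed services are placeholder assumptions.

```python
# Sketch: allow only approved services in a production folder via the
# gcp.restrictServiceUsage list constraint.
# Requires: pip install google-cloud-org-policy
from google.cloud import orgpolicy_v2

FOLDER_ID = "987654321098"  # placeholder folder ID

client = orgpolicy_v2.OrgPolicyClient()

policy = orgpolicy_v2.Policy(
    name=f"folders/{FOLDER_ID}/policies/gcp.restrictServiceUsage",
    spec=orgpolicy_v2.PolicySpec(
        rules=[
            orgpolicy_v2.PolicySpec.PolicyRule(
                values=orgpolicy_v2.PolicySpec.PolicyRule.StringValues(
                    allowed_values=[  # placeholder allow list
                        "aiplatform.googleapis.com",
                        "storage.googleapis.com",
                        "logging.googleapis.com",
                    ]
                )
            )
        ]
    ),
)

client.create_policy(
    request={"parent": f"folders/{FOLDER_ID}", "policy": policy}
)
```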

How to automate your AI audit in three steps
Security and compliance teams can immediately use this framework to move from manual checklists to automated, continuous assurance.

Select the framework: In the Google Cloud console, navigate to Audit Manager and select Google Recommended AI Controls framework from the library.

Define the scope: Specify the Google Cloud projects, folders, or organization where your generative AI workloads are deployed. Audit Manager automatically identifies the relevant resources within that scope.

Run the assessment: Initiate an audit. Audit Manager collects evidence from the relevant services (including Vertex AI, IAM, and Cloud Storage) against the controls. The result is a detailed report showing your compliance status for each control, complete with direct links to the collected evidence.

Automate your AI assurance today
You can access Audit Manager directly from your Google Cloud console: navigate to the Compliance tab and select Audit Manager. For a comprehensive guide to using Audit Manager, please refer to our detailed product documentation.
We encourage you to share your feedback on this service to help us improve Audit Manager’s user experience.

AI Summary and Description: Yes

Summary: The text discusses the governance challenges organizations face while implementing generative AI applications, emphasizing the need for improved governance, risk, and compliance (GRC) frameworks. It highlights Google Cloud’s automated Recommended AI Controls framework, which aids organizations in ensuring compliance, secure practices, and continuous monitoring of AI workloads.

Detailed Description: The text presents several critical themes related to AI security and compliance, particularly in the context of generative AI workloads. Here are the key points:

– **Governance Challenges**: Organizations building generative AI applications encounter new governance and compliance challenges due to the complex nature of AI systems and their reliance on vast datasets.

– **Need for Enhanced GRC Controls**: Traditional compliance practices are inadequate for the fast-evolving AI landscape, necessitating a robust mechanism for proving adherence to internal policies and regulations.

– **Google Cloud’s Recommended AI Controls Framework**:
  – Developed by Google Cloud security experts, this framework provides a scalable, automated solution for organizations to assess and ensure compliance over the AI lifecycle.
  – It incorporates industry standards like the NIST AI Risk Management Framework and the Cyber Risk Institute profile, helping organizations monitor the security posture of their generative AI workloads.

– **Key Features of the Framework**:
  – **Audit Manager Integration**: An early detection tool for compliance issues integrated into operational workflows.
  – **Security Control Baseline**: Establishes a standard for auditing generative AI workloads.
  – **Defined Control Responsibilities**: Clarifies which controls are managed by organizations versus those provided by Google Cloud.
  – **Automated Evidence Collection**: Simplifies the audit process by automatically gathering evidence for compliance checks.
  – **Comprehensive Reporting**: Generates reports that help in continuous compliance monitoring and remediation of identified issues.
  – **Continuous Monitoring**: Allows organizations to establish ongoing assessments to maintain compliance over time.

– **Control Types Included**: The framework includes specific controls for access management, data security, system integrity, and configuration management, such as:
  – Disabling excessive permissions for default service accounts.
  – Implementing Customer-Managed Encryption Keys (CMEK) for data protection.
  – Establishing policies for tracking actions taken with Google Cloud resources.

– **Automation Steps for AI Audit**:
  – **Selecting the Framework**: Access Audit Manager in the Google Cloud console.
  – **Defining Scope**: Specify the Google Cloud environments where generative AI workloads are deployed.
  – **Running the Assessment**: Audit Manager conducts evaluations and provides a comprehensive report on compliance status and evidence links.

This detailed approach enables organizations to transition towards a more secure and compliant framework for handling generative AI workloads, responding to modern governance needs effectively. The information is particularly relevant to professionals involved in AI-related governance, risk management, and compliance, making it a significant development in the landscape of AI security.