Source URL: https://cloud.google.com/blog/products/business-intelligence/understanding-looker-conversational-analytics-security/
Source: Cloud Blog
Title: Chat with confidence: Unpacking security in Looker Conversational Analytics
Feedly Summary: The landscape of business intelligence is evolving rapidly, with users expecting greater self-service and natural language capabilities, powered by AI. Looker’s Conversational Analytics empowers everyone in your organization to access the wealth of information within your data. Select the data you wish to explore, ask questions in natural language just as you would ask a colleague, and quickly receive insightful answers and visualizations that are grounded in truth, thanks to Looker’s semantic layer. This intuitive approach lowers the technical barriers to data analysis and fosters a more data-driven culture across your teams.
How does this work? At its core, Conversational Analytics understands the intent behind your questions. Enhanced by Gemini models, the process involves interpreting your natural language query, generating the appropriate data retrieval logic, and presenting the results in an easy-to-understand format, often as a visualization. This process benefits from Looker’s semantic model, which simplifies complex data with predefined business metrics, so that Gemini’s AI is grounded in a reliable and consistent understanding of your data.
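The grounding step described above can be illustrated with a minimal sketch. This is not the Looker or Gemini implementation — the semantic layer here is a toy dictionary and the "query generation" is a string template; all names are hypothetical:

```python
# Illustrative sketch of the flow above: a natural-language question is
# first resolved against a semantic layer of predefined business metrics,
# and only then turned into a retrieval query. All names are hypothetical;
# this is NOT the actual Looker/Gemini implementation.

SEMANTIC_LAYER = {
    "revenue": {"view": "orders", "field": "orders.total_revenue"},
    "order count": {"view": "orders", "field": "orders.count"},
}

def ground_question(question: str) -> dict:
    """Map a question onto semantic-layer metrics (toy keyword match)."""
    matched = [m for m in SEMANTIC_LAYER if m in question.lower()]
    if not matched:
        raise ValueError("No known metric found in question")
    metric = SEMANTIC_LAYER[matched[0]]
    return {"view": metric["view"], "fields": [metric["field"]]}

def generate_query(grounding: dict) -> str:
    """Render the grounded intent as a SQL-like retrieval query."""
    return f"SELECT {', '.join(grounding['fields'])} FROM {grounding['view']}"

query = generate_query(ground_question("What was our revenue last quarter?"))
print(query)  # SELECT orders.total_revenue FROM orders
```

The key design point the sketch captures is that the model never free-associates over raw tables: every question must resolve to a metric the semantic layer already defines, which is what keeps answers consistent with governed business definitions.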
Prioritizing privacy in Gemini
The rise of powerful generative AI models like Gemini brings incredible opportunities for innovation and efficiency. But you need a responsible and secure approach to data. When users use AI tools, questions about data privacy and security are top of mind. How are prompts and data used? Are they stored? Are they used to train the model?
At Google Cloud, the privacy of your data is a fundamental priority when you use Gemini models, and we designed our data governance practices to give you control and peace of mind. Specifically:
Your prompts and outputs are safe
Google Cloud does not train models on your prompts or your company’s data.
Conversational Analytics only uses your data to answer your business questions — making data queries, creating charts and summarizations, and providing answers. We store agent metadata, such as special instructions, to improve the quality of the agent’s answers, and so you can use the same agent in multiple chat sessions. We also store chat conversations so you can pick up where you left off. Both are protected via IAM and not shared with anyone outside your organization without permission.
Your data isn’t used to train our models
The data processing workflow in Conversational Analytics involves multiple steps:
The agent sees the user’s question, identifies the specific context needed to answer it, and uses tools to retrieve useful context like sample data and column descriptions.
Using business and data context and the user question, the agent generates and executes a query to retrieve the data. The data is returned, and the resulting data table is generated.
Previously gathered information can then be used to create visualizations, text explanations, or suggested follow-up questions. Through this process, the system keeps track of the user’s question, the data samples, and the query results, to help formulate a final answer.
When the user asks a new question, such as a follow-up, the previous context of the conversation helps the agent understand the user’s new intent.
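The multi-step workflow above can be sketched as a small stateful loop. This is a hedged illustration under assumed names (the class, methods, and data are invented for clarity), not Conversational Analytics internals:

```python
# Hypothetical sketch of the workflow described above: retrieve context,
# generate and execute a query, and keep each exchange in the session
# history so follow-up questions can reuse the prior context.
# All names and data here are illustrative assumptions.

class AnalyticsAgent:
    def __init__(self):
        # Per-session memory: (question, query, result) for each turn.
        self.history = []

    def retrieve_context(self, question: str) -> dict:
        # Step 1: in the real system, tools would fetch sample data and
        # column descriptions relevant to the question.
        return {"columns": ["region", "revenue"], "sample": [("EMEA", 120)]}

    def answer(self, question: str) -> dict:
        context = self.retrieve_context(question)
        # Step 2: generate and execute a query; here a placeholder string.
        query = f"-- query derived from {question!r} over {context['columns']}"
        result = {"rows": context["sample"]}
        # Step 3: track question, query, and results for follow-ups.
        self.history.append((question, query, result))
        return result

agent = AnalyticsAgent()
agent.answer("Revenue by region?")
agent.answer("And just EMEA?")  # follow-up sees prior turns via history
print(len(agent.history))  # 2
```

The stored history is what lets the second, elliptical question ("And just EMEA?") be interpreted against the first turn's intent rather than in isolation.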
Enhancing trust and security in Conversational Analytics
To give you the confidence to rely on Conversational Analytics, we follow a comprehensive set of best practices within Google Cloud:
Leverage Looker’s semantic layer: By grounding Conversational Analytics in Looker’s semantic model, we ensure that the AI model operates on a trusted and consistent understanding of your business metrics. This not only improves the accuracy of insights but also leverages Looker’s established governance framework.
Secure data connectivity: Conversational Analytics connects to Google Cloud services like BigQuery, which have their own robust security measures and access controls. This helps ensure that your underlying data remains protected.
Use data encryption: Data transmitted to Gemini for processing is encrypted in-transit, safeguarding it from unauthorized access. Agent metadata and conversation history are also encrypted.
Continuously monitor and improve: Our teams continuously monitor the performance and security of Conversational Analytics and Gemini in Google Cloud.
Role-based access controls
In addition, Looker provides a robust role-based access control (RBAC) framework that Conversational Analytics leverages to offer granular control over who can interact with specific data. When a Looker user initiates a chat with data, Looker Conversational Analytics respects their established Looker permissions. This means they can only converse with Looker Explores to which they already have access. For instance, while a user might have permission to view two Looker Explores, an administrator could restrict conversational capabilities to only one. As conversational agents become more prevalent, users will only be able to use those agents to which they have been granted access. Agent creators can also configure the capabilities of Conversational Analytics agents, for example, limiting users to viewing charts while restricting advanced functionalities like forecasting.
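The two layers of control described above — Explore access inherited from Looker permissions, plus per-agent capability restrictions — can be sketched as a simple conjunction of checks. Everything here (user names, agent names, the capability set) is a hypothetical illustration, not Looker's actual permission model:

```python
# Hedged sketch of the RBAC behavior described above: a chat is allowed
# only against Explores the user can already access, and the agent's
# configured capabilities can narrow what they may do further.
# All names are hypothetical assumptions for illustration.

USER_EXPLORE_ACCESS = {"alice": {"orders", "inventory"}}

AGENT_CONFIG = {
    "sales_agent": {
        "explore": "orders",
        "capabilities": {"charts"},  # forecasting intentionally not granted
    },
}

def can_chat(user: str, agent: str, capability: str) -> bool:
    """Allow only if the user can access the agent's Explore AND the
    agent's configuration grants the requested capability."""
    cfg = AGENT_CONFIG[agent]
    has_explore = cfg["explore"] in USER_EXPLORE_ACCESS.get(user, set())
    return has_explore and capability in cfg["capabilities"]

print(can_chat("alice", "sales_agent", "charts"))       # True
print(can_chat("alice", "sales_agent", "forecasting"))  # False
```

Note that the check is conjunctive: existing data permissions are never widened by the agent, only potentially narrowed by its configuration.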
Innovate with confidence
We designed Gemini to be a powerful partner for your business, helping you create, analyze, and automate with Google’s most capable AI models. We’re committed to providing you this capability without compromising your data’s security or privacy, or training on your prompts or data. Because your prompts, data, and model outputs are never used to train our models, you can leverage the full potential of generative AI with confidence, knowing your data remains under your control.
By following these security principles and leveraging Google Cloud’s robust infrastructure, Conversational Analytics offers a powerful, insightful experience that is also secure and trustworthy. By making data insights accessible to everyone securely, you can unlock new levels of innovation and productivity in your organization. Enable Conversational Analytics in Looker today, and start chatting with your data with confidence.
AI Summary and Description: Yes
Summary: The text discusses the advancements in business intelligence through the use of AI and Google Cloud’s Conversational Analytics, powered by the Gemini model. It emphasizes data privacy and security, detailing how sensitive information is handled and protected during user interactions.
Detailed Description:
The provided text highlights the integration of AI into business intelligence via Google’s Looker platform, particularly focusing on the Conversational Analytics feature and the Gemini model. Below are the key points:
– **Conversational Analytics Features**:
– Users can interact with data using natural language, significantly lowering the technical barriers to data analysis.
– The platform leverages a semantic layer to ensure that data insights are accurate and reliable.
– Users can ask specific queries and receive insightful visualizations quickly.
– **Data Privacy and Security**:
– Google Cloud prioritizes data privacy, asserting that prompts and data from users are not stored or used to train AI models.
– Only relevant agent metadata is stored to enhance user experience and facilitate ongoing conversations.
– **Data Handling Workflow**:
– The process described involves identifying user intent, generating queries for data retrieval, and creating visualizations based on previous interactions.
– The framework preserves context throughout the conversations to assist users with follow-up queries.
– **Security Measures**:
– Implementation of role-based access controls (RBAC) to ensure users only access data aligned with their permissions.
– End-to-end encryption of data during transit protects against unauthorized access.
– Regular performance and security monitoring is conducted to enhance trustworthiness.
– **Innovation and Trust**:
– Google Cloud is committed to allowing users to use generative AI capabilities without risking data exposure.
– The design policies ensure that users can rely on the platform for secure analysis and decision-making.
This text is particularly significant for professionals in AI, cloud computing, and security as it provides insights into responsible AI deployment and governance in data analytics while highlighting measures that can reinforce user trust in AI systems.