Source URL: https://cloud.google.com/blog/products/ai-machine-learning/announcing-new-mistral-large-model-on-vertex-ai/
Source: Cloud Blog
Title: Announcing Mistral AI’s Large-Instruct-2411 on Vertex AI
Feedly Summary: In July, we announced the availability of Mistral AI’s models on Vertex AI: Codestral for code generation tasks, Mistral Large 2 for high-complexity tasks, and the lightweight Mistral Nemo for reasoning tasks like creative writing. Today, we’re announcing the availability of Mistral AI’s newest model on Vertex AI Model Garden: Mistral-Large-Instruct-2411 is now generally available.
Large-Instruct-2411 is an advanced dense large language model (LLM) with 123B parameters and strong reasoning, knowledge, and coding capabilities. It extends its predecessor with better long-context handling, function calling, and system prompt support. The model is ideal for complex agentic workflows that demand precise instruction following and JSON outputs, for large-context applications such as retrieval-augmented generation (RAG) that require strong adherence to the provided content, and for code generation.
You can access and deploy the new Mistral AI Large-Instruct-2411 model on Vertex AI through our Model-as-a-Service (MaaS) or self-service offering today.
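As a rough sketch of what MaaS access can look like, the snippet below calls the model through the publisher rawPredict endpoint using Application Default Credentials. The model ID (`mistral-large-2411`), the region, and the response shape are assumptions here, not confirmed values; check the model card in Model Garden and the documentation for the exact identifiers available to your project.

```python
# Minimal sketch of a chat completion call to Mistral Large on Vertex AI MaaS.
# The model ID ("mistral-large-2411"), region, and rawPredict path are assumptions;
# confirm them on the Model Garden card for your project before relying on them.
import google.auth
import requests
from google.auth.transport.requests import Request

PROJECT = "your-project-id"    # replace with your Google Cloud project ID
REGION = "us-central1"         # a region where the model is offered (assumption)
MODEL = "mistral-large-2411"   # assumed MaaS model ID for Large-Instruct-2411

# Obtain an OAuth access token from Application Default Credentials.
credentials, _ = google.auth.default(
    scopes=["https://www.googleapis.com/auth/cloud-platform"]
)
credentials.refresh(Request())

url = (
    f"https://{REGION}-aiplatform.googleapis.com/v1/projects/{PROJECT}"
    f"/locations/{REGION}/publishers/mistralai/models/{MODEL}:rawPredict"
)

payload = {
    "model": MODEL,
    "messages": [
        {"role": "user", "content": "Summarize the benefits of RAG in two sentences."}
    ],
    "temperature": 0.2,
}

response = requests.post(
    url,
    headers={"Authorization": f"Bearer {credentials.token}"},
    json=payload,
    timeout=60,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```

Because this is a pay-as-you-go MaaS endpoint, there is no infrastructure to provision: the call above is billed per request, and the same pattern works from any environment that can obtain Google Cloud credentials.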
What can you do with the new Mistral AI models on Vertex AI?
By building with Mistral’s models on Vertex AI, you can:
Select the best model for your use case: Choose from a range of Mistral AI models, including efficient options for low-latency needs and powerful models for complex tasks like agentic workflows. Vertex AI makes it easy to evaluate and select the optimal model.
Experiment with confidence: Mistral AI models are available as fully managed Model-as-a-Service on Vertex AI. You can explore Mistral AI models through simple API calls and comprehensive side-by-side evaluations within our intuitive environment.
Manage models without overhead: Simplify how you deploy the new Mistral AI models at scale with fully managed infrastructure designed for AI workloads and the flexibility of pay-as-you-go pricing.
Tune the models to your needs: In the coming weeks, you will be able to fine-tune Mistral AI’s models to create bespoke solutions, with your unique data and domain knowledge.
Craft intelligent agents: Create and orchestrate agents powered by Mistral AI models, using Vertex AI’s comprehensive set of tools, including LangChain on Vertex AI. Integrate Mistral AI models into your production-ready AI experiences with Genkit’s Vertex AI plugin. A hedged function-calling sketch, the building block for such agents, follows this list.
Build with enterprise-grade security and compliance: Leverage Google Cloud’s built-in security, privacy, and compliance measures. Enterprise controls, such as Vertex AI Model Garden’s new organization policy, provide the right access controls to make sure only approved models can be accessed.
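To give a feel for the agentic workflows and function calling mentioned above, here is a hedged sketch that reuses the `url`, `credentials`, and `MODEL` variables from the earlier snippet. The tool schema follows Mistral’s chat-completion tools format; `get_order_status` is a purely illustrative, made-up function rather than part of any real API.

```python
# Hedged sketch of function calling against the same rawPredict endpoint as above.
# "get_order_status" is an invented example tool; replace it with your own functions.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_order_status",
            "description": "Look up the shipping status of a customer order.",
            "parameters": {
                "type": "object",
                "properties": {
                    "order_id": {"type": "string", "description": "Order identifier"}
                },
                "required": ["order_id"],
            },
        },
    }
]

payload = {
    "model": MODEL,
    "messages": [{"role": "user", "content": "Where is order A-1234?"}],
    "tools": tools,
    "tool_choice": "auto",  # let the model decide whether to call the tool
}

response = requests.post(
    url,
    headers={"Authorization": f"Bearer {credentials.token}"},
    json=payload,
    timeout=60,
)
response.raise_for_status()

# If the model chose to call the tool, the arguments arrive as a JSON string
# that your agent code parses, executes, and feeds back as a tool message.
message = response.json()["choices"][0]["message"]
for call in message.get("tool_calls", []):
    print(call["function"]["name"], call["function"]["arguments"])
```

An agent framework such as LangChain on Vertex AI wraps this request-parse-execute loop for you; the sketch just shows the underlying exchange the model participates in.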
Get started with Mistral AI models on Google Cloud
These additions continue Google Cloud’s commitment to open and flexible AI ecosystems that help you build solutions best suited to your needs. Our collaboration with Mistral AI is a testament to our open approach, within a unified, enterprise-ready environment. Vertex AI provides a curated collection of first-party, open-source, and third-party models, many of which, including the new Mistral AI models, can be delivered as a fully managed Model-as-a-Service (MaaS) offering, providing you with the simplicity of a single bill and enterprise-grade security on our fully managed infrastructure.
To start building with Mistral’s newest models, visit Model Garden and select the Mistral Large model tile. The models are also available on Google Cloud Marketplace under the Mistral Large listing.
You can check out our sample code and documentation to help you get started.
AI Summary and Description: Yes
**Summary:** The text describes the recent availability of new Mistral AI models on Google Cloud’s Vertex AI, marking an important development in AI capabilities and infrastructure support. The models are notable for their advanced reasoning, coding capabilities, and usability in diverse and complex applications, particularly due to their enterprise-grade security and flexible deployment options.
**Detailed Description:** The announcement highlights several key advancements and features related to Mistral AI’s models and their integration into Google Cloud’s platform:
– **Model Availability and Capabilities:**
– Introduction of Mistral AI’s Large-Instruct-2411 model, featuring 123 billion parameters.
– Enhanced reasoning, knowledge retention, and coding capabilities compared to its predecessor.
– New features such as improved handling of long context, function calling, and system prompts.
– **Use Cases:**
– Complex agentic workflows and precise instruction following.
– Large context applications that benefit from retrieval-augmented generation (RAG).
– Efficient code generation.
– **Deployment Options:**
– Access to Mistral AI models via Google Cloud’s Model-as-a-Service (MaaS) and through self-service options.
– Pay-as-you-go pricing and managed infrastructure designed specifically for AI workloads.
– **Support for Developers:**
– Users can choose from a range of models tailored for different needs, including low-latency and high-performance tasks.
– Tools for simple API calls and side-by-side model evaluations help in selecting the right model for specific applications.
– **Customization and Fine-Tuning:**
– Upcoming features will allow users to fine-tune models using their own data and domain-specific knowledge.
– Provision for creating intelligent agents leveraging the comprehensive tools available within Vertex AI.
– **Security and Compliance:**
– Built-in security, privacy, and compliance measures are emphasized, specifically tailored for enterprise use.
– Enterprise controls, such as organization policies that restrict access to approved models, reinforce the focus on security and compliance.
– **Collaborative Ecosystem:**
– Google Cloud’s commitment to fostering an open and flexible AI ecosystem is highlighted, enabling users to build tailored solutions.
This announcement not only showcases advances in AI modeling capabilities but also reinforces the importance of security, compliance, and ease of use in cloud environments for developers and enterprises alike. It underlines the growing relevance of infrastructure capable of supporting sophisticated AI applications while ensuring robust security controls are in place.