Simon Willison’s Weblog: Magistral — the first reasoning model by Mistral AI

Source URL: https://simonwillison.net/2025/Jun/10/magistral/
Source: Simon Willison’s Weblog
Title: Magistral — the first reasoning model by Mistral AI

Mistral’s first reasoning model is out today, in two sizes. There’s a 24B Apache 2 licensed open-weights model called Magistral Small (actually Magistral-Small-2506), and a larger API-only model called Magistral Medium.
Magistral Small is available as mistralai/Magistral-Small-2506 on Hugging Face. Mistral also released an official GGUF version, Magistral-Small-2506_gguf, which I ran successfully using Ollama like this:
ollama pull hf.co/mistralai/Magistral-Small-2506_gguf:Q8_0

That fetched a 25GB file. I ran prompts using a chat session with llm-ollama like this:
llm chat -m hf.co/mistralai/Magistral-Small-2506_gguf:Q8_0

Here’s what I got for “Generate an SVG of a pelican riding a bicycle” (transcript here):

One thing that caught my eye in the Magistral announcement:

Legal, finance, healthcare, and government professionals get traceable reasoning that meets compliance requirements. Every conclusion can be traced back through its logical steps, providing auditability for high-stakes environments with domain-specialized AI.

I guess this means the reasoning traces are fully visible and not redacted in any way – interesting to see Mistral trying to turn that into a feature that’s attractive to the business clients they are most interested in appealing to.
Also from that announcement:

Our early tests indicated that Magistral is an excellent creative companion. We highly recommend it for creative writing and storytelling, with the model capable of producing coherent or — if needed — delightfully eccentric copy.

I haven’t seen a reasoning model promoted for creative writing in this way before.
Tags: llm-release, mistral, llm, generative-ai, llm-reasoning, ai, llms, ollama, pelican-riding-a-bicycle

AI Summary and Description: Yes

Summary: Mistral AI has launched its first reasoning model, Magistral, in two sizes: an open-weights Magistral Small and an API-only Magistral Medium. The announcement emphasizes traceable, auditable reasoning for compliance-heavy professions and, unusually for a reasoning model, promotes creative writing as a strength.

Detailed Description: Mistral AI’s introduction of the Magistral reasoning model marks a significant development in the realm of Large Language Models (LLMs).

Key points include:

– **Model Variants**:
  – A smaller, open-weights model called **Magistral Small (Magistral-Small-2506)**, available under an Apache 2 license on Hugging Face.
  – A larger API-only model called **Magistral Medium**.

– **Functionality and Compliance**:
  – The model provides **traceable reasoning**, allowing professionals in the legal, finance, healthcare, and government sectors to trace conclusions back through their logical steps, enhancing **auditability** and compliance in high-stakes environments.
  – The emphasis on **full visibility** of reasoning traces, without redaction, positions Mistral as an appealing option for enterprises focused on accountability.

– **Creative Applications**:
  – Notably, Mistral promotes the model as an **excellent tool for creative writing and storytelling**, capable of producing both coherent and eccentric copy. This is a distinctive feature not commonly emphasized for reasoning models.

– **Technical Execution**:
  – The model was run successfully using Ollama, demonstrating practical local use. The example of generating SVG graphics from a prompt showcases the model’s versatility beyond plain text generation.

The Magistral reasoning model offers practical applications in compliance-heavy domains alongside possibilities for creative work, broadening the scope of AI in professional settings. It may be of particular interest to security and compliance professionals looking to integrate advanced AI capabilities into their workflows.