Simon Willison’s Weblog: Mistral Small 3.1 on Ollama

Source URL: https://simonwillison.net/2025/Apr/8/mistral-small-31-on-ollama/#atom-everything
Source: Simon Willison’s Weblog
Title: Mistral Small 3.1 on Ollama

Feedly Summary: Mistral Small 3.1 on Ollama
Mistral Small 3.1 (previously) is now available through Ollama, providing an easy way to run this multi-modal (vision) model on a Mac (and other platforms, though I haven’t tried them myself yet).
I had to upgrade Ollama to the most recent version to get it to work – prior to that I got an `Error: unable to load model` message. Upgrades can be accessed through the Ollama macOS system tray icon.
I fetched the 15GB model by running:
ollama pull mistral-small3.1

Then used llm-ollama to run prompts through it, including one to describe this image:
llm install llm-ollama
llm -m mistral-small3.1 'describe this image' -a https://static.simonwillison.net/static/2025/Mpaboundrycdfw-1.png

Here’s the output. It’s good, though not quite as impressive as the description I got from the slightly larger Qwen2.5-VL-32B.
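The CLI invocation above can also be driven from a script. A minimal sketch using only the standard library – the `build_llm_command` helper is hypothetical (not part of llm itself), and actually running the command assumes llm, llm-ollama, and a local Ollama with the model pulled:

```python
import subprocess

def build_llm_command(model, prompt, attachment_url=None):
    """Assemble argv for the llm CLI: -m selects the model, -a attaches an image URL."""
    cmd = ["llm", "-m", model, prompt]
    if attachment_url:
        cmd += ["-a", attachment_url]
    return cmd

cmd = build_llm_command(
    "mistral-small3.1",
    "describe this image",
    "https://static.simonwillison.net/static/2025/Mpaboundrycdfw-1.png",
)
# Uncomment to run for real (requires llm, llm-ollama, and Ollama serving the model):
# subprocess.run(cmd, check=True)
```

Keeping the argv as a list (rather than a single shell string) avoids quoting issues with prompts that contain spaces or special characters.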
Tags: vision-llms, mistral, llm, ollama, generative-ai, ai, llms

AI Summary and Description: Yes

Summary: The text discusses the installation and usage of the Mistral Small 3.1 model on the Ollama platform, highlighting its multi-modal capabilities and some challenges users might face. This information can be particularly relevant for professionals involved in AI, especially in the context of working with large language models (LLMs) and generative AI.

Detailed Description: The provided text gives an overview of the Mistral Small 3.1 model, which is a multi-modal AI designed to process both visual and textual inputs. The model is accessible through the Ollama platform, offering practitioners a convenient way to experiment with AI functionalities on personal computers, particularly Macs. It outlines the installation process, highlights specific user experiences, and provides insights into performance comparisons.

- **Model Overview**:
  - Mistral Small 3.1 is a multi-modal model capable of handling vision tasks in addition to language processing.

- **Platform**:
  - The model is integrated with Ollama, a platform that simplifies the deployment and execution of AI models on desktop systems.

- **Installation Process**:
  - Users must upgrade to the latest version of Ollama; older versions fail with an `Error: unable to load model` message.
  - The model can be fetched by running a specific command (`ollama pull mistral-small3.1`).

- **Execution and Performance**:
  - Users can prompt the model from the command line, passing a link to an image as an attachment.
  - The output quality is described as good but less impressive than that of the slightly larger Qwen2.5-VL-32B model.

- **Implications for AI Professionals**:
  - The text provides insights into practical deployment through Ollama, useful for developers and data scientists exploring LLMs and generative AI applications.
  - By sharing a direct comparison between two models, it highlights the evolving landscape in generative AI and emphasizes the importance of evaluating performance against user requirements.

Overall, this information can help security and compliance professionals in AI as they assess the implications of integrating multi-modal AI solutions into their existing workflows, ensuring they understand both the capabilities and potential limitations of such models.