Simon Willison’s Weblog: llm-ollama 0.9.0

Source URL: https://simonwillison.net/2025/Mar/4/llm-ollama-090/
Source: Simon Willison’s Weblog
Title: llm-ollama 0.9.0

Feedly Summary: llm-ollama 0.9.0
This release of the llm-ollama plugin adds support for schemas, thanks to a PR by Adam Compton.
Ollama provides very robust support for this pattern thanks to their structured outputs feature, which works across all of the models that they support by intercepting the logic that outputs the next token and restricting it to only tokens that would be valid in the context of the provided schema.
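
As a rough sketch of what this builds on: Ollama exposes structured outputs through a `format` field on its local REST API that accepts a JSON schema, and the llm-ollama plugin presumably maps llm's schemas onto that same parameter. The example below is a minimal, hypothetical illustration (it assumes an Ollama server running on the default port with the llama3.2 model already pulled):

```python
# Minimal sketch: calling Ollama's structured outputs directly over its
# local REST API (assumes Ollama on localhost:11434 and llama3.2 pulled).
import json
import requests

schema = {
    "type": "object",
    "properties": {
        "species": {"type": "string"},
        "description": {"type": "string"},
        "count": {"type": "integer"},
    },
    "required": ["species", "description", "count"],
}

response = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "llama3.2",
        "messages": [{"role": "user", "content": "Describe any pelicans you can see."}],
        # The JSON schema goes in the "format" field; Ollama constrains token
        # sampling so the reply is valid against it.
        "format": schema,
        "stream": False,
    },
    timeout=120,
)
print(json.loads(response.json()["message"]["content"]))
```
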
With Ollama and llm-ollama installed you can even run structured schemas against vision prompts for local models. Here’s one against Ollama’s llama3.2-vision:
llm -m llama3.2-vision:latest \
  'describe images' \
  --schema 'species,description,count int' \
  -a https://static.simonwillison.net/static/2025/two-pelicans.jpg

I got back this:
{
  "species": "Pelicans",
  "description": "The image features a striking brown pelican with its distinctive orange beak, characterized by its large size and impressive wingspan.",
  "count": 1
}

(Actually a bit disappointing, as there are two pelicans and their beaks are brown.)
Tags: llm, ollama, plugins, generative-ai, ai, llms, llama, vision-llms

AI Summary and Description: Yes

Summary: The release of the llm-ollama plugin adds support for structured output schemas, particularly in the context of generative AI and vision tasks. Professionals working with AI and cloud security can use this functionality to obtain predictable, schema-conformant output from locally run models.

Detailed Description: The text discusses the release of the llm-ollama 0.9.0 plugin, highlighting its integration of schema support. The significance of this development relates closely to various facets of AI and generative AI applications. Here are the major points:

- **Support for Schemas**: The plugin now accommodates the use of schemas, which simplifies defining and managing the expected outputs of AI models.
- **Structured Outputs**: It uses Ollama’s structured outputs feature, which constrains the model’s response to only those outputs that are valid per the defined schema. This is particularly beneficial for data integrity and consistency.
- **Compatibility with Vision Models**: The plugin can execute structured schemas against vision prompts, extending its applicability to multimodal scenarios that combine text and images (see the sketch after this list).
- **Example Provided**: An example illustrates the plugin’s functionality by asking a local vision model to describe an image of pelicans, demonstrating both the utility and the limitations of the schema application.
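
The snippet below is a rough illustration of the same pattern from Python, using llm’s own Python API with the llm-ollama plugin installed. The model name and image URL are taken from the example above; the exact keyword arguments follow llm’s documented prompt()/schema interface and should be checked against the current docs:

```python
# Rough sketch: a schema-constrained vision prompt via llm's Python API.
# Assumes `llm` and `llm-ollama` are installed and llama3.2-vision is pulled.
import json
import llm

model = llm.get_model("llama3.2-vision:latest")

response = model.prompt(
    "describe images",
    # JSON schema equivalent of the concise 'species,description,count int' syntax.
    schema={
        "type": "object",
        "properties": {
            "species": {"type": "string"},
            "description": {"type": "string"},
            "count": {"type": "integer"},
        },
        "required": ["species", "description", "count"],
    },
    attachments=[
        llm.Attachment(url="https://static.simonwillison.net/static/2025/two-pelicans.jpg")
    ],
)
print(json.loads(response.text()))
```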

### Practical Implications for Security and Compliance Professionals:
- **Data Integrity**: Schemas help ensure that generated outputs conform to predefined structures, which supports compliance with data handling requirements.
- **Improved Model Interactions**: Constraining output to a schema makes responses machine-parseable and reduces the risk of downstream systems breaking on free-form, unpredictable model responses.
- **Potential Applications**: Structured, validated outputs make AI integrations easier to audit and govern, which is relevant for organizations in regulated sectors such as healthcare and finance.

As organizations increasingly adopt AI models, enhancements like these are crucial for aligning with security standards and regulatory compliance.