Tag: local models
-
Simon Willison’s Weblog: Mistral-Small 3.2
Source URL: https://simonwillison.net/2025/Jun/20/mistral-small-32/ Source: Simon Willison’s Weblog Title: Mistral-Small 3.2 Feedly Summary: Mistral-Small 3.2 Released on Hugging Face a couple of hours ago; so far there aren’t any quantizations to run it on a Mac, but I’m sure those will emerge pretty quickly. This is a minor bump to Mistral Small 3.1, one of my…
-
Simon Willison’s Weblog: The last six months in LLMs, illustrated by pelicans on bicycles
Source URL: https://simonwillison.net/2025/Jun/6/six-months-in-llms/#atom-everything Source: Simon Willison’s Weblog Title: The last six months in LLMs, illustrated by pelicans on bicycles Feedly Summary: I presented an invited keynote at the AI Engineer World’s Fair in San Francisco this week. This is my third time speaking at the event – here are my talks from October 2023 and…
-
Simon Willison’s Weblog: Run Your Own AI
Source URL: https://simonwillison.net/2025/Jun/3/run-your-own-ai/ Source: Simon Willison’s Weblog Title: Run Your Own AI Feedly Summary: Run Your Own AI Anthony Lewis published this neat, concise tutorial on using my LLM tool to run local models on your own machine, using llm-mlx. An under-appreciated way to contribute to open source projects is to publish unofficial guides like…
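The workflow the tutorial covers boils down to installing the llm-mlx plugin, downloading a model, and prompting it. A minimal sketch of the Python side, assuming the plugin is installed (llm install llm-mlx, Apple Silicon only) and that the mlx-community Llama 3.2 model used in Simon’s llm-mlx posts has been fetched:

    import llm

    # Assumes: llm install llm-mlx, then
    # llm mlx download-model mlx-community/Llama-3.2-3B-Instruct-4bit
    model = llm.get_model("mlx-community/Llama-3.2-3B-Instruct-4bit")
    response = model.prompt("Three short facts about pelicans")
    print(response.text())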
-
Simon Willison’s Weblog: llm-tools-exa
Source URL: https://simonwillison.net/2025/May/29/llm-tools-exa/ Source: Simon Willison’s Weblog Title: llm-tools-exa Feedly Summary: llm-tools-exa When I shipped LLM 0.26 yesterday one of the things I was most excited about was seeing what new tool plugins people would build for it. Dan Turkel’s llm-tools-exa is one of the first. It adds web search to LLM using Exa (previously),…
-
Simon Willison’s Weblog: llm-llama-server 0.2
Source URL: https://simonwillison.net/2025/May/28/llama-server-tools/ Source: Simon Willison’s Weblog Title: llm-llama-server 0.2 Feedly Summary: llm-llama-server 0.2 Here’s a second option for using LLM’s new tool support against local models (the first was via llm-ollama). It turns out the llama.cpp ecosystem has pretty robust OpenAI-compatible tool support already, so my llm-llama-server plugin only needed a quick upgrade to…
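A rough sketch of what that looks like from Python, assuming a llama-server instance is already running locally and that llm-llama-server 0.2 registers a tool-capable model ID named llama-server-tools, as in the post; the lookup_time helper is a hypothetical example tool:

    import datetime
    import llm

    def lookup_time() -> str:
        """Return the current local time as an ISO 8601 string."""
        return datetime.datetime.now().isoformat()

    # llama-server-tools proxies to llama.cpp's OpenAI-compatible server,
    # which handles the tool-call protocol on the local model's behalf.
    model = llm.get_model("llama-server-tools")
    response = model.chain("What time is it right now?", tools=[lookup_time])
    print(response.text())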
-
Simon Willison’s Weblog: Large Language Models can run tools in your terminal with LLM 0.26
Source URL: https://simonwillison.net/2025/May/27/llm-tools/ Source: Simon Willison’s Weblog Title: Large Language Models can run tools in your terminal with LLM 0.26 Feedly Summary: LLM 0.26 is out with the biggest new feature since I started the project: support for tools. You can now use the LLM CLI tool – and Python library – to grant LLMs…
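The core of the new Python API is passing plain functions as tools; LLM derives the tool schema from their signatures and docstrings. A minimal sketch, assuming a configured API key (the model ID is just an example, any tool-capable model works):

    import llm

    def multiply(a: int, b: int) -> int:
        """Multiply two integers and return the result."""
        return a * b

    model = llm.get_model("gpt-4.1-mini")  # example; any tool-capable model
    # chain() runs the request / tool-call / response loop until the model
    # produces a final answer.
    response = model.chain("What is 1234 * 4346?", tools=[multiply])
    print(response.text())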
-
Simon Willison’s Weblog: Building software on top of Large Language Models
Source URL: https://simonwillison.net/2025/May/15/building-on-llms/#atom-everything Source: Simon Willison’s Weblog Title: Building software on top of Large Language Models Feedly Summary: I presented a three hour workshop at PyCon US yesterday titled Building software on top of Large Language Models. The goal of the workshop was to give participants everything they needed to get started writing code that…
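The workshop builds on Simon’s LLM library, where the getting-started step is a few lines like the following (a sketch, with gpt-4o-mini standing in for whichever model a participant has configured):

    import llm

    model = llm.get_model("gpt-4o-mini")  # any installed model ID works
    response = model.prompt(
        "Suggest one weekend project for learning to build on LLMs",
        system="Answer in a single sentence.",
    )
    print(response.text())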
-
Docker: Run Gemma 3 with Docker Model Runner: Fully Local GenAI Developer Experience
Source URL: https://www.docker.com/blog/run-gemma-3-locally-with-docker-model-runner/ Source: Docker Title: Run Gemma 3 with Docker Model Runner: Fully Local GenAI Developer Experience Feedly Summary: Explore how to run Gemma 3 models locally using Docker Model Runner, alongside a Comment Processing System as a practical case study.
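Docker Model Runner exposes an OpenAI-compatible endpoint, so a comment-processing loop like the article’s case study can reuse a standard client. A sketch, assuming TCP host access is enabled on the default port 12434 and a Gemma 3 model has been pulled as ai/gemma3 (both details are assumptions from Docker’s docs, not confirmed by this excerpt):

    from openai import OpenAI

    # Assumes: docker model pull ai/gemma3, and Model Runner reachable
    # from the host with an OpenAI-compatible API on port 12434.
    client = OpenAI(
        base_url="http://localhost:12434/engines/v1",
        api_key="unused",  # the local runner ignores the key
    )
    completion = client.chat.completions.create(
        model="ai/gemma3",
        messages=[
            {
                "role": "user",
                "content": "Classify the sentiment of this comment: "
                "'Great product, but shipping was slow.'",
            }
        ],
    )
    print(completion.choices[0].message.content)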
-
Simon Willison’s Weblog: Long context support in LLM 0.24 using fragments and template plugins
Source URL: https://simonwillison.net/2025/Apr/7/long-context-llm/#atom-everything Source: Simon Willison’s Weblog Title: Long context support in LLM 0.24 using fragments and template plugins Feedly Summary: LLM 0.24 is now available with new features to help take advantage of the increasingly long input context supported by modern LLMs. (LLM is my command-line tool and Python library for interacting with LLMs,…
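On the CLI, fragments arrive via the -f option; the Python library accepts them as a keyword argument. A sketch, assuming LLM 0.24+ and passing fragment content as plain strings:

    import llm
    import pathlib

    model = llm.get_model("gpt-4o-mini")  # example model ID
    long_doc = pathlib.Path("large_module.py").read_text()
    # Fragments keep long context out of the prompt string itself and let
    # LLM's logging store repeated context once instead of per request.
    response = model.prompt("Explain what this module does", fragments=[long_doc])
    print(response.text())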
-
Simon Willison’s Weblog: smartfunc
Source URL: https://simonwillison.net/2025/Apr/3/smartfunc/ Source: Simon Willison’s Weblog Title: smartfunc Feedly Summary: smartfunc Vincent D. Warmerdam built this ingenious wrapper around my LLM Python library which lets you build LLM wrapper functions using a decorator and a docstring: from smartfunc import backend @backend("gpt-4o") def generate_summary(text: str): """Generate a summary of the following text: """ pass summary…
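The snippet in the summary is cut off; a completed sketch of the pattern, assuming smartfunc’s documented behavior of treating the docstring as a template over the function’s arguments ({{ text }} is the placeholder syntax shown in the project’s README):

    from smartfunc import backend

    @backend("gpt-4o")
    def generate_summary(text: str):
        """Generate a summary of the following text: {{ text }}"""
        pass

    # Calling the decorated function renders the docstring template with
    # the arguments and returns the model's response.
    summary = generate_summary("Long article text goes here...")
    print(summary)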