Source URL: https://simonwillison.net/2025/Jan/23/llm-020/#atom-everything
Source: Simon Willison’s Weblog
Title: LLM 0.20
Feedly Summary: LLM 0.20
New release of my LLM CLI tool and Python library. A bunch of accumulated fixes and features since the start of December, most notably:
Support for OpenAI’s o1 model – a significant upgrade from o1-preview given its 200,000 input and 100,000 output tokens (o1-preview was 128,000/32,768). #676
Support for the gpt-4o-audio-preview model, which can accept audio input: llm -m gpt-4o-audio-preview -a https://static.simonwillison.net/static/2024/pelican-joke-request.mp3 #677
A new llm -x/--extract option which extracts and returns the contents of the first fenced code block in the response. This is useful for prompts that generate code. #681
A new llm models -q 'search' option for searching available models – useful if you've installed a lot of plugins. Searches are case insensitive. #700
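As a quick illustration of the new extract option (a sketch only; the prompt text and the o1 model alias are assumptions for illustration, not taken from the release notes):

```sh
# Return only the contents of the first fenced code block in the response;
# -x is the short form of --extract.
llm -x 'Write a Python function that reverses a string'

# The option combines with model selection, e.g. the newly supported o1:
llm -m o1 -x 'Write a Python function that reverses a string'
```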
Tags: llm, projects, generative-ai, annotated-release-notes, ai, llms, openai, o1
AI Summary and Description: Yes
Summary: The text describes a new release of a CLI tool and Python library for working with Large Language Models (LLMs), highlighting upgrades in model support and usability features. This release is significant for AI professionals, particularly those involved in the development and deployment of generative AI applications.
Detailed Description:
The release notes for the LLM CLI tool and Python library detail several updates that enhance the tool's functionality, reflecting recent advancements in Large Language Models (LLMs). These updates improve model support and command-line usability for developers and data scientists working with AI technologies.
Key updates include:
– **Support for OpenAI’s o1 model**:
– o1 supports 200,000 input tokens and 100,000 output tokens, a substantial upgrade over its predecessor, o1-preview, which allowed 128,000 input and 32,768 output tokens. The expanded limits are particularly significant for applications requiring extensive context.
– **New gpt-4o-audio-preview model**:
– The tool now supports this model, which accepts audio input supplied as an attachment via the -a option, broadening its usability in voice recognition and audio processing applications. This feature reflects the ongoing shift toward multimodal AI models.
– **New extraction option**:
– The `llm -x/--extract` option has been introduced, which extracts and returns the contents of the first fenced code block in a response. This is particularly valuable for prompts that generate code, since it saves copying the code out of a longer response by hand.
– **Search capability for models**:
– The new `llm models -q 'search'` option searches the available models, which is useful for users who have installed many plugins. Searches are case insensitive, as in the sketch below.
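A minimal sketch of the model search option (the query strings are illustrative assumptions):

```sh
# List only the available models whose names match the query; handy when
# many plugins have registered models.
llm models -q audio

# Searches are case insensitive, so this returns the same results:
llm models -q AUDIO
```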
These updates illustrate the tool's responsiveness to community needs and the continual enhancement of its LLM support, keeping it relevant for professionals working on generative AI projects. Taken together, the new model support, audio input, code extraction, and model search should make for more efficient workflows across a range of AI applications.