Source URL: https://simonwillison.net/2025/Mar/10/llm-openrouter-04/
Source: Simon Willison’s Weblog
Title: llm-openrouter 0.4
I found out this morning that OpenRouter include support for a number of (rate-limited) free API models.
I occasionally run workshops on top of LLMs (like this one) and being able to provide students with a quick way to obtain an API key against models where they don’t have to set up billing is really valuable to me!
This inspired me to upgrade my existing llm-openrouter plugin, and in doing so I closed out a bunch of open feature requests.
Consider this post the annotated release notes:
LLM schema support for OpenRouter models that support structured output. #23
I’m trying to get support for LLM’s new schema feature into as many plugins as possible.
OpenRouter’s OpenAI-compatible API includes support for the response_format structured content option, but with an important caveat: it only works for some models, and if you try to use it on others it is silently ignored.
I filed an issue with OpenRouter requesting they include schema support in their machine-readable model index. For the moment LLM will let you specify schemas for unsupported models and will ignore them entirely, which isn’t ideal.
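For models that do support structured output, schemas work the same way as in other LLM plugins. A hypothetical invocation (the model ID is illustrative, and the schema uses LLM's concise schema syntax):

```shell
# Hypothetical example: pass a concise LLM schema to an
# OpenRouter-hosted model that supports structured output.
llm -m openrouter/openai/gpt-4o-mini \
  --schema 'name, bio' \
  'Invent a cool dog'
```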
llm openrouter key command displays information about your current API key. #24
Useful for debugging and checking the details of your key’s rate limit.
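The command takes no arguments and reports on whichever key is currently configured:

```shell
# Inspect the OpenRouter API key llm is currently using,
# including the rate limit details returned by the API.
llm openrouter key
```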
llm -m … -o online 1 enables web search grounding against any model, powered by Exa. #25
OpenRouter apparently make this feature available to every one of their supported models! They’re using new-to-me Exa to power this feature, an AI-focused search engine startup who appear to have built their own index with their own crawlers (according to their FAQ). This feature is currently priced by OpenRouter at $4 per 1000 results, and since 5 results are returned for every prompt that’s 2 cents per prompt.
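Grounding is switched on per-prompt with the `-o online 1` option (the model ID below is illustrative):

```shell
# Enable Exa-powered web search grounding for a single prompt.
# At $4 per 1000 results and 5 results per prompt, this costs
# roughly 2 cents per prompt on top of the model's own pricing.
llm -m openrouter/meta-llama/llama-3.1-70b-instruct \
  -o online 1 \
  'What happened in AI news this week?'
```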
llm openrouter models command for listing details of the OpenRouter models, including a --json option to get JSON output and a --free option to filter for just the free models. #26
This offers a neat way to list the available models. There are examples of the output in the comments on the issue.
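The options compose, so you can narrow the listing down and switch formats in one command:

```shell
# List every OpenRouter model in human-readable form
llm openrouter models

# Just the free models, as JSON
llm openrouter models --free --json
```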
New option to specify custom provider routing: -o provider '{JSON here}'. #17
Part of OpenRouter’s USP is that it can route prompts to different providers depending on factors like latency, cost or as a fallback if your first choice is unavailable – great if you are using open weight models like Llama, which are hosted by competing companies.
The options they provide for routing are very thorough – I had initially hoped to provide a set of CLI options that covered all of these bases, but I decided instead to reuse their JSON format and forward those options directly on to the model.
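Since the JSON is forwarded verbatim, anything OpenRouter's provider routing documentation accepts should work. A sketch (the model ID and provider names are illustrative; the `order` and `allow_fallbacks` keys come from OpenRouter's routing docs):

```shell
# Pin the prompt to a preferred ordering of providers, with
# fallbacks enabled if the first choice is unavailable.
llm -m openrouter/meta-llama/llama-3.1-70b-instruct \
  -o provider '{"order": ["Together", "DeepInfra"], "allow_fallbacks": true}' \
  'Say hello'
```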
Tags: llm, projects, plugins, annotated-release-notes, generative-ai, ai, llms