Simon Willison’s Weblog: Usage charts for my LLM tool against OpenRouter

Source URL: https://simonwillison.net/2025/Aug/4/llm-openrouter-usage/#atom-everything
Source: Simon Willison’s Weblog
Title: Usage charts for my LLM tool against OpenRouter

Feedly Summary: Usage charts for my LLM tool against OpenRouter
OpenRouter proxies requests to a large number of different LLMs and provides high level statistics of which models are the most popular among their users.
Tools that call OpenRouter can include HTTP-Referer and X-Title headers to credit that tool with the token usage. My llm-openrouter plugin does that here.
… which means this page displays aggregate stats across users of that plugin! Looks like someone has been running a lot of traffic through Qwen 3 14B recently.
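The attribution mechanism described above can be sketched in a few lines. This is a minimal illustration, not the plugin's actual code: it builds (but does not send) a request to OpenRouter's OpenAI-compatible chat completions endpoint, attaching the `HTTP-Referer` and `X-Title` headers that OpenRouter uses to credit the calling tool. The referer URL, tool title, and model slug below are hypothetical placeholders.

```python
import json
import urllib.request

# OpenRouter's OpenAI-compatible chat completions endpoint.
OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"


def build_request(api_key: str, prompt: str) -> urllib.request.Request:
    """Build an OpenRouter request that credits the calling tool."""
    payload = {
        "model": "qwen/qwen3-14b",  # illustrative model slug
        "messages": [{"role": "user", "content": prompt}],
    }
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
        # Attribution headers: OpenRouter uses these to credit the
        # calling tool in its public usage statistics.
        "HTTP-Referer": "https://example.com/my-tool",  # hypothetical URL
        "X-Title": "my-tool",  # hypothetical tool name
    }
    return urllib.request.Request(
        OPENROUTER_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers=headers,
        method="POST",
    )


req = build_request("sk-or-...", "Hello")
```

Because the headers travel with every request, no server-side registration is needed: any tool that sets them consistently accumulates its own usage chart on OpenRouter.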

Tags: ai, generative-ai, llms, llm, openrouter

AI Summary and Description: Yes

Summary: The text discusses usage charts for the author's LLM command-line tool as reported by OpenRouter, a proxy service that aggregates and displays statistics on the usage of many language models. It highlights how tools calling OpenRouter can be credited for token usage via request headers, and how those credits surface as public aggregate statistics per tool.

Detailed Description: The content primarily relates to the categories of “LLM Security” and “AI.” It offers insights into the monitoring and analysis of usage data for various language models through the OpenRouter platform. Here are the major points highlighted in the text:

– **OpenRouter Functionality**: Acts as a proxy for requests directed to a large number of large language models (LLMs), showcasing which models are most favored by users.
– **User Engagement**: The usage charts indicate the level of activity for the models, hinting at trends within the AI landscape, particularly the popularity of specific models like Qwen 3 14B.
– **Token Usage Tracking**: Tools calling OpenRouter can include specific headers (HTTP-Referer and X-Title) so that token usage is credited to the calling application, creating an ecosystem of attribution and performance insight.
– **Aggregate Statistics**: The emphasis on aggregate statistics allows developers and researchers to gauge the popularity and usage of different LLMs effectively, which is crucial for both performance assessment and potential security considerations.

Practical Implications:
– For **security professionals**, understanding how token usage is attributed can surface potential vulnerabilities in APIs that interact with LLMs and underscores the importance of handling attribution data securely.
– For **developers**, knowing the trends in LLM usage can guide decisions on which models to implement based on user preference and performance metrics.
– The integration of headers for tracking purposes raises considerations about **privacy** and the need for responsible data collection practices.

Overall, the information presented serves as a valuable resource for professionals managing LLM tools, highlighting usage patterns and the implications of those patterns in the realms of security and compliance.