Tag: llm-pricing
-
Simon Willison’s Weblog: Qwen3-Coder: Agentic Coding in the World
Source URL: https://simonwillison.net/2025/Jul/22/qwen3-coder/ Source: Simon Willison’s Weblog Title: Qwen3-Coder: Agentic Coding in the World Feedly Summary: Qwen3-Coder: Agentic Coding in the World It turns out that as I was typing up my notes on Qwen3-235B-A22B-Instruct-2507 the Qwen team were unleashing something much bigger: Today, we’re announcing Qwen3-Coder, our most agentic code model to date. Qwen3-Coder…
-
Simon Willison’s Weblog: Gemini 2.5 Flash-Lite is now stable and generally available
Source URL: https://simonwillison.net/2025/Jul/22/gemini-25-flash-lite/#atom-everything Source: Simon Willison’s Weblog Title: Gemini 2.5 Flash-Lite is now stable and generally available Feedly Summary: Gemini 2.5 Flash-Lite is now stable and generally available The last remaining member of the Gemini 2.5 trio joins Pro and Flash in General Availability today. Gemini 2.5 Flash-Lite is the cheapest of the 2.5 family,…
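Since this digest's theme is llm-pricing, a back-of-envelope cost sketch for Flash-Lite may be useful. The $0.10/M-input and $0.40/M-output rates are assumptions based on Google's published GA pricing at launch, not figures from the truncated summary; check the current price list before relying on them.

```python
# Back-of-envelope cost for a Gemini 2.5 Flash-Lite call.
# Prices are assumed GA rates ($0.10/M input, $0.40/M output);
# verify against Google's current price list.
INPUT_PER_M = 0.10
OUTPUT_PER_M = 0.40

def cost_usd(input_tokens: int, output_tokens: int) -> float:
    return (input_tokens / 1_000_000 * INPUT_PER_M
            + output_tokens / 1_000_000 * OUTPUT_PER_M)

# A 50k-token prompt with a 2k-token response comes to about $0.0058.
print(f"${cost_usd(50_000, 2_000):.6f}")
```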
-
Simon Willison’s Weblog: Grok 4
Source URL: https://simonwillison.net/2025/Jul/10/grok-4/#atom-everything Source: Simon Willison’s Weblog Title: Grok 4 Feedly Summary: Grok 4 Released last night, Grok 4 is now available via both API and a paid subscription for end-users. Key characteristics: image and text input, text output. A 256,000-token context length (twice that of Grok 3). It’s a reasoning model where you can’t see…
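The summary notes API availability; a minimal sketch of calling it follows, assuming xAI exposes an OpenAI-compatible endpoint at https://api.x.ai/v1 and a grok-4 model ID (both assumptions worth confirming against xAI's documentation).

```python
# A minimal sketch of calling Grok 4 through the OpenAI Python SDK,
# assuming xAI's OpenAI-compatible endpoint and "grok-4" model ID.
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["XAI_API_KEY"],
    base_url="https://api.x.ai/v1",  # assumed xAI endpoint
)
response = client.chat.completions.create(
    model="grok-4",  # assumed model ID
    messages=[{"role": "user", "content": "Summarize your key capabilities."}],
)
print(response.choices[0].message.content)
```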
-
Simon Willison’s Weblog: Trying out the new Gemini 2.5 model family
Source URL: https://simonwillison.net/2025/Jun/17/gemini-2-5/ Source: Simon Willison’s Weblog Title: Trying out the new Gemini 2.5 model family Feedly Summary: After many months of previews, Gemini 2.5 Pro and Flash have reached general availability with new, memorable model IDs: gemini-2.5-pro and gemini-2.5-flash. They are joined by a new preview model with an unmemorable name: gemini-2.5-flash-lite-preview-06-17 is a…
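A quick sketch exercising the new memorable model IDs via Google's google-genai SDK; the SDK choice and environment-variable handling are my assumptions, not details from the post.

```python
# Calling the newly GA model IDs with the google-genai SDK
# (pip install google-genai); assumes GEMINI_API_KEY is set.
from google import genai

client = genai.Client()  # reads the API key from the environment
for model_id in ("gemini-2.5-pro", "gemini-2.5-flash"):
    response = client.models.generate_content(
        model=model_id,
        contents="Say hello in exactly five words.",
    )
    print(f"{model_id}: {response.text}")
```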
-
Simon Willison’s Weblog: o3-pro
Source URL: https://simonwillison.net/2025/Jun/10/o3-pro/ Source: Simon Willison’s Weblog Title: o3-pro Feedly Summary: o3-pro OpenAI released o3-pro today, which they describe as a “version of o3 with more compute for better responses”. It’s only available via the newer Responses API. I’ve added it to my llm-openai-plugin plugin which uses that new API, so you can try it…
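To try it via the plugin mentioned above, something like the following should work; the openai/o3-pro model ID is an assumption based on the plugin's naming convention (run `llm models` for the exact IDs).

```python
# Prompting o3-pro through llm + llm-openai-plugin
# (pip install llm llm-openai-plugin; needs an OpenAI key
# configured, e.g. via `llm keys set openai`).
import llm

model = llm.get_model("openai/o3-pro")  # assumed model ID
response = model.prompt("Generate an SVG of a pelican riding a bicycle")
print(response.text())
```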
-
Simon Willison’s Weblog: Updated Anthropic model comparison table
Source URL: https://simonwillison.net/2025/May/22/updated-anthropic-models/#atom-everything Source: Simon Willison’s Weblog Title: Updated Anthropic model comparison table Feedly Summary: Updated Anthropic model comparison table A few details in here about Claude 4 that I hadn’t spotted elsewhere: The training cut-off date for Claude Opus 4 and Claude Sonnet 4 is March 2025! That’s the most recent cut-off for any…
-
Simon Willison’s Weblog: Gemini 2.5: Our most intelligent models are getting even better
Source URL: https://simonwillison.net/2025/May/20/gemini-25/#atom-everything Source: Simon Willison’s Weblog Title: Gemini 2.5: Our most intelligent models are getting even better Feedly Summary: Gemini 2.5: Our most intelligent models are getting even better A bunch of new Gemini 2.5 announcements at Google I/O today. 2.5 Flash and 2.5 Pro are both getting audio output (previously previewed in Gemini…
-
Simon Willison’s Weblog: Building software on top of Large Language Models
Source URL: https://simonwillison.net/2025/May/15/building-on-llms/#atom-everything Source: Simon Willison’s Weblog Title: Building software on top of Large Language Models Feedly Summary: I presented a three hour workshop at PyCon US yesterday titled Building software on top of Large Language Models. The goal of the workshop was to give participants everything they needed to get started writing code that…
-
Simon Willison’s Weblog: Gemini 2.5 Models now support implicit caching
Source URL: https://simonwillison.net/2025/May/9/gemini-implicit-caching/#atom-everything Source: Simon Willison’s Weblog Title: Gemini 2.5 Models now support implicit caching Feedly Summary: Gemini 2.5 Models now support implicit caching I just spotted a cacheTokensDetails key in the token usage JSON while running a long chain of prompts against Gemini 2.5 Flash – despite not configuring caching myself: {"cachedContentTokenCount": 200658, "promptTokensDetails":…
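Cache hits matter for pricing because cached prompt tokens are billed at a reduced rate. A sketch of the arithmetic, assuming a 75% discount on cached tokens and a hypothetical $0.30/M input price (both are assumptions; the summary above only shows the usage JSON):

```python
# Pricing a prompt when Gemini reports implicit cache hits.
# Cached tokens are assumed to be billed at a 75% discount;
# check Google's price list for the real rate.
def prompt_cost_usd(usage: dict, price_per_m: float,
                    cached_discount: float = 0.75) -> float:
    prompt = usage.get("promptTokenCount", 0)
    cached = usage.get("cachedContentTokenCount", 0)
    full_rate = prompt - cached  # tokens billed at the full input price
    billable = full_rate + cached * (1 - cached_discount)
    return billable / 1_000_000 * price_per_m

usage = {"promptTokenCount": 250_000, "cachedContentTokenCount": 200_658}
# ~$0.0299 with the cache hit, versus $0.0750 at the full rate.
print(f"${prompt_cost_usd(usage, price_per_m=0.30):.4f}")
```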
-
Simon Willison’s Weblog: llm-gemini 0.19.1
Source URL: https://simonwillison.net/2025/May/8/llm-gemini-0191/#atom-everything Source: Simon Willison’s Weblog Title: llm-gemini 0.19.1 Feedly Summary: llm-gemini 0.19.1 Bugfix release for my llm-gemini plugin, which was recording the number of output tokens (needed to calculate the price of a response) incorrectly for the Gemini “thinking” models. Those models turn out to return candidatesTokenCount and thoughtsTokenCount as two separate values…
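The bug and its fix reduce to one line of arithmetic: billable output for the thinking models is the sum of the two fields the API returns, not candidatesTokenCount alone. A minimal sketch:

```python
# Output tokens for a Gemini "thinking" model are the sum of the
# visible response tokens and the reasoning tokens.
def output_tokens(usage: dict) -> int:
    return (usage.get("candidatesTokenCount", 0)
            + usage.get("thoughtsTokenCount", 0))

usage = {"candidatesTokenCount": 1_024, "thoughtsTokenCount": 3_072}
print(output_tokens(usage))  # 4096 tokens to price at the output rate
```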