Tag: context length
-
Simon Willison’s Weblog: Claude Sonnet 4 now supports 1M tokens of context
Source URL: https://simonwillison.net/2025/Aug/12/claude-sonnet-4-1m/
Feedly Summary: Gemini and OpenAI both have million-token models, so it’s good to see Anthropic catching up. This is 5x the previous 200,000-token context limit of the…
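The larger window is exposed through the API behind a beta flag. A minimal sketch using the anthropic Python SDK, assuming the context-1m-2025-08-07 beta header and the claude-sonnet-4-20250514 model ID from Anthropic’s announcement:

    # Sketch: requesting Claude Sonnet 4 with the 1M-token context window.
    # The beta flag and model ID are taken from Anthropic's announcement;
    # verify them against the current docs before relying on this.
    import anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

    response = client.beta.messages.create(
        model="claude-sonnet-4-20250514",
        betas=["context-1m-2025-08-07"],
        max_tokens=1024,
        messages=[
            {"role": "user", "content": "Summarize this very long document: ..."},
        ],
    )
    print(response.content[0].text)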
-
Simon Willison’s Weblog: Qwen3-4B Instruct and Thinking
Source URL: https://simonwillison.net/2025/Aug/6/qwen3-4b-instruct-and-thinking/
Feedly Summary: Yet another interesting model from Qwen—these are tiny compared to their other recent releases (just 4B parameters, 7.5GB on Hugging Face and even smaller when quantized) but with a 262,144-token context length, which Qwen suggest is essential…
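At 4B parameters the model is practical to try locally. A rough sketch loading the instruct variant with Hugging Face transformers (the Qwen/Qwen3-4B-Instruct-2507 repo name follows the release naming; the prompt and generation settings are illustrative):

    # Sketch: running Qwen3-4B-Instruct locally via transformers.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "Qwen/Qwen3-4B-Instruct-2507"  # repo name assumed from the release
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id, torch_dtype="auto", device_map="auto"
    )

    messages = [{"role": "user", "content": "Why do long-context models matter?"}]
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)

    outputs = model.generate(inputs, max_new_tokens=256)
    print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))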
-
Simon Willison’s Weblog: GLM-4.5: Reasoning, Coding, and Agentic Abilities
Source URL: https://simonwillison.net/2025/Jul/28/glm-45/#atom-everything
Feedly Summary: Another day, another significant new open weight model release from a Chinese frontier AI lab. This time it’s Z.ai – who rebranded (at least in English) from Zhipu AI a few months ago.…
-
Simon Willison’s Weblog: Qwen3-235B-A22B-Thinking-2507
Source URL: https://simonwillison.net/2025/Jul/25/qwen3-235b-a22b-thinking-2507/#atom-everything
Feedly Summary: The third Qwen model release this week, following Qwen3-235B-A22B-Instruct-2507 on Monday 21st and Qwen3-Coder-480B-A35B-Instruct on Tuesday 22nd. Those two were both non-reasoning models – a change from the previous models in the Qwen 3 family, which combined reasoning and non-reasoning in the same model,…
-
Simon Willison’s Weblog: TimeScope: How Long Can Your Video Large Multimodal Model Go?
Source URL: https://simonwillison.net/2025/Jul/23/timescope/#atom-everything
Feedly Summary: New open source benchmark for evaluating vision LLMs on how well they handle long videos: TimeScope probes the limits of long-video capabilities by inserting several…
-
Simon Willison’s Weblog: Qwen3-Coder: Agentic Coding in the World
Source URL: https://simonwillison.net/2025/Jul/22/qwen3-coder/
Feedly Summary: It turns out that as I was typing up my notes on Qwen3-235B-A22B-Instruct-2507 the Qwen team were unleashing something much bigger: Today, we’re announcing Qwen3-Coder, our most agentic code model to date. Qwen3-Coder…
-
Simon Willison’s Weblog: Grok 4
Source URL: https://simonwillison.net/2025/Jul/10/grok-4/#atom-everything
Feedly Summary: Released last night, Grok 4 is now available via both API and a paid subscription for end-users. Key characteristics: image and text input, text output, 256,000-token context length (twice that of Grok 3). It’s a reasoning model where you can’t see…
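xAI’s API is OpenAI-compatible, so a call can be sketched with the openai Python SDK pointed at their endpoint (the base URL and the grok-4 model name are assumptions based on xAI’s documentation):

    # Sketch: calling Grok 4 through xAI's OpenAI-compatible endpoint.
    # Base URL and model name are assumptions; check xAI's docs.
    import os
    from openai import OpenAI

    client = OpenAI(
        api_key=os.environ["XAI_API_KEY"],
        base_url="https://api.x.ai/v1",
    )

    response = client.chat.completions.create(
        model="grok-4",
        messages=[{"role": "user", "content": "In one line: what is a reasoning model?"}],
    )
    print(response.choices[0].message.content)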
-
Cloud Blog: How to use Gemini 2.5 to fine-tune video outputs on Vertex AI
Source URL: https://cloud.google.com/blog/products/ai-machine-learning/how-to-fine-tune-video-outputs-using-vertex-ai/
Feedly Summary: Recently, we announced Gemini 2.5 is generally available on Vertex AI. As part of this update, tuning capabilities have extended beyond text outputs – now, you can tune image, audio, and video outputs on…
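Tuning jobs on Vertex AI are launched the same way regardless of output modality. A minimal sketch with the vertexai SDK, assuming a JSONL training set in Cloud Storage; the project ID, bucket path, and the exact dataset schema for video examples are placeholders to be filled in from the post:

    # Sketch: launching a supervised tuning job on Vertex AI.
    # Project, region, model ID, and dataset path are placeholder assumptions.
    import vertexai
    from vertexai.tuning import sft

    vertexai.init(project="my-gcp-project", location="us-central1")

    tuning_job = sft.train(
        source_model="gemini-2.5-flash",
        train_dataset="gs://my-bucket/video-tuning-examples.jsonl",
    )
    print(tuning_job.resource_name)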
-
Simon Willison’s Weblog: model.yaml
Source URL: https://simonwillison.net/2025/Jun/21/model-yaml/#atom-everything
Feedly Summary: From their GitHub repo it looks like this effort quietly launched a couple of months ago, driven by the LM Studio team. Their goal is to specify an “open standard for defining cross-platform, composable AI models”. A model can be defined using a…
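To make the idea concrete, here is an illustrative model.yaml definition; the field names are reconstructed from the spec’s published examples and may not match the current schema exactly:

    # Illustrative model.yaml sketch; field names may differ from the live spec.
    model: qwen/qwen3-4b
    base:
      - key: lmstudio-community/qwen3-4b-gguf
        sources:
          - type: huggingface
            user: lmstudio-community
            repo: Qwen3-4B-GGUF
    metadataOverrides:
      domain: llm
      architectures:
        - qwen3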
-
Simon Willison’s Weblog: Quoting Workaccount2 on Hacker News
Source URL: https://simonwillison.net/2025/Jun/18/context-rot/#atom-everything
Feedly Summary: They poison their own context. Maybe you can call it context rot, where, as context grows, and especially if it grows with lots of distractions and dead ends, the output quality falls off rapidly. Even with good context the rot…