Tag: bicycle

  • Simon Willison’s Weblog: Kimi-K2-Instruct-0905

    Source URL: https://simonwillison.net/2025/Sep/6/kimi-k2-instruct-0905/#atom-everything
    Summary: New not-quite-MIT licensed model from Chinese AI lab Moonshot AI, a follow-up to the highly regarded Kimi-K2 model they released in July. This one is an incremental improvement – I’ve seen it referred to online as “Kimi K-2.1”. It scores a little higher on a…

  • Simon Willison’s Weblog: DeepSeek 3.1

    Source URL: https://simonwillison.net/2025/Aug/22/deepseek-31/#atom-everything
    Summary: The latest model from DeepSeek, a 685B monster (like DeepSeek v3 before it), but this time it’s a hybrid reasoning model. DeepSeek claim: DeepSeek-V3.1-Think achieves comparable answer quality to DeepSeek-R1-0528, while responding more quickly. Drew Breunig points out that their benchmarks…

  • Simon Willison’s Weblog: Introducing Gemma 3 270M: The compact model for hyper-efficient AI

    Source URL: https://simonwillison.net/2025/Aug/14/gemma-3-270m/#atom-everything
    Summary: New from Google: Gemma 3 270M, a compact, 270-million parameter model designed from the ground up for task-specific fine-tuning, with strong instruction-following and text structuring…

  • Simon Willison’s Weblog: Qwen3-4B-Thinking: "This is art – pelicans don’t ride bikes!"

    Source URL: https://simonwillison.net/2025/Aug/10/qwen3-4b/#atom-everything
    Summary: I’ve fallen a few days behind keeping up with Qwen. They released two new 4B models last week: Qwen3-4B-Instruct-2507 and its thinking equivalent Qwen3-4B-Thinking-2507. These are relatively tiny models that punch way above their weight. I’ve…

  • Simon Willison’s Weblog: Previewing GPT-5 at OpenAI’s office

    Source URL: https://simonwillison.net/2025/Aug/7/previewing-gpt-5/#atom-everything
    Summary: A couple of weeks ago I was invited to OpenAI’s headquarters for a “preview event”, for which I had to sign both an NDA and a video release waiver. I suspected it might relate to either GPT-5 or the OpenAI…

  • Simon Willison’s Weblog: OpenAI’s new open weight (Apache 2) models are really good

    Source URL: https://simonwillison.net/2025/Aug/5/gpt-oss/
    Summary: The long promised OpenAI open weight models are here, and they are very impressive. They’re available under proper open source licenses – Apache 2.0 – and come in two sizes, 120B and 20B. OpenAI’s own…

  • Simon Willison’s Weblog: Claude Opus 4.1

    Source URL: https://simonwillison.net/2025/Aug/5/claude-opus-41/
    Summary: Surprise new model from Anthropic today – Claude Opus 4.1, which they describe as “a drop-in replacement for Opus 4”. My favorite thing about this model is the version number – treating this as a .1 version increment looks…

  • Simon Willison’s Weblog: XBai o4

    Source URL: https://simonwillison.net/2025/Aug/3/xbai-o4/#atom-everything
    Summary: Yet another open source (Apache 2.0) LLM from a Chinese AI lab. This model card claims: XBai o4 excels in complex reasoning capabilities and has now completely surpassed OpenAI-o3-mini in Medium mode. This is a 32.8 billion parameter model released by MetaStone…

  • Simon Willison’s Weblog: Trying out Qwen3 Coder Flash using LM Studio and Open WebUI and LLM

    Source URL: https://simonwillison.net/2025/Jul/31/qwen3-coder-flash/
    Summary: Qwen just released their sixth model(!) this July, called Qwen3-Coder-30B-A3B-Instruct – listed as Qwen3-Coder-Flash in their chat.qwen.ai interface. It’s 30.5B total parameters with 3.3B active at any one time. This means…