Tag: inference scaling

  • Simon Willison’s Weblog: QwQ-32B: Embracing the Power of Reinforcement Learning

    Source URL: https://simonwillison.net/2025/Mar/5/qwq-32b/#atom-everything
    Source: Simon Willison’s Weblog
    Feedly Summary: New Apache 2 licensed reasoning model from Qwen: We are excited to introduce QwQ-32B, a model with 32 billion parameters that achieves performance comparable to DeepSeek-R1, which boasts 671 billion parameters…

  • Simon Willison’s Weblog: Claude 3.7 Sonnet and Claude Code

    Source URL: https://simonwillison.net/2025/Feb/24/claude-37-sonnet-and-claude-code/#atom-everything
    Source: Simon Willison’s Weblog
    Feedly Summary: Anthropic released Claude 3.7 Sonnet today – skipping the name “Claude 3.6” because the Anthropic user community had already started using that as the unofficial name for their October update to 3.5 Sonnet.…

  • Hacker News: Explainer: What’s R1 and Everything Else?

    Source URL: https://timkellogg.me/blog/2025/01/25/r1
    Source: Hacker News
    Feedly Summary: The text provides an informative overview of recent developments in AI, particularly focusing on Reasoning Models and their significance in the ongoing evolution of AI technologies. It discusses the releases of models such…

  • Hacker News: SOTA on swebench-verified: relearning the bitter lesson

    Source URL: https://aide.dev/blog/sota-bitter-lesson
    Source: Hacker News
    Feedly Summary: The text discusses advancements in AI, particularly around leveraging large language models (LLMs) for software engineering challenges through novel approaches such as test-time inference scaling. It emphasizes the key insight that scaling…
    (A minimal sketch of the test-time inference scaling idea follows this list.)

  • Simon Willison’s Weblog: December in LLMs has been a lot

    Source URL: https://simonwillison.net/2024/Dec/20/december-in-llms-has-been-a-lot/#atom-everything
    Source: Simon Willison’s Weblog
    Feedly Summary: I had big plans for December: for one thing, I was hoping to get to an actual RC of Datasette 1.0, in preparation for a full release in January. Instead, I’ve found myself distracted by a constant barrage…

  • Simon Willison’s Weblog: Gemini 2.0 Flash "Thinking mode"

    Source URL: https://simonwillison.net/2024/Dec/19/gemini-thinking-mode/#atom-everything
    Source: Simon Willison’s Weblog
    Feedly Summary: Those new model releases just keep on flowing. Today it’s Google’s snappily named gemini-2.0-flash-thinking-exp, their first entrant into the o1-style inference scaling class of models. I posted about a great essay about the significance of these just this morning. From…

  • Simon Willison’s Weblog: Is AI progress slowing down?

    Source URL: https://simonwillison.net/2024/Dec/19/is-ai-progress-slowing-down/#atom-everything
    Source: Simon Willison’s Weblog
    Feedly Summary: This piece by Arvind Narayanan and Sayash Kapoor is the single most insightful essay about AI and LLMs I’ve seen in a long time. It’s long and worth reading every inch of it – it…
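
  Several of the entries above concern test-time (inference-time) scaling: spending more compute per query, rather than on a larger model, to get better answers. Below is a minimal best-of-N sampling sketch in Python. The sample_candidate and score_candidate stubs are hypothetical stand-ins for a real model call and a real verifier (e.g., a test suite or reward model); none of these names come from the linked posts.

    import random

    # Hypothetical stand-ins: a real system would call an LLM API here
    # and run an actual verifier (tests, a reward model, etc.).
    def sample_candidate(prompt: str, temperature: float) -> str:
        """Pretend to sample one candidate answer from a model."""
        return f"candidate(seed={random.randint(0, 9999)}, T={temperature})"

    def score_candidate(prompt: str, candidate: str) -> float:
        """Pretend to score a candidate; higher is better."""
        return random.random()

    def best_of_n(prompt: str, n: int = 8, temperature: float = 0.8) -> str:
        """Best-of-N: draw N independent candidates at inference time
        and keep the one the verifier scores highest."""
        candidates = [sample_candidate(prompt, temperature) for _ in range(n)]
        return max(candidates, key=lambda c: score_candidate(prompt, c))

    if __name__ == "__main__":
        # Doubling n roughly doubles inference cost in exchange for a
        # better chance that at least one candidate passes the verifier.
        print(best_of_n("Fix the failing test in utils.py", n=8))

  The trade-off this sketch illustrates: quality improves with n only as far as the verifier can reliably tell good candidates from bad ones, which is why the posts above pair extra sampling with strong checkers such as test suites.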