Tag: reasoning mode

  • Simon Willison’s Weblog: Qwen3-235B-A22B-Thinking-2507

    Source URL: https://simonwillison.net/2025/Jul/25/qwen3-235b-a22b-thinking-2507/#atom-everything
    Summary: The third Qwen model release this week, following Qwen3-235B-A22B-Instruct-2507 on Monday 21st and Qwen3-Coder-480B-A35B-Instruct on Tuesday 22nd. Those two were both non-reasoning models – a change from the previous models in the Qwen 3 family, which combined reasoning and non-reasoning in the same model,…

  • Simon Willison’s Weblog: Qwen/Qwen3-235B-A22B-Instruct-2507

    Source URL: https://simonwillison.net/2025/Jul/22/qwen3-235b-a22b-instruct-2507/#atom-everything
    Summary: Significant new model release from Qwen, published yesterday without much fanfare. This is a follow-up to their April release of the full Qwen 3 model family, which included a Qwen3-235B-A22B model which could handle both reasoning and non-reasoning prompts (via a /no_think toggle).…
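
    For context on what that toggle looks like in practice, here is a minimal sketch of driving the hybrid Qwen 3 checkpoints from Hugging Face transformers, assuming the enable_thinking chat-template flag and the /no_think soft switch described on the Qwen3 model cards. The model name and prompt are illustrative only; the newer 2507 Instruct and Thinking releases each ship fixed in a single mode, so this toggle applies to the earlier hybrid checkpoints.

    ```python
    # Hedged sketch: toggling Qwen 3's reasoning mode via the chat template.
    # Assumes the Qwen3 tokenizer's apply_chat_template accepts enable_thinking
    # (per the Qwen3 model card); the model name below is illustrative and the
    # full 235B model needs far more hardware than a laptop.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_name = "Qwen/Qwen3-235B-A22B"  # earlier hybrid reasoning/non-reasoning checkpoint
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype="auto", device_map="auto")

    messages = [{"role": "user", "content": "How many r's are in 'strawberry'?"}]

    # Reasoning on: the template emits a <think>...</think> block before the answer.
    text_think = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True, enable_thinking=True
    )

    # Reasoning off: pass enable_thinking=False ...
    text_no_think = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True, enable_thinking=False
    )
    # ... or append the /no_think soft switch to the user prompt itself.
    messages_soft = [{"role": "user", "content": "How many r's are in 'strawberry'? /no_think"}]

    inputs = tokenizer(text_no_think, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=256)
    print(tokenizer.decode(output[0], skip_special_tokens=True))
    ```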

  • Simon Willison’s Weblog: Advanced version of Gemini with Deep Think officially achieves gold-medal standard at the International Mathematical Olympiad

    Source URL: https://simonwillison.net/2025/Jul/21/gemini-imo/#atom-everything
    Summary: OpenAI beat them to the punch in terms of publicity by publishing their…

  • AWS News Blog: New Amazon EC2 P6e-GB200 UltraServers accelerated by NVIDIA Grace Blackwell GPUs for the highest AI performance

    Source URL: https://aws.amazon.com/blogs/aws/new-amazon-ec2-p6e-gb200-ultraservers-powered-by-nvidia-grace-blackwell-gpus-for-the-highest-ai-performance/
    Summary: Amazon announces the general availability of EC2 P6e-GB200 UltraServers, powered by NVIDIA Grace Blackwell GB200 superchips that enable up to 72 GPUs with 360 petaflops of computing power for…

  • Slashdot: Simple Text Additions Can Fool Advanced AI Reasoning Models, Researchers Find

    Source URL: https://tech.slashdot.org/story/25/07/04/1521245/simple-text-additions-can-fool-advanced-ai-reasoning-models-researchers-find
    Summary: The research highlights a significant vulnerability in state-of-the-art reasoning AI models through the “CatAttack” technique, which attaches irrelevant phrases to math problems, leading to higher error rates and inefficient responses.…
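
    A rough sketch of the kind of probe that finding implies: append an irrelevant trigger phrase to each math problem and compare the answer and response length against the clean prompt. The ask_model helper, the trigger phrase, and the scoring below are placeholders rather than the paper's actual setup.

    ```python
    # Hedged sketch of a CatAttack-style probe: append an irrelevant phrase to a
    # math problem and check whether the answer changes or the response balloons.
    # ask_model is a placeholder for whatever chat-completion client you use.
    from typing import Callable

    DISTRACTOR = "Interesting fact: cats sleep for most of their lives."  # illustrative trigger

    def probe(problems: list[dict], ask_model: Callable[[str], str]) -> list[dict]:
        """problems: [{'question': str, 'answer': str}, ...]"""
        results = []
        for p in problems:
            clean = ask_model(p["question"])
            attacked = ask_model(p["question"] + " " + DISTRACTOR)
            results.append({
                "question": p["question"],
                "clean_correct": p["answer"] in clean,
                "attacked_correct": p["answer"] in attacked,
                # Longer responses under attack are the "inefficient responses"
                # the summary mentions.
                "length_ratio": len(attacked) / max(len(clean), 1),
            })
        return results
    ```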

  • Cloud Blog: Tools Make an Agent: From Zero to Assistant with ADK

    Source URL: https://cloud.google.com/blog/topics/developers-practitioners/tools-make-an-agent-from-zero-to-assistant-with-adk/
    Summary: Imagine that you’re a project manager at QuantumRoast, a global coffee machine company. You help your teammates navigate a sea of engineering roadmaps, sudden strategy pivots (we’re doing matcha now!), and incoming tickets from customers — everything…
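
    A minimal sketch of the post's "tools make an agent" idea in the shape of the ADK Python quickstart: a plain function becomes a tool the agent can call. The Agent constructor arguments follow the public quickstart rather than the blog post itself, and the ticket data, tool function, and model name are made up for illustration.

    ```python
    # Hedged sketch of a tiny ADK agent: a plain Python function becomes a tool.
    # Assumes google-adk's quickstart-style Agent(...) constructor; the ticket
    # lookup and model name are illustrative, not taken from the blog post.
    from google.adk.agents import Agent

    _TICKETS = {  # stand-in for QuantumRoast's real ticket system
        "TICKET-101": "Espresso machine firmware reboots mid-brew.",
        "TICKET-102": "Matcha whisk attachment out of stock in EU warehouse.",
    }

    def get_ticket(ticket_id: str) -> dict:
        """Look up a customer ticket by ID."""
        summary = _TICKETS.get(ticket_id)
        if summary is None:
            return {"status": "error", "message": f"Unknown ticket {ticket_id}"}
        return {"status": "ok", "ticket_id": ticket_id, "summary": summary}

    root_agent = Agent(
        name="project_assistant",
        model="gemini-2.0-flash",
        description="Helps a QuantumRoast project manager triage incoming tickets.",
        instruction="Answer questions about tickets using the get_ticket tool.",
        tools=[get_ticket],
    )
    ```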

  • Simon Willison’s Weblog: AbsenceBench: Language Models Can’t Tell What’s Missing

    Source URL: https://simonwillison.net/2025/Jun/20/absencebench/#atom-everything
    Summary: Here’s another interesting result to file under the “jagged frontier” of LLMs, where their strengths and weaknesses are often unintuitive. Long context models have been getting increasingly good at passing “Needle…
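
    A hedged sketch of an AbsenceBench-style probe: delete a few lines from a document, show the model the original and the modified copy, and score how many of the removed lines it can name. The ask_model helper and the crude recall metric are placeholders; the real benchmark's prompts and scoring are more careful than this.

    ```python
    # Hedged sketch of an AbsenceBench-style omission test: delete random lines,
    # then ask the model to list exactly what is missing from the modified copy.
    # ask_model is a placeholder chat client, not the benchmark's harness.
    import random
    from typing import Callable

    def omission_probe(document: str, ask_model: Callable[[str], str], n_remove: int = 3) -> float:
        lines = [l for l in document.splitlines() if l.strip()]
        removed = random.sample(lines, k=min(n_remove, len(lines)))
        if not removed:
            return 1.0
        modified = "\n".join(l for l in lines if l not in removed)

        prompt = (
            "Here is the original document:\n" + "\n".join(lines) +
            "\n\nHere is a modified copy with some lines removed:\n" + modified +
            "\n\nList every line that is present in the original but missing from the copy."
        )
        reply = ask_model(prompt)
        # Crude recall: fraction of removed lines the model actually mentions.
        found = sum(1 for l in removed if l in reply)
        return found / len(removed)
    ```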

  • Slashdot: Reasoning LLMs Deliver Value Today, So AGI Hype Doesn’t Matter

    Source URL: https://slashdot.org/story/25/06/19/165237/reasoning-llms-deliver-value-today-so-agi-hype-doesnt-matter?utm_source=rss1.0mainlinkanon&utm_medium=feed
    Summary: The commentary by Simon Willison highlights a debate surrounding the effectiveness and applicability of large language models (LLMs), particularly in the context of their limitations and the recent critiques by various…

  • The Register: MiniMax M1 model claims Chinese LLM crown from DeepSeek – plus it’s true open-source

    Source URL: https://www.theregister.com/2025/06/17/minimax_m1_model_chinese_llm/
    Summary: China’s ‘little dragons’ pose big challenge to US AI firms. MiniMax, an AI firm based in Shanghai, has released an open-source reasoning model that challenges Chinese rival DeepSeek and US-based Anthropic, OpenAI,…