Tag: model performance

  • Simon Willison’s Weblog: Anthropic status: Model output quality

    Source URL: https://simonwillison.net/2025/Sep/9/anthropic-model-output-quality/ Feedly Summary: Anthropic previously reported model serving bugs that affected Claude Opus 4 and 4.1 for 56.5 hours. They’ve now fixed additional bugs affecting “a small percentage” of Sonnet 4 requests for almost a month, plus a…

  • Simon Willison’s Weblog: Claude Opus 4.1 and Opus 4 degraded quality

    Source URL: https://simonwillison.net/2025/Aug/30/claude-degraded-quality/#atom-everything Feedly Summary: Notable because when people complain of degraded model quality it often turns out to be unfounded – Anthropic have emphasized in the past that they don’t change the model…

  • The Register: Tinker with LLMs in the privacy of your own home using Llama.cpp

    Source URL: https://www.theregister.com/2025/08/24/llama_cpp_hands_on/ Feedly Summary: Everything you need to know to build, run, serve, optimize and quantize models on your PC. Training large language models (LLMs) may require millions or even billions of dollars of infrastructure, but…
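    One reason local tinkering with llama.cpp is feasible at all is quantization. As a rough, back-of-the-envelope sketch (the bits-per-weight figures below are approximations for common llama.cpp quant types, and the 32.8B parameter count is borrowed from the XBai o4 item on this page, not from the article):

    ```python
    # Approximate GGUF model sizes at different quantization levels.
    # Bits-per-weight values are rough averages for llama.cpp quant formats
    # (block scales add a little overhead beyond the nominal bit width).

    QUANT_BITS = {
        "F16": 16.0,    # unquantized half precision
        "Q8_0": 8.5,    # ~8 bits per weight plus scale overhead
        "Q4_K_M": 4.8,  # popular quality/size trade-off
    }

    def approx_size_gb(params: float, bits_per_weight: float) -> float:
        """Approximate on-disk / in-RAM weight size in gigabytes."""
        return params * bits_per_weight / 8 / 1e9

    for name, bits in QUANT_BITS.items():
        print(f"{name:7s} ~{approx_size_gb(32.8e9, bits):.1f} GB")
    ```

    At 4-5 bits per weight, a ~33B model drops from roughly 66 GB at F16 to around 20 GB, which is the difference between needing datacenter hardware and fitting on a well-equipped PC.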

  • Wired: Do Large Language Models Dream of AI Agents?

    Source URL: https://www.wired.com/story/sleeptime-compute-chatbots-memory/ Feedly Summary: For AI models, knowing what to remember might be as important as knowing what to forget. Welcome to the era of “sleeptime compute.” AI Summary: The text introduces the concept of “sleeptime compute,” which emphasizes…

  • Simon Willison’s Weblog: Open weight LLMs exhibit inconsistent performance across providers

    Source URL: https://simonwillison.net/2025/Aug/15/inconsistent-performance/ Feedly Summary: Artificial Analysis published a new benchmark the other day, this time focusing on how an individual model – OpenAI’s gpt-oss-120b – performs across different hosted providers. The results showed some surprising differences. Here’s the one with the…
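    The core of such a cross-provider benchmark is simple: send the same questions to the same model on several hosts, then score each host’s answers against a shared key. A minimal sketch of that aggregation step, with made-up provider names and answers (not Artificial Analysis’s actual methodology or data):

    ```python
    from collections import defaultdict

    # Hypothetical (provider, question_id, answer) records for one
    # open-weight model served by different hosts, plus a shared key.
    answer_key = {"q1": "A", "q2": "C", "q3": "B"}

    responses = [
        ("host-x", "q1", "A"), ("host-x", "q2", "C"), ("host-x", "q3", "B"),
        ("host-y", "q1", "A"), ("host-y", "q2", "B"), ("host-y", "q3", "B"),
    ]

    def accuracy_by_provider(responses, key):
        """Fraction of correct answers per provider."""
        correct, total = defaultdict(int), defaultdict(int)
        for provider, qid, answer in responses:
            total[provider] += 1
            correct[provider] += (answer == key[qid])
        return {p: correct[p] / total[p] for p in total}

    print(accuracy_by_provider(responses, answer_key))
    # host-x answers all three correctly; host-y misses q2.
    ```

    Any gap between providers on identical inputs points at serving differences – quantization, sampling settings, or inference-stack bugs – rather than at the model weights themselves, which is what made the benchmark’s results surprising.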

  • Simon Willison’s Weblog: Quoting Sam Altman

    Source URL: https://simonwillison.net/2025/Aug/8/sam-altman/#atom-everything Feedly Summary: GPT-5 rollout updates: We are going to double GPT-5 rate limits for ChatGPT Plus users as we finish rollout. We will let Plus users choose to continue to use 4o. We will watch usage as we think about how long to offer…

  • Wired: OpenAI Just Released Its First Open-Weight Models Since GPT-2

    Source URL: https://www.wired.com/story/openai-just-released-its-first-open-weight-models-since-gpt-2/ Feedly Summary: The models, gpt-oss-120b and gpt-oss-20b, represent a major shift for the AI company. AI Summary: The text references the introduction of two new models, gpt-oss-120b and gpt-oss-20b, which can have significant implications for the…

  • Cloud Blog: Announcements for AI Hypercomputer: The latest infrastructure news for ML practitioners

    Source URL: https://cloud.google.com/blog/products/ai-machine-learning/q2-2025-ai-hypercomputer-updates/ Feedly Summary: Curious about the latest in AI infrastructure from Google Cloud? Every three months we share a roundup of the latest AI Hypercomputer news, resources, events, learning opportunities, and more. Read on to learn new ways…

  • Simon Willison’s Weblog: XBai o4

    Source URL: https://simonwillison.net/2025/Aug/3/xbai-o4/#atom-everything Feedly Summary: Yet another open source (Apache 2.0) LLM from a Chinese AI lab. The model card claims: XBai o4 excels in complex reasoning capabilities and has now completely surpassed OpenAI-o3-mini in Medium mode. This is a 32.8 billion parameter model released by MetaStone…