Tag: thinking

  • Simon Willison’s Weblog: Quoting Ethan Mollick

    Source URL: https://simonwillison.net/2025/Aug/9/ethan-mollick/#atom-everything
    Source: Simon Willison’s Weblog
    Title: Quoting Ethan Mollick
    Feedly Summary: The issue with GPT-5 in a nutshell is that unless you pay for model switching & know to use GPT-5 Thinking or Pro, when you ask “GPT-5” you sometimes get the best available AI & sometimes get one of the worst AIs…

  • Simon Willison’s Weblog: Quoting Sam Altman

    Source URL: https://simonwillison.net/2025/Aug/8/sam-altman/#atom-everything
    Source: Simon Willison’s Weblog
    Title: Quoting Sam Altman
    Feedly Summary: GPT-5 rollout updates: We are going to double GPT-5 rate limits for ChatGPT Plus users as we finish rollout. We will let Plus users choose to continue to use 4o. We will watch usage as we think about how long to offer…

  • Simon Willison’s Weblog: GPT-5: Key characteristics, pricing and model card

    Source URL: https://simonwillison.net/2025/Aug/7/gpt-5/#atom-everything
    Source: Simon Willison’s Weblog
    Title: GPT-5: Key characteristics, pricing and model card
    Feedly Summary: I’ve had preview access to the new GPT-5 model family for the past two weeks, and have been using GPT-5 as my daily-driver. It’s my new favorite model. It’s still an LLM – it’s not a dramatic departure…

  • Simon Willison’s Weblog: Qwen3-4B Instruct and Thinking

    Source URL: https://simonwillison.net/2025/Aug/6/qwen3-4b-instruct-and-thinking/
    Source: Simon Willison’s Weblog
    Title: Qwen3-4B Instruct and Thinking
    Feedly Summary: Qwen3-4B Instruct and Thinking Yet another interesting model from Qwen—these are tiny compared to their other recent releases (just 4B parameters, 7.5GB on Hugging Face and even smaller when quantized) but with a 262,144 context length, which Qwen suggest is essential…

  • Simon Willison’s Weblog: OpenAI’s new open weight (Apache 2) models are really good

    Source URL: https://simonwillison.net/2025/Aug/5/gpt-oss/
    Source: Simon Willison’s Weblog
    Title: OpenAI’s new open weight (Apache 2) models are really good
    Feedly Summary: The long promised OpenAI open weight models are here, and they are very impressive. They’re available under proper open source licenses – Apache 2.0 – and come in two sizes, 120B and 20B. OpenAI’s own…

  • Simon Willison’s Weblog: ChatGPT agent’s user-agent

    Source URL: https://simonwillison.net/2025/Aug/4/chatgpt-agents-user-agent/#atom-everything
    Source: Simon Willison’s Weblog
    Title: ChatGPT agent’s user-agent
    Feedly Summary: I was exploring how ChatGPT agent works today. I learned some interesting things about how it exposes its identity through HTTP headers, then made a huge blunder in thinking it was leaking its URLs to Bingbot and Yandex… but it turned out…

  • Cloud Blog: How Cake, Vietnam’s leading digital bank, found the right mix of simplicity and security with ChromeOS and Chrome Enterprise

    Source URL: https://cloud.google.com/blog/products/chrome-enterprise/how-cake-vietnams-leading-digital-bank-found-the-right-mix-of-simplicity-and-security-with-chromeos-and-chrome-enterprise/
    Source: Cloud Blog
    Title: How Cake, Vietnam’s leading digital bank, found the right mix of simplicity and security with ChromeOS and Chrome Enterprise
    Feedly Summary: Editor’s note: Today’s post is by Hiển Từ Thế (Jay), Chief Technology Officer for Cake Digital Bank, a prominent digital-only bank in Vietnam offering a comprehensive suite…

  • Simon Willison’s Weblog: XBai o4

    Source URL: https://simonwillison.net/2025/Aug/3/xbai-o4/#atom-everything
    Source: Simon Willison’s Weblog
    Title: XBai o4
    Feedly Summary: XBai o4 Yet another open source (Apache 2.0) LLM from a Chinese AI lab. This model card claims: XBai o4 excels in complex reasoning capabilities and has now completely surpassed OpenAI-o3-mini in Medium mode. This is a 32.8 billion parameter model released by MetaStone…