
  • Simon Willison’s Weblog: Quoting Ted Sanders

    Source URL: https://simonwillison.net/2025/Jun/11/ted-sanders/#atom-everything
    Feedly Summary: [on the cheaper o3] Not quantized. Weights are the same. If we did change the model, we’d release it as a new model with a new name in the API (e.g., o3-turbo-2025-06-10). It would be very annoying to API customers if we…

  • Simon Willison’s Weblog: Quoting Sam Altman

    Source URL: https://simonwillison.net/2025/Jun/10/sam-altman/#atom-everything
    Feedly Summary: (People are often curious about how much energy a ChatGPT query uses; the average query uses about 0.34 watt-hours, about what an oven would use in a little over one second, or a high-efficiency lightbulb would use in a couple of minutes.…
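
A quick back-of-the-envelope check of those comparisons, as a sketch: the 0.34 watt-hour figure is from the quote, but the oven and LED bulb wattages below (roughly 1,200 W and 10 W) are assumed typical values, not numbers from the post.

```python
# Sanity-check the 0.34 Wh comparison with assumed appliance wattages.
QUERY_WH = 0.34      # energy per ChatGPT query, watt-hours (from the quote)
OVEN_W = 1200        # assumed electric oven draw, watts
LED_BULB_W = 10      # assumed high-efficiency LED bulb, watts

def runtime_seconds(power_watts: float, energy_wh: float) -> float:
    """Seconds a device drawing `power_watts` can run on `energy_wh` watt-hours."""
    return energy_wh * 3600 / power_watts

print(f"Oven:     {runtime_seconds(OVEN_W, QUERY_WH):.1f} s")             # ~1 second
print(f"LED bulb: {runtime_seconds(LED_BULB_W, QUERY_WH) / 60:.1f} min")  # ~2 minutes
```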

  • Simon Willison’s Weblog: AI-assisted coding for teams that can’t get away with vibes

    Source URL: https://simonwillison.net/2025/Jun/10/ai-assisted-coding/#atom-everything
    Feedly Summary: AI-assisted coding for teams that can’t get away with vibes This excellent piece by Atharva Raykar offers a bunch of astute observations on AI-assisted development that I haven’t seen written down elsewhere. Building with AI…

  • Simon Willison’s Weblog: o3-pro

    Source URL: https://simonwillison.net/2025/Jun/10/o3-pro/
    Feedly Summary: o3-pro OpenAI released o3-pro today, which they describe as a “version of o3 with more compute for better responses”. It’s only available via the newer Responses API. I’ve added it to my llm-openai-plugin plugin which uses that new API, so you can try it…
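
One way to try it is through the llm Python API once the plugin is installed (llm install llm-openai-plugin). This is a minimal sketch; the openai/o3-pro model ID is an assumption based on the plugin’s naming convention, so confirm the exact identifier with llm models.

```python
# Minimal sketch using the llm Python API; assumes the llm tool and
# llm-openai-plugin are installed and an OpenAI API key is configured.
# The "openai/o3-pro" model ID is an assumption -- verify it with `llm models`.
import llm

model = llm.get_model("openai/o3-pro")
response = model.prompt("In two sentences: what does extra reasoning compute buy you?")
print(response.text())
```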

  • Simon Willison’s Weblog: WWDC: Apple supercharges its tools and technologies for developers

    Source URL: https://simonwillison.net/2025/Jun/9/apple-wwdc/#atom-everything
    Feedly Summary: WWDC: Apple supercharges its tools and technologies for developers Here’s the Apple press release for today’s WWDC announcements. Two things that stood out to me: Foundation Models Framework With the Foundation Models framework, developers will be…

  • Simon Willison’s Weblog: OpenAI hits $10 billion in annual recurring revenue fueled by ChatGPT growth

    Source URL: https://simonwillison.net/2025/Jun/9/openai-revenue/#atom-everything
    Feedly Summary: OpenAI hits $10 billion in annual recurring revenue fueled by ChatGPT growth Noteworthy because OpenAI revenue is a useful indicator of the direction of the generative AI industry in general, and frequently comes…

  • Simon Willison’s Weblog: Quoting David Crawshaw

    Source URL: https://simonwillison.net/2025/Jun/9/david-crawshaw/#atom-everything
    Feedly Summary: The process of learning and experimenting with LLM-derived technology has been an exercise in humility. In general I love learning new things when the art of programming changes […] But LLMs, and more specifically Agents, affect the process of writing programs in…

  • Simon Willison’s Weblog: Qwen3 Embedding

    Source URL: https://simonwillison.net/2025/Jun/8/qwen3-embedding/#atom-everything
    Feedly Summary: Qwen3 Embedding New family of embedding models from Qwen, in three sizes: 0.6B, 4B, 8B – and two categories: Text Embedding and Text Reranking. The full collection can be browsed on Hugging Face. The smallest available model is the 0.6B Q8 one, which…
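
A minimal sketch for trying the smallest text-embedding model via sentence-transformers; the Qwen/Qwen3-Embedding-0.6B repository name is an assumption based on the naming in the announcement, so check the Hugging Face collection for the exact ID.

```python
# Sketch: embed two strings with the smallest Qwen3 embedding model and
# compare them. Assumes sentence-transformers is installed and that the
# model is published as "Qwen/Qwen3-Embedding-0.6B" (an assumption).
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("Qwen/Qwen3-Embedding-0.6B")
docs = [
    "Qwen3 ships both text embedding and text reranking models.",
    "The smallest embedding model in the family has 0.6B parameters.",
]
embeddings = model.encode(docs)                    # shape: (2, embedding_dim)
print(embeddings.shape)
print(util.cos_sim(embeddings[0], embeddings[1]))  # cosine similarity of the pair
```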

  • Simon Willison’s Weblog: Comma v0.1 1T and 2T – 7B LLMs trained on openly licensed text

    Source URL: https://simonwillison.net/2025/Jun/7/comma/#atom-everything
    Feedly Summary: It’s been a long time coming, but we finally have some promising LLMs to try out which are trained entirely on openly licensed text! EleutherAI released the Pile four and a half…