Tag: mollick

  • Simon Willison’s Weblog: Quoting Ethan Mollick

    Source URL: https://simonwillison.net/2025/Aug/9/ethan-mollick/#atom-everything
    Feedly Summary: The issue with GPT-5 in a nutshell is that unless you pay for model switching & know to use GPT-5 Thinking or Pro, when you ask “GPT-5” you sometimes get the best available AI & sometimes get one of the worst AIs…

  • Simon Willison’s Weblog: llvm: InstCombine: improve optimizations for ceiling division with no overflow – a PR by Alex Gaynor and Claude Code

    Source URL: https://simonwillison.net/2025/Jun/30/llvm/
    Feedly Summary: Alex Gaynor maintains rust-asn1, and recently spotted…
    (For context on the “ceiling division with no overflow” pattern named in the title, see the sketch after this list.)

  • Simon Willison’s Weblog: Building software on top of Large Language Models

    Source URL: https://simonwillison.net/2025/May/15/building-on-llms/#atom-everything
    Feedly Summary: I presented a three hour workshop at PyCon US yesterday titled Building software on top of Large Language Models. The goal of the workshop was to give participants everything they needed to get started writing code that…

  • Simon Willison’s Weblog: Quoting Ethan Mollick

    Source URL: https://simonwillison.net/2025/Apr/20/ethan-mollick/#atom-everything
    Feedly Summary: In some tasks, AI is unreliable. In others, it is superhuman. You could, of course, say the same thing about calculators, but it is also clear that AI is different. It is already demonstrating general capabilities and performing a wide range of…

  • Simon Willison’s Weblog: Quoting Ethan Mollick

    Source URL: https://simonwillison.net/2025/Mar/2/ethan-mollick/
    Feedly Summary: After publishing this piece, I was contacted by Anthropic who told me that Sonnet 3.7 would not be considered a 10^26 FLOP model and cost a few tens of millions of dollars to train, though future models will be much bigger. —…

  • Simon Willison’s Weblog: Quoting Ethan Mollick

    Source URL: https://simonwillison.net/2024/Dec/10/ethan-mollick/#atom-everything
    Feedly Summary: Knowing when to use AI turns out to be a form of wisdom, not just technical knowledge. Like most wisdom, it’s somewhat paradoxical: AI is often most useful where we’re already expert enough to spot its mistakes, yet least helpful in the…

  • Simon Willison’s Weblog: Quoting Ethan Mollick

    Source URL: https://simonwillison.net/2024/Dec/7/ethan-mollick/#atom-everything
    Feedly Summary: A test of how seriously your firm is taking AI: when o1 (& the new Gemini) came out this week, were there assigned folks who immediately ran the model through internal, validated, firm-specific benchmarks to see how useful it was? Did you…
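
A note on the “llvm: InstCombine” entry above: its summary is truncated, but as a hint at what “ceiling division with no overflow” refers to, here is a minimal Rust sketch (purely illustrative, not taken from the PR or from rust-asn1) contrasting an overflow-prone ceiling division with an overflow-free formulation of the kind such a peephole optimization might target:

    // Illustrative only: two ways to compute ceil(a / b) for unsigned integers, assuming b > 0.

    /// Overflow-prone form: the intermediate `a + b - 1` can overflow when `a` is near u64::MAX
    /// (panics in debug builds, wraps in release builds).
    fn ceil_div_naive(a: u64, b: u64) -> u64 {
        (a + b - 1) / b
    }

    /// Overflow-free form: one division plus a remainder check, with no addition that can overflow.
    fn ceil_div_safe(a: u64, b: u64) -> u64 {
        a / b + if a % b != 0 { 1 } else { 0 }
    }

    fn main() {
        assert_eq!(ceil_div_naive(10, 3), 4);
        assert_eq!(ceil_div_safe(10, 3), 4);
        // The overflow-free form stays correct at the top of the range.
        assert_eq!(ceil_div_safe(u64::MAX, 2), (u64::MAX / 2) + 1);
    }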