Tag: inference scaling

  • Hacker News: SOTA on swebench-verified: relearning the bitter lesson

    Source URL: https://aide.dev/blog/sota-bitter-lesson
    AI Summary: The text discusses advancements in AI, particularly around leveraging large language models (LLMs) for software engineering challenges through novel approaches such as test-time inference scaling. It emphasizes the key insight that scaling…

  • Simon Willison’s Weblog: December in LLMs has been a lot

    Source URL: https://simonwillison.net/2024/Dec/20/december-in-llms-has-been-a-lot/#atom-everything
    Feedly Summary: I had big plans for December: for one thing, I was hoping to get to an actual RC of Datasette 1.0, in preparation for a full release in January. Instead, I’ve found myself distracted by a constant barrage…

  • Simon Willison’s Weblog: Gemini 2.0 Flash "Thinking mode"

    Source URL: https://simonwillison.net/2024/Dec/19/gemini-thinking-mode/#atom-everything
    Feedly Summary: Those new model releases just keep on flowing. Today it’s Google’s snappily named gemini-2.0-flash-thinking-exp, their first entrant into the o1-style inference scaling class of models. I posted about a great essay about the significance of these just this morning. From…

  • Simon Willison’s Weblog: Is AI progress slowing down?

    Source URL: https://simonwillison.net/2024/Dec/19/is-ai-progress-slowing-down/#atom-everything
    Feedly Summary: This piece by Arvind Narayanan and Sayash Kapoor is the single most insightful essay about AI and LLMs I’ve seen in a long time. It’s long and worth reading every inch of it – it…