Simon Willison’s Weblog: Quoting Benedict Evans

Source URL: https://simonwillison.net/2025/Feb/2/benedict-evans/#atom-everything
Source: Simon Willison’s Weblog
Title: Quoting Benedict Evans

Feedly Summary: Part of the concept of ‘Disruption’ is that important new technologies tend to be bad at the things that matter to the previous generation of technology, but they do something else important instead. Asking if an LLM can do very specific and precise information retrieval might be like asking if an Apple II can match the uptime of a mainframe, or asking if you can build Photoshop inside Netscape. No, they can’t really do that, but that’s not the point and doesn’t mean they’re useless. They do something else, and that ‘something else’ matters more and pulls in all of the investment, innovation and company creation. Maybe, 20 years later, they can do the old thing too – maybe you can run a bank on PCs and build graphics software in a browser, eventually – but that’s not what matters at the beginning. They unlock something else.
What is that ‘something else’ for generative AI, though? How do you think conceptually about places where that error rate is a feature, not a bug?
— Benedict Evans, Are better models better?
Tags: benedict-evans, llms, ai, generative-ai

AI Summary and Description: Yes

Summary: The text discusses the concept of ‘Disruption’ in the context of new technologies, particularly focusing on how emerging technologies, such as large language models (LLMs) and generative AI, may excel in areas distinct from traditional technological capabilities. It emphasizes the importance of understanding the value that these technologies bring, despite their shortcomings in established tasks. Key insights revolve around innovation and the transformative potential of generative AI.

Detailed Description:

The text reflects on the disruptive nature of emerging technologies and positions LLMs and generative AI within this framework. It provokes a conversation about the expectations placed on new tools versus their actual potential and value.

– **Comparison to Historical Technologies**:
  – The author draws parallels between today’s technology (LLMs, generative AI) and historical technology (e.g., Apple II vs. mainframe computers) to illustrate how new advancements often have limitations in areas where older technologies excelled.
  – Just as early personal computers could not match the reliability of mainframe systems, LLMs may struggle with precise information retrieval compared to traditional systems.

– **Focus on What Matters**:
  – The core argument suggests that while LLMs may not perform certain tasks as well as previous technologies, they unlock new possibilities that are transformative.
  – The emphasis is on the ‘something else’ that generative AI offers, pointing to its creative potential, efficiency in tasks, or capability to assist in areas where traditional tools fall short.

– **Innovation Over Perfection**:
  – The text encourages a paradigm shift in how technological performance is evaluated. An important idea is that an error rate can sometimes be a ‘feature, not a bug,’ highlighting scenarios where flexibility and creativity outweigh precision.

– **Future Potential**:
  – The author proposes that while current capabilities may not include performing traditional tasks perfectly, the trajectory of technology suggests that these new tools could eventually evolve to handle those tasks too.

This discussion is particularly relevant for professionals in AI and infrastructure security, who need to understand not only the current limitations of generative AI and LLMs but also their future implications and innovations. Recognizing these evolving capabilities can inform strategic planning, risk management, and investment in new technologies.