Simon Willison’s Weblog: Coding with LLMs in the summer of 2025 (an update)

Source URL: https://simonwillison.net/2025/Jul/21/coding-with-llms/#atom-everything
Source: Simon Willison’s Weblog
Title: Coding with LLMs in the summer of 2025 (an update)

Feedly Summary: Coding with LLMs in the summer of 2025 (an update)
Salvatore Sanfilippo describes his current AI-assisted development workflow. He’s all-in on LLMs for code review, exploratory prototyping, pair-design and writing “part of the code under your clear specifications", but warns against leaning too hard on pure vibe coding:

But while LLMs can write part of a code base with success (under your strict supervision, see later), and produce a very sensible speedup in development (or, the ability to develop more/better in the same time used in the past — which is what I do), when left alone with nontrivial goals they tend to produce fragile code bases that are larger than needed, complex, full of local minima choices, suboptimal in many ways. Moreover they just fail completely when the task at hand is more complex than a given level.

There are plenty of useful tips in there, especially around carefully managing your context:

When your goal is to reason with an LLM about implementing or fixing some code, you need to provide extensive information to the LLM: papers, big parts of the target code base (all the code base if possible, unless this is going to make the context window so large than the LLM performances will be impaired). And a brain dump of all your understanding of what should be done.

Salvatore warns against relying too hard on tools which hide the context for you, like editors with integrated coding agents. He prefers pasting exactly what’s needed into the LLM web interface – I share his preference there.
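As a concrete illustration of that paste-it-yourself workflow (a sketch only, not Salvatore’s or Simon’s actual tooling), a few lines of Python can assemble a handful of hand-picked files plus a short brain dump into a single prompt ready to paste into an LLM web interface. The file names and notes below are hypothetical:

```python
from pathlib import Path

# Hand-picked files to share with the LLM -- hypothetical paths for illustration.
FILES = ["src/server.c", "src/networking.c", "docs/replication-notes.md"]

# A free-form brain dump of your own understanding of the task.
BRAIN_DUMP = """
Goal: fix the reconnection bug in the replication code.
Constraints: keep the change small; no new dependencies.
"""

def build_prompt(files=FILES, brain_dump=BRAIN_DUMP) -> str:
    """Concatenate your notes and the chosen files into one paste-ready prompt."""
    parts = [brain_dump.strip(), ""]
    for name in files:
        parts.append(f"--- {name} ---")
        parts.append(Path(name).read_text())
    return "\n".join(parts)

if __name__ == "__main__":
    # Print to stdout so it can be piped into a clipboard tool (e.g. pbcopy or xclip).
    print(build_prompt())
```

Piping the output through a clipboard tool keeps the copy-paste step quick while leaving you in full control of exactly what the model sees.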
His conclusions here match my experience:

You will be able to do things that are otherwise at the borders of your knowledge / expertise while learning much in the process (yes, you can learn from LLMs, as you can learn from books or colleagues: it is one of the forms of education possible, a new one). Yet, everything produced will follow your idea of code and product, and will be of high quality and will not random fail because of errors and shortcomings introduced by the LLM. You will also retain a strong understanding of all the code written and its design.

Via Hacker News
Tags: salvatore-sanfilippo, ai, generative-ai, llms, ai-assisted-programming, vibe-coding

AI Summary and Description: Yes

Summary: The text discusses the evolving role of Large Language Models (LLMs) in software development, emphasizing their applications and limitations in code generation. It provides insights into how LLMs can be effectively integrated into coding workflows while urging caution against over-reliance on these tools without proper context management.

Detailed Description: The post quotes Salvatore Sanfilippo’s account of utilizing LLMs for software development in 2025. Key points of discussion include:

– **AI-assisted Development Workflow**: The author is fully leveraging LLMs for various coding purposes, including:
  – **Code review**: Enhancing existing code quality through AI analysis.
  – **Exploratory prototyping**: Quickly generating and testing ideas without heavy upfront investment.
  – **Pair-design**: Collaboratively designing code alongside AI.
  – **Code generation**: Writing code based on clear specifications.

– **Warnings on LLM Limitations**: Sanfilippo cautions against:
  – **Over-reliance on LLMs**: While they can significantly speed up development, they may result in poor-quality code if not managed correctly. He notes that LLMs tend to:
    – Create fragile code bases.
    – Generate overly complex or suboptimal code.
    – Fail on complex tasks beyond their capability.

– **Effective Context Management**: The author emphasizes the need for comprehensive input when interacting with LLMs (a rough token-budget sketch follows this list), including:
  – Providing detailed background information and relevant project material, such as papers and a brain dump of your understanding of the task.
  – Sharing substantial portions of the codebase, while staying mindful of context-window limits that can degrade LLM performance.

– **Personal Preference for Direct Interaction**: Sanfilippo prefers pasting exact code or specifications into LLM interfaces instead of using tools that obscure context. This approach maintains quality and clarity, ensuring that the developer retains control over the coding process.

– **Learning and Quality Assurance**: The use of LLMs is viewed as a new educational tool:
  – Developers can learn from LLMs much as they learn from books or colleagues.
  – Emphasis on producing quality code that follows the developer’s own idea of code and product, while retaining a strong understanding of everything that was written.
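To make the context-window caution above concrete, here is a minimal, assumption-laden sketch (not part of the original post): it approximates tokens as characters divided by four, a common rule of thumb rather than a real tokenizer, and flags when a hypothetical set of files would exceed a chosen budget before you paste them into the LLM.

```python
from pathlib import Path

# Rough rule of thumb: ~4 characters per token for English text and code.
# This is an approximation, not a real tokenizer.
CHARS_PER_TOKEN = 4

def estimate_tokens(text: str) -> int:
    """Very rough token estimate based on character count."""
    return len(text) // CHARS_PER_TOKEN

def check_budget(paths, budget_tokens=100_000) -> None:
    """Report the approximate token cost of each file and the running total."""
    total = 0
    for p in paths:
        tokens = estimate_tokens(Path(p).read_text())
        total += tokens
        print(f"{p}: ~{tokens} tokens (running total ~{total})")
    if total > budget_tokens:
        print(f"Over budget: trim or summarize files (~{total} > {budget_tokens}).")
    else:
        print(f"Within budget (~{total} of {budget_tokens}).")

if __name__ == "__main__":
    # Hypothetical file list; swap in whatever parts of your codebase you plan to paste.
    check_budget(["src/server.c", "src/networking.c"])
```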

In summary, this discussion highlights the dual nature of LLMs in software development, showcasing both their potential and the critical considerations for effective use. Security and compliance experts in AI and software development should take note of these insights as they navigate the implications of integrating such technologies into their workflows.