Simon Willison’s Weblog: My LLM codegen workflow atm

Source URL: https://simonwillison.net/2025/Feb/21/my-llm-codegen-workflow-atm/#atom-everything
Source: Simon Willison’s Weblog
Title: My LLM codegen workflow atm

Feedly Summary: My LLM codegen workflow atm
Harper Reed describes his workflow for writing code with the assistance of LLMs.
This is clearly a very well-thought-out process, which has evolved a lot already and continues to change.
Harper starts greenfield projects with a brainstorming step, aiming to produce a detailed spec:

Ask me one question at a time so we can develop a thorough, step-by-step spec for this idea. Each question should build on my previous answers, and our end goal is to have a detailed specification I can hand off to a developer. Let’s do this iteratively and dig into every relevant detail. Remember, only one question at a time.

The end result is saved as spec.md in the repo. He then uses a reasoning model (o3 or similar) to produce an accompanying prompt_plan.md with LLM-generated prompts for the different steps, plus a todo.md with lower-level steps. Code editing models can check things off in this list as they continue, a neat hack for persisting state between multiple model calls.
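As a rough illustration of that check-off pattern (not Harper's actual implementation), here is a minimal Python sketch assuming a hypothetical todo.md written as a Markdown checklist: each model invocation reads the file to find the next step, does the work, then marks the step done, so the file rather than the conversation carries the state.

```python
from pathlib import Path

TODO = Path("todo.md")  # hypothetical shared checklist, e.g. lines like "- [ ] add login form"

def next_unchecked() -> str | None:
    """Return the first step that has not been checked off yet."""
    for line in TODO.read_text().splitlines():
        if line.startswith("- [ ]"):
            return line[len("- [ ] "):]
    return None

def check_off(step: str) -> None:
    """Mark a step as done so the next model call can see the progress."""
    text = TODO.read_text()
    TODO.write_text(text.replace(f"- [ ] {step}", f"- [x] {step}", 1))

# One unit of work per invocation: ask "what's next?", do it, record completion.
step = next_unchecked()
if step:
    # ... hand `step` (plus the matching prompt from prompt_plan.md) to the code-editing model ...
    check_off(step)
```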
Harper has tried this pattern with a bunch of different models and tools, but currently defaults to copy-and-paste to Claude assisted by repomix (a similar tool to my own files-to-prompt) for most of the work.
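For context, the general shape of what tools like repomix and files-to-prompt do — flattening a repository into a single paste-able blob with per-file headers — can be sketched in a few lines of Python; this is an assumed illustration of the idea, not either tool's real output format or CLI.

```python
from pathlib import Path

def pack_repo(root: str, extensions: tuple[str, ...] = (".py", ".md")) -> str:
    """Concatenate matching files under `root` into one prompt-friendly string."""
    chunks = []
    for path in sorted(Path(root).rglob("*")):
        if path.is_file() and path.suffix in extensions:
            chunks.append(f"--- {path} ---\n{path.read_text(errors='replace')}")
    return "\n\n".join(chunks)

# The packed text gets pasted into Claude alongside the relevant prompt.
print(pack_repo("."))
```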
How well has this worked?

My hack to-do list is empty because I built everything. I keep thinking of new things and knocking them out while watching a movie or something. For the first time in years, I am spending time with new programming languages and tools. This is pushing me to expand my programming perspective.

There’s a bunch more in there about using LLMs with existing large projects, including several extremely useful example prompts.
Harper ends with this call to action for the wider community:

I have spent years coding by myself, years coding as a pair, and years coding in a team. It is always better with people. These workflows are not easy to use as a team. The bots collide, the merges are horrific, the context complicated.
I really want someone to solve this problem in a way that makes coding with an LLM a multiplayer game. Not a solo hacker experience. There is so much opportunity to fix this and make it amazing.

Via Hacker News
Tags: prompt-engineering, ai-assisted-programming, generative-ai, ai, llms

AI Summary and Description: Yes

Summary: The text discusses Harper Reed’s code generation workflow built around LLMs (Large Language Models). His iterative approach emphasizes detailed specification, incremental development, and collaboration, while highlighting the challenges of integrating LLMs into team-based programming environments. This insight is particularly relevant for professionals focused on AI, MLOps, and generative AI security.

Detailed Description: The content describes Harper Reed’s innovative approach to code generation using LLMs. His workflow reflects a structured process aimed at enhancing productivity and collaborative development. Key highlights include:

– **Iterative Specification Development**:
  – Begins with a brainstorming step to create a detailed specification for the programming idea.
  – Uses a one-question-at-a-time format to derive the spec, promoting clarity and precision.

– **Utilization of Reasoning Models and LLMs**:
  – Employs a reasoning model, such as o3, to generate a prompt plan and a lower-level task list (todo.md) that guide code development.
  – Code-editing models check off completed steps in the task list, persisting state across separate model calls.

– **Flexibility in Tools**:
  – Harper has experimented with various models, but currently defaults to copying and pasting into Claude, assisted by repomix, for most code generation tasks.
  – His adaptability reflects how quickly tools in the AI-assisted programming space are evolving.

– **Continual Learning and Exploration**:
  – Highlights the productivity gains and renewed interest in exploring new programming languages and tools, facilitated by LLMs.

– **Community Collaboration Challenges**:
  – Acknowledges difficulties in using LLMs within team settings—issues include merging code efficiently and managing context.
  – Calls for innovation to make coding with LLMs a collaborative, “multiplayer” experience rather than a solitary one.

– **Opportunities Identified**:
  – Emphasizes the need for solutions that improve collaborative workflows when using LLMs, indicating a gap in the current tooling.

This analysis not only underlines the technological advancements made in AI-assisted programming but also presents vital considerations for security and compliance professionals focusing on how LLMs can be securely integrated into existing development workflows. It also hints at a forthcoming need for governance and standards as collaborative programming with AI tools becomes more prevalent in enterprise environments.