Source URL: https://simonwillison.net/2025/Oct/7/vibe-engineering/#atom-everything
Source: Simon Willison’s Weblog
Title: Vibe engineering
Feedly Summary: I feel like vibe coding is pretty well established now as covering the fast, loose and irresponsible way of building software with AI – entirely prompt-driven, and with no attention paid to how the code actually works. This leaves us with a terminology gap: what should we call the other end of the spectrum, where seasoned professionals accelerate their work with LLMs while staying proudly and confidently accountable for the software they produce?
I propose we call this vibe engineering, with my tongue only partially in my cheek.
One of the lesser-spoken truths of working productively with LLMs as a software engineer on non-toy projects is that it’s difficult. There’s a lot of depth to understanding how to use the tools, there are plenty of traps to avoid, and the pace at which they can churn out working code raises the bar for what the human participant can and should be contributing.
The rise of coding agents – tools like Claude Code (released February 2025), OpenAI’s Codex CLI (April) and Gemini CLI (June) that can iterate on code, actively testing and modifying it until it achieves a specified goal – has dramatically increased the usefulness of LLMs for real-world coding problems.
I’m increasingly hearing from experienced, credible software engineers who are running multiple copies of agents at once, tackling several problems in parallel and expanding the scope of what they can take on. I was skeptical of this at first but I’ve started running multiple agents myself now and it’s surprisingly effective, if mentally exhausting!
This feels very different from classic vibe coding, where I outsource a simple, low-stakes task to an LLM and accept the result if it appears to work. Most of the tools in my tools.simonwillison.net collection (previously) were built like that. Iterating with coding agents to produce production-quality code that I’m confident I can maintain in the future feels like a different process entirely.
It’s also become clear to me that LLMs actively reward existing top tier software engineering practices:
Automated testing. If your project has a robust, comprehensive and stable test suite, agentic coding tools can fly with it. Without tests? Your agent might claim something works without having actually tested it at all, plus any new change could break an unrelated feature without you realizing it. Test-first development is particularly effective with agents that can iterate in a loop (there’s a sketch of what that can look like after this list).
Planning in advance. Sitting down to hack something together goes much better if you start with a high level plan. Working with an agent makes this even more important – you can iterate on the plan first, then hand it off to the agent to write the code.
Comprehensive documentation. Just like human programmers, an LLM can only keep a subset of the codebase in its context at once. Being able to feed in relevant documentation lets it use APIs from other areas without reading the code first. Write good documentation first and the model may be able to build the matching implementation from that input alone.
Good version control habits. Being able to undo mistakes and understand when and how something was changed is even more important when a coding agent might have made the changes. LLMs are also fiercely competent at Git – they can navigate the history themselves to track down the origin of bugs, and they’re better than most developers at using git bisect. Use that to your advantage (a sketch of a bisect-friendly check script also follows this list).
Having effective automation in place. Continuous integration, automated formatting and linting, continuous deployment to a preview environment – all things that agentic coding tools can benefit from too. LLMs make writing quick automation scripts easier as well, which can help them then repeat tasks accurately and consistently next time.
A culture of code review. This one explains itself. If you’re fast and productive at code review you’re going to have a much better time working with LLMs than if you’d rather write code yourself than review the same thing written by someone (or something) else.
A very weird form of management. Getting good results out of a coding agent feels uncomfortably close to getting good results out of a human collaborator. You need to provide clear instructions, ensure they have the necessary context and provide actionable feedback on what they produce. It’s a lot easier than working with actual people because you don’t have to worry about offending or discouraging them – but any existing management experience you have will prove surprisingly useful.
Really good manual QA (quality assurance). Beyond automated tests, you need to be really good at manually testing software, including predicting and digging into edge-cases.
Strong research skills. There are dozens of ways to solve any given coding problem. Figuring out the best options and proving an approach has always been important, and remains a blocker on unleashing an agent to write the actual code.
The ability to ship to a preview environment. If an agent builds a feature, having a way to safely preview that feature (without deploying it straight to production) makes reviews much more productive and greatly reduces the risk of shipping something broken.
An instinct for what can be outsourced to AI and what you need to manually handle yourself. This is constantly evolving as the models and tools become more effective. A big part of working effectively with LLMs is maintaining a strong intuition for when they can best be applied.
An updated sense of estimation. Estimating how long a project will take has always been one of the hardest but most important parts of being a senior engineer, especially in organizations where budget and strategy decisions are made based on those estimates. AI-assisted coding makes this even harder – things that used to take a long time are much faster, but estimations now depend on new factors which we’re all still trying to figure out.
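To make the test-first loop mentioned above concrete, here is a minimal sketch of the kind of test file a human (or the agent itself) might write before any implementation exists. The slugify() helper and the app.text module are hypothetical, named purely for illustration; the agent’s job is to create them and keep iterating until this file passes without breaking the rest of the suite.

```python
# test_slugify.py -- written before the implementation exists.
# Point the agent at this file and ask it to make the tests pass by
# implementing slugify() in a (hypothetical) app/text.py module,
# while keeping the rest of the suite green.
import pytest

from app.text import slugify  # does not exist yet; the agent creates it


def test_basic_slug():
    assert slugify("Hello, World!") == "hello-world"


def test_collapses_whitespace_and_punctuation():
    assert slugify("  Vibe   Engineering?! ") == "vibe-engineering"


def test_empty_string_is_allowed():
    assert slugify("") == ""


@pytest.mark.parametrize("bad_input", [None, 42, ["a", "list"]])
def test_non_strings_are_rejected(bad_input):
    with pytest.raises(TypeError):
        slugify(bad_input)
```

Running pytest on this file inside the agent’s loop gives it an unambiguous, machine-checkable definition of done.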
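The git bisect point deserves a sketch too. git bisect run accepts any command whose exit code marks the current commit as good (0), bad (1), or untestable (125), so an agent chasing a regression can write a small check script and let Git do the searching. Everything here is illustrative: the failing test path is a placeholder, and the script assumes you (or the agent) have already run git bisect start, git bisect bad and git bisect good <last-known-good-ref> before handing over with git bisect run python bisect_check.py.

```python
#!/usr/bin/env python3
# bisect_check.py -- a check script for `git bisect run`.
# Exit code 0 marks the current commit as good, 1 marks it bad,
# and 125 tells bisect to skip a commit that cannot be tested at all.
import subprocess
import sys

# Placeholder: the regression test you are trying to trace back to its origin.
FAILING_TEST = "tests/test_feeds.py::test_atom_escaping"


def main() -> int:
    # If the project does not even install at this commit, skip it (125).
    build = subprocess.run(
        [sys.executable, "-m", "pip", "install", "-e", "."],
        capture_output=True,
    )
    if build.returncode != 0:
        return 125

    # Run just the regression test; its result decides good (0) vs bad (1).
    result = subprocess.run([sys.executable, "-m", "pytest", "-x", FAILING_TEST])
    return 0 if result.returncode == 0 else 1


if __name__ == "__main__":
    sys.exit(main())
```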
If you’re going to really exploit the capabilities of these new tools, you need to be operating at the top of your game. You’re not just responsible for writing the code – you’re researching approaches, deciding on high-level architecture, writing specifications, defining success criteria, designing agentic loops, planning QA, managing a growing army of weird digital interns who will absolutely cheat if you give them a chance, and spending so much time on code review.
Almost all of these are characteristics of senior software engineers already!
AI tools amplify existing expertise. The more skills and experience you have as a software engineer the faster and better the results you can get from working with LLMs and coding agents.
“Vibe engineering”, really?
Is this a stupid name? Yeah, probably. "Vibes" as a concept in AI feels a little tired at this point. "Vibe coding" itself is used by a lot of developers in a dismissive way. I’m ready to reclaim vibes for something more constructive.
I’ve never really liked the artificial distinction between "coders" and "engineers" – that’s always smelled to me a bit like gatekeeping. But in this case a bit of gatekeeping is exactly what we need!
Vibe engineering establishes a clear distinction from vibe coding. It signals that this is a different, harder and more sophisticated way of working with AI tools to build production software.
I also like that this is cheeky and likely to be controversial. This whole space is still absurd in all sorts of different ways. We shouldn’t take ourselves too seriously while we figure out the most productive ways to apply these new tools.
Tags: definitions, software-engineering, ai, generative-ai, llms, ai-assisted-programming, vibe-coding, coding-agents
AI Summary and Description: Yes
Summary: The text introduces the concept of “vibe engineering,” a term for utilizing large language models (LLMs) effectively in software development while maintaining accountability and code quality. It contrasts this approach with the more irresponsible “vibe coding” and discusses the competencies required to succeed in this evolving landscape, emphasizing the need for robust engineering practices alongside the use of LLMs.
Detailed Description:
– The text critiques the current practice of “vibe coding,” which is characterized by a haphazard, prompt-driven development approach using AI where developers lack accountability for the software produced.
– The author suggests the term “vibe engineering” for a more disciplined approach that integrates seasoned software engineering practices with LLM tools, highlighting the complexity of leveraging LLMs effectively.
– Key points discussed in the text include:
* **Coding Agents**: The emergence of advanced coding agents (like Claude Code and OpenAI’s Codex CLI) that can autonomously test and modify code increases their utility for producing high-quality software.
* **Challenges with LLMs**: Software engineers face substantial challenges when using LLMs for complex projects, necessitating a deep understanding of both the tools and best programming practices.
* **Necessary best practices**:
– **Automated testing**: Importance of having a strong test suite for code reliability.
– **Planning**: The necessity of having a work plan for more effective outcomes in collaboration with LLMs.
– **Documentation and Version Control**: Highlighting the need for thorough documentation and good version control practices to streamline collaboration with coding agents.
– **Automation and Culture of Review**: Organizations must have effective automated systems and a culture of continuous code review to leverage LLM capabilities efficiently.
* **Management Dynamics**: Working with coding agents feels similar to collaboration with human peers, requiring clear instructions and contextual understanding.
* **Quality Assurance and Research**: Emphasizes having strong QA processes and research skills when utilizing LLMs for coding tasks.
* **Evolving Expertise**: Successful integration of LLMs in coding practices requires experienced engineers to take charge, enabling quality outputs through strategic use of AI tools.
* **Estimation Challenges**: AI’s ability to accelerate coding complicates project time estimations—a crucial skill in engineering roles that impacts budget and strategy.
– Ultimately, the text highlights that “vibe engineering” represents a sophisticated and accountable approach to AI-assisted programming, contrasting sharply with the pitfalls of “vibe coding.” This new paradigm signals a shift in how professionals should engage with LLMs to produce maintainable and high-quality software while fostering a culture that balances innovation with diligence.