Simon Willison’s Weblog: AI-assisted development needs automated tests

Source URL: https://simonwillison.net/2025/May/28/automated-tests/

Feedly Summary: I wonder if one of the reasons I’m finding LLMs so much more useful for coding than a lot of people that I see in online discussions is that effectively all of the code I work on has automated tests.
I’ve been trying to stay true to the idea of a Perfect Commit – one that bundles the implementation, tests and documentation in a single unit – for over five years now. As a result almost every piece of (non-vibe-coding) code I work on has pretty comprehensive test coverage.
This massively derisks my use of LLMs. If an LLM writes weird, convoluted code that solves my problem I can prove that it works with tests – and then have it refactor the code until it looks good to me, keeping the tests green the whole time.
LLMs help write the tests, too. I finally have a 24/7 pair programmer who can remember how to use unittest.mock!
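As a hedged illustration of the kind of test an LLM can help write, here is a minimal sketch using `unittest.mock` to stub out an HTTP client. The `fetch_status` function and the client interface are hypothetical examples, not code from the post:

```python
from unittest import mock


def fetch_status(client, url):
    """Hypothetical function under test: return the HTTP status code for url."""
    response = client.get(url)
    return response.status_code


# Mock the HTTP client so the test never touches the network.
fake_client = mock.Mock()
fake_client.get.return_value = mock.Mock(status_code=200)

assert fetch_status(fake_client, "https://example.com") == 200
fake_client.get.assert_called_once_with("https://example.com")
```

The mock both substitutes for the real dependency and records how it was called, which is exactly the part of the `unittest.mock` API that is easy to forget and easy for an LLM to recall.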
Next time someone complains that they’ve found LLMs to be more of a hindrance than a help in their programming work, I’m going to try to remember to ask after the health of their test suite.
Tags: vibe-coding, testing, ai-assisted-programming, generative-ai, ai, llms

AI Summary and Description: Yes

Summary: The text discusses the author’s positive experiences using large language models (LLMs) for coding, emphasizing the importance of having comprehensive automated tests in place. This practice not only enhances the utility of LLMs but also mitigates the risks associated with their use.

Detailed Description: The author shares insights on how their experience with LLMs is significantly enhanced due to the presence of automated testing in their coding workflow. This perspective highlights the interplay between LLMs and secure coding practices, which is particularly relevant for professionals focusing on AI, software security, and DevSecOps. Key points include:

– **Automated Testing**: The author has integrated extensive automated tests into their coding practices, which supports the reliability of LLM-generated code.
– **Perfect Commit Philosophy**: The practice of bundling implementation, tests, and documentation together in a single commit enhances code quality and reduces risks.
– **Risk Mitigation**: Having a robust test suite allows the author to confidently use LLMs, as they can validate the functionality of the generated code. This lowers the likelihood of introducing bugs or vulnerabilities.
– **Collaborative Coding**: LLMs are seen as an effective coding partner that not only helps in writing the code but also assists in generating and managing tests, thereby streamlining the development process.
– **Quality Assurance**: The author notes that the health of a developer’s test suite could be a significant factor in differing perceptions of LLMs; those with little test coverage may find LLMs more of a hindrance than a help.
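The “keep the tests green while refactoring” workflow described above can be sketched as a characterization test: pin the current behavior with assertions, then let the LLM rewrite the implementation freely. The `slugify` function below is a hypothetical stand-in for any convoluted LLM-generated code, not an example from the post:

```python
import re


def slugify(text):
    # Possibly convoluted, LLM-written implementation; safe to refactor
    # as long as the characterization test below stays green.
    text = text.lower().strip()
    text = re.sub(r"[^a-z0-9]+", "-", text)
    return text.strip("-")


def test_slugify_pins_behavior():
    # These cases must keep passing across any refactor.
    assert slugify("Hello, World!") == "hello-world"
    assert slugify("  spaced  out  ") == "spaced-out"
    assert slugify("already-slugged") == "already-slugged"


test_slugify_pins_behavior()
```

With this in place, the implementation can be rewritten repeatedly – by a human or an LLM – and re-running the test after each pass proves the observable behavior has not changed.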

This reflection underscores the critical role that quality assurance practices play when integrating AI tools like LLMs into professional software development, making it a vital consideration for DevSecOps and related domains.