Simon Willison’s Weblog: Models can prompt now

Source URL: https://simonwillison.net/2025/Sep/14/models-can-prompt/#atom-everything
Source: Simon Willison’s Weblog
Title: Models can prompt now

Feedly Summary: Here’s an interesting example of models incrementally improving over time: I am finding that today’s leading models are competent at writing prompts for themselves and each other.
A year ago I was quite skeptical of the pattern where models are used to help build prompts. Prompt engineering was still a young enough discipline that I did not expect the models to have enough training data to be able to prompt themselves better than a moderately experienced human.
The Claude 4 and GPT-5 families both have training cut-off dates within the past year – recent enough that they’ve seen a decent volume of good prompting examples.
I expect they have also been deliberately trained for this. Anthropic make extensive use of sub-agent patterns in Claude Code, and published a fascinating article on that pattern (my notes on that).
I don’t have anything solid to back this up – it’s more of a hunch based on anecdotal evidence: over the last few months, many of my requests for a model to write a prompt have returned useful results.
Tags: prompt-engineering, llms, ai, generative-ai, gpt-5, anthropic, claude, claude-code, claude-4

AI Summary and Description: Yes

Summary: The text discusses advancements in AI models, specifically their ability to generate effective prompts for themselves and one another. It highlights the evolution of prompt engineering alongside recent developments in models like Claude 4 and GPT-5, which have shown significant competency in this area due to ample training data and deliberate design.

Detailed Description: The content provides insights into the growing capabilities of AI models concerning prompt generation. It illustrates the trend of models such as Claude 4 and GPT-5 becoming competent at prompt engineering themselves – writing effective prompts for their own use and for other models – in a discipline that has matured rapidly over the last year.

Key points include:

– **Improvement Over Time**: The text notes that today’s leading models have become markedly better at writing prompts for themselves and each other, a capability the author did not observe a year ago.

– **Skepticism of Previous Capabilities**: The author reflects on previous doubts regarding models’ ability to optimize prompt generation compared to human expertise, suggesting that this skepticism has been challenged by recent developments.

– **Training Data Significance**: The mention of recent training cut-off dates for models like GPT-5 emphasizes the importance of having sufficient and relevant data to improve performance.

– **Deliberate Training Methods**: The reference to Anthropic’s extensive use of sub-agent patterns in Claude Code suggests that deliberate training for prompt-writing may be contributing to these generative capabilities.

– **Anecdotal Observations**: The text relies on anecdotal evidence to suggest the efficacy of prompt generation by AI models, indicating that this area could be ripe for further exploration and validation.

This analysis is particularly relevant for AI and Generative AI Security professionals looking to understand the implications of advanced model training and prompt engineering on security practices and operational integrity in AI implementations.