Simon Willison’s Weblog: The "think" tool: Enabling Claude to stop and think in complex tool use situations

Source URL: https://simonwillison.net/2025/Mar/21/the-think-tool/#atom-everything
Source: Simon Willison’s Weblog
Title: The "think" tool: Enabling Claude to stop and think in complex tool use situations

Feedly Summary: The "think" tool: Enabling Claude to stop and think in complex tool use situations
Fascinating new prompt engineering trick from Anthropic. They use their standard tool calling mechanism to define a tool called "think" that looks something like this:
{
  "name": "think",
  "description": "Use the tool to think about something. It will not obtain new information or change the database, but just append the thought to the log. Use it when complex reasoning or some cache memory is needed.",
  "input_schema": {
    "type": "object",
    "properties": {
      "thought": {
        "type": "string",
        "description": "A thought to think about."
      }
    },
    "required": ["thought"]
  }
}
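The same definition can be written as a Python dict in the shape the Anthropic Messages API expects for its tools parameter. This is a hedged sketch: the commented-out client call, model name, and the web_search_tool variable are illustrative placeholders, not code from the original post.

```python
# The "think" tool definition from above, as a Python dict suitable for
# the Anthropic Messages API `tools` parameter.
think_tool = {
    "name": "think",
    "description": (
        "Use the tool to think about something. It will not obtain new "
        "information or change the database, but just append the thought "
        "to the log. Use it when complex reasoning or some cache memory "
        "is needed."
    ),
    "input_schema": {
        "type": "object",
        "properties": {
            "thought": {
                "type": "string",
                "description": "A thought to think about.",
            }
        },
        "required": ["thought"],
    },
}

# Passed alongside real tools when calling the API, e.g. (placeholders):
# client.messages.create(
#     model="claude-3-7-sonnet-latest",
#     max_tokens=1024,
#     tools=[think_tool, web_search_tool],
#     messages=[{"role": "user", "content": "..."}],
# )
```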
This tool does nothing at all.
LLM tools (like web_search) usually involve some kind of implementation – the model requests a tool execution, then an external harness goes away and executes the specified tool and feeds the result back into the conversation.
The "think" tool is a no-op – there is no implementation, it just allows the model to use its existing training in terms of when-to-use-a-tool to stop and dump some additional thoughts into the context.
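The no-op behaviour can be sketched as a minimal tool-dispatch loop (a hypothetical harness, not Anthropic's actual code): every requested tool is executed except "think", which returns a trivial acknowledgement while the thought itself remains in the conversation context.

```python
def dispatch_tool(name, tool_input, registry):
    """Run a requested tool and return the result string fed back to the model.

    `registry` maps tool names to callables. The "think" tool is special-cased
    as a no-op: nothing is fetched or changed, and returning a bare "OK" keeps
    the tool-calling protocol satisfied while the thought stays in context.
    """
    if name == "think":
        return "OK"
    return registry[name](**tool_input)


# Example registry with one ordinary tool (stubbed for illustration).
registry = {"web_search": lambda query: f"results for {query!r}"}

# The "think" call does nothing beyond logging the thought in the transcript.
print(dispatch_tool("think", {"thought": "Check the policy before replying."}, registry))
# A normal tool call is actually executed.
print(dispatch_tool("web_search", {"query": "think tool"}, registry))
```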
This works completely independently of the new "thinking" mechanism introduced in Claude 3.7 Sonnet.
Anthropic’s benchmarks show impressive improvements from enabling this tool. I fully anticipate that models from other providers would benefit from the same trick.
Via @alexalbert__
Tags: prompt-engineering, anthropic, claude, generative-ai, ai, llms, llm-tool-use

AI Summary and Description: Yes

Summary: The text discusses a novel prompt engineering strategy implemented by Anthropic, involving a simple tool named “think.” This tool facilitates complex reasoning by allowing the Claude AI model to pause and reflect, enhancing its performance in various tasks. This innovation has significant implications for AI and specifically for the development and optimization of large language models (LLMs).

Detailed Description: The text presents a new approach in the realm of artificial intelligence, particularly in the use of prompt engineering techniques. Anthropic has introduced a tool called “think” that enables its Claude model to engage in complex thought processes without the requirement of external data input or alterations to its database.

Key insights from the text include:

– **Tool Overview**:
– **Name**: think
– **Description**: Its core function is to allow the AI to contemplate a thought without seeking additional information or altering its underlying database.
– **Input Schema**: The tool requires an input that consists of a thought, specifically formatted as a string.

– **Functionality**:
– The “think” tool acts as a no-op: it performs no action itself, but calling it lets the model write intermediate reasoning into the conversation context.
– This contrasts with typical LLM tools, which require an external harness to execute the request and feed the result back to the model.

– **Performance Improvements**:
– According to Anthropic’s benchmarks, enabling this tool yields impressive improvements in complex tool use situations.
– Willison anticipates that models from other providers would benefit from the same trick.

– **Implications for AI Development**:
– This innovation showcases the potential in prompt engineering to improve the capabilities of LLMs.
– It highlights the advantage of employing such techniques for enhancing reasoning tasks within AI contexts.

This development is relevant for professionals in AI security and compliance: it shows how a tool can be designed purely to enhance a model’s reasoning, a pattern that could yield more effective applications in domains such as compliance and regulatory adherence.