Simon Willison’s Weblog: claude-trace

Source URL: https://simonwillison.net/2025/Jun/2/claude-trace/
Source: Simon Willison’s Weblog
Title: claude-trace

Feedly Summary: claude-trace
I’ve been thinking for a while that it would be interesting to run some kind of HTTP proxy against the Claude Code CLI app and take a peek at how it works.
Mario Zechner just published a really nice version of that. It works by monkey-patching global.fetch and the Node HTTP library, then running Claude Code using Node with an extra --require interceptor-loader.js option to inject the patches.
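The core interception idea can be sketched in a few lines of Node.js. This is an illustrative reconstruction, not claude-trace's actual code; the function and field names here are invented:

```javascript
// Illustrative sketch of fetch interception (not claude-trace's real code).
// Wrap a fetch-like function so every request/response pair is recorded.
function wrapFetchWithTrace(fetchImpl, trace) {
  return async function tracedFetch(url, options = {}) {
    const entry = {
      url: String(url),
      method: (options.method || "GET").toUpperCase(),
      startedAt: Date.now(),
    };
    const response = await fetchImpl(url, options);
    entry.status = response.status;
    entry.durationMs = Date.now() - entry.startedAt;
    trace.push(entry); // claude-trace serializes pairs like this to JSONL
    return response;
  };
}

// An interceptor loaded via --require could then patch the global:
// global.fetch = wrapFetchWithTrace(global.fetch, trace);
```

Patching the global before any application code runs is exactly what the --require preload hook makes possible: Node loads the interceptor module first, so Claude Code only ever sees the wrapped fetch.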
Provided you have Claude Code installed and configured already, an easy way to run it is via npx like this:
npx @mariozechner/claude-trace --include-all-requests

I tried it just now and it logs request/response pairs to a .claude-trace folder, as both JSONL files and HTML.
The HTML interface is really nice. Here’s an example trace – I started everything running in my llm checkout and asked Claude to "tell me about this software" and then "Use your agent tool to figure out where the code for storing API keys lives".

I specifically requested the "agent" tool here because I had noticed, among the tool definitions, a tool called dispatch_agent with this description (emphasis mine):

Launch a new agent that has access to the following tools: GlobTool, GrepTool, LS, View, ReadNotebook. When you are searching for a keyword or file and are not confident that you will find the right match on the first try, use the Agent tool to perform the search for you. For example:

If you are searching for a keyword like "config" or "logger", the Agent tool is appropriate
If you want to read a specific file path, use the View or GlobTool tool instead of the Agent tool, to find the match more quickly
If you are searching for a specific class definition like "class Foo", use the GlobTool tool instead, to find the match more quickly

Usage notes:

Launch multiple agents concurrently whenever possible, to maximize performance; to do that, use a single message with multiple tool uses
When the agent is done, it will return a single message back to you. The result returned by the agent is not visible to the user. To show the user the result, you should send a text message back to the user with a concise summary of the result.
Each agent invocation is stateless. You will not be able to send additional messages to the agent, nor will the agent be able to communicate with you outside of its final report. Therefore, your prompt should contain a highly detailed task description for the agent to perform autonomously and you should specify exactly what information the agent should return back to you in its final and only message to you.
The agent’s outputs should generally be trusted
IMPORTANT: The agent can not use Bash, Replace, Edit, NotebookEditCell, so can not modify files. If you want to use these tools, use them directly instead of going through the agent.
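In the Anthropic Messages API, a tool like this is declared with a name, a description, and a JSON Schema for its input. Here's a sketch of what the dispatch_agent definition plausibly looks like; only the description text quoted above comes from the trace, and the `prompt` parameter name is an assumption:

```javascript
// Hypothetical shape of the dispatch_agent tool definition.
// The input parameter name is assumed; the description is abridged.
const dispatchAgentTool = {
  name: "dispatch_agent",
  description:
    "Launch a new agent that has access to the following tools: " +
    "GlobTool, GrepTool, LS, View, ReadNotebook. ...",
  input_schema: {
    type: "object",
    properties: {
      prompt: {
        type: "string",
        description:
          "A highly detailed task for the agent to perform autonomously",
      },
    },
    required: ["prompt"],
  },
};
```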

I’d heard that Claude Code uses the LLMs-calling-other-LLMs pattern (one of the reasons it can burn through tokens so fast!). It was interesting to see how this works under the hood: it’s a tool call designed to be used concurrently, by triggering multiple tool uses at once.
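In the Messages API, "a single message with multiple tool uses" means one assistant turn containing several tool_use content blocks, which the client can then execute in parallel. Here's a sketch with invented inputs and a stubbed agent runner (not real Claude Code traffic):

```javascript
// One assistant turn carrying two tool_use blocks (inputs invented here).
const assistantMessage = {
  role: "assistant",
  content: [
    { type: "tool_use", id: "toolu_01", name: "dispatch_agent",
      input: { prompt: "Find where API keys are stored; report the file paths." } },
    { type: "tool_use", id: "toolu_02", name: "dispatch_agent",
      input: { prompt: "Find where logging is configured; report the file paths." } },
  ],
};

// Stand-in for the real sub-agent loop; it just echoes the task.
async function runAgent(prompt) {
  return `agent report: ${prompt}`;
}

// Execute every tool_use block concurrently, as the usage notes suggest.
async function runAllToolUses(message) {
  const calls = message.content
    .filter((block) => block.type === "tool_use")
    .map((block) => runAgent(block.input.prompt));
  return Promise.all(calls);
}
```

Running the agents via Promise.all is what makes the "launch multiple agents concurrently" advice cheap to follow: each stateless invocation is an independent promise.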
Anthropic have deliberately chosen not to publish any of the prompts used by Claude Code. As with other hidden system prompts, the prompts themselves mainly act as a missing manual for understanding exactly what these tools can do for you and how they work.
Via @badlogicgames
Tags: anthropic, claude, ai-agents, ai, llms, prompt-engineering, ai-assisted-programming, generative-ai

AI Summary and Description: Yes

Summary: The text provides insights into the functionality of Claude Code, particularly focusing on a tool to track HTTP requests and responses during its operation. The discussion of the Agent tool and its invocation underscores its potential for automated programming tasks, relevant for AI and software development professionals.

Detailed Description:
The text delves into the functioning of Claude Code, revealing technical aspects that can be quite valuable for professionals working in AI systems and software development. It discusses a specific tool developed by Mario Zechner that allows users to monitor how Claude Code interacts with APIs. This is significant for security and compliance professionals as it highlights the handling of requests, responses, and system prompts—elements that can be critical in maintaining security and integrity within AI systems.

Key Points:
– **HTTP Interception Tool**: The tool from Mario Zechner records request/response pairs by monkey-patching global.fetch and the Node HTTP library.
– **Usage**: Running the tool via npx requires no separate installation; it writes logs in both JSONL and HTML formats, making them accessible for analysis.
– **Agent Tool Functionality**: The text illuminates how the Agent tool operates within Claude Code:
  – It can launch multiple agent instances concurrently to improve performance.
  – Each agent invocation is stateless, so the task description must be detailed enough for the agent to work autonomously.
  – Agent results are not shown to the user directly; the calling model must summarize them in a text message.
– **Token Usage**: The LLMs-calling-other-LLMs pattern helps explain why Claude Code can consume tokens so quickly, which has cost and performance-monitoring implications for AI services.
– **Hidden Prompts**: The lack of published prompts for Claude Code exemplifies a gap in transparency that could have implications for trust and security in AI operations.

Practical Implications:
– **For Security & Compliance**: Understanding the usage and limitations of tools like the Agent in AI frameworks is crucial for ensuring that these systems operate within defined security and compliance boundaries.
– **For Software Development**: Insights into tracking operations in AI applications can help developers debug and optimize performance while being mindful of the potential security risks involved in automated interactions.
– **Regulatory Considerations**: The discussion around non-disclosed prompts touches on governance and transparency issues that must be navigated in compliance-heavy environments.

Overall, the text highlights the intricacies of using AI tools and the importance of monitoring and governance that professionals in AI security and compliance must prioritize.