Tomasz Tunguz: How AI Tools Differ from Human Tools

Source URL: https://www.tomtunguz.com/tools-evolution/
Source: Tomasz Tunguz
Title: How AI Tools Differ from Human Tools

Feedly Summary:

Now that we’ve compressed nearly all human knowledge into large language models, the next frontier is tool calling. Chaining together different AI tools enables automation. The shift from thinking to doing represents the real breakthrough in AI utility.
I’ve built more than 100 tools for myself, & they work most of the time, but not all the time. I’m not alone. Anthropic’s Economic Index report reveals that 77% of business use of Claude centers on full-task automation, not co-piloting.
Anthropic published documentation last week about token efficiency & re-architecting tools to optimize their use. The guidance was counterintuitive: instead of many simple tools with clear labels, create fewer, more complex tools.
Here are the seven email tools I built – Ruby scripts, each with a clear purpose. The “Safe Send Email” script was designed to prevent the AI from sending emails without approval.
draft_email.rb
send_email.rb
forward_email.rb
find_and_draft_reply.rb
read_email.rb
archive_emails.rb
safe_send_email.rb
Beautifully naive, simple, & clear. Shouldn’t a language model be able to read these & know exactly what I was asking it to do? But it’s not that simple!
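For illustration, here is a minimal sketch of what one of these single-purpose scripts might look like. The `mail` gem, SMTP settings, & positional arguments are my assumptions, not the author's actual code:

```ruby
#!/usr/bin/env ruby
# send_email.rb -- hypothetical sketch of one single-purpose tool.
# The `mail` gem, SMTP settings, & argument handling are assumptions
# for illustration, not the author's actual code.
require "mail"

recipient, subj, message = ARGV
abort "usage: send_email.rb TO SUBJECT BODY" unless recipient && subj && message

Mail.defaults do
  delivery_method :smtp, address: "smtp.example.com", port: 587
end

Mail.deliver do
  from    "me@example.com"
  to      recipient
  subject subj
  body    message
end

puts "Sent '#{subj}' to #{recipient}"
```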
Anthropic recommends creating complex tools. Their research shows that “requests save an average of 14% in output tokens, up to 70%” when using sophisticated, parameter-rich tools instead of simple ones. The reason? AI systems understand full context better than fragmented intent.
I spent the weekend consolidating all my tools into unified tools, like this one for email:
ruby unified_email_tool.rb \
  --action send \
  --to "john@company.com" \
  --subject "Q4 Strategy Review" \
  --body "…" \
  --cc "team@company.com" \
  --format concise
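For context, a rough sketch of how such a unified tool's interface might be wired up in Ruby. The flags mirror the invocation above, but the dispatch logic & handler stubs are my assumptions, not the author's implementation:

```ruby
#!/usr/bin/env ruby
# unified_email_tool.rb -- hypothetical sketch of a consolidated, parameter-rich tool.
# Flags mirror the invocation above; the handlers are stubs, not the author's code.
require "optparse"

options = { format: "standard" }
OptionParser.new do |opts|
  opts.banner = "Usage: unified_email_tool.rb --action ACTION [options]"
  opts.on("--action ACTION", %w[draft send forward reply read archive],
          "Operation to perform") { |v| options[:action] = v }
  opts.on("--to TO", "Recipient address")      { |v| options[:to] = v }
  opts.on("--subject SUBJECT", "Subject line") { |v| options[:subject] = v }
  opts.on("--body BODY", "Message body")       { |v| options[:body] = v }
  opts.on("--cc CC", "CC addresses")           { |v| options[:cc] = v }
  opts.on("--format FORMAT", "Output format")  { |v| options[:format] = v }
end.parse!

# One entry point, many parameters: the model supplies full context in a single
# call instead of choosing among seven near-identical scripts.
case options[:action]
when "send"    then puts "Would send '#{options[:subject]}' to #{options[:to]}"
when "draft"   then puts "Would draft '#{options[:subject]}' for #{options[:to]}"
when "archive" then puts "Would archive matching emails"
else abort "Unknown or missing --action"
end
```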
The impact on accuracy was immediate: Claude’s success rate now approaches 100%, the system is faster, & I’m using far fewer tokens.
Here’s my current mental model:

| People Need | AI Systems Need |
| --- | --- |
| Cognitive chunking | Complete context |
| Progressive disclosure | Parameter-rich interfaces |
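To make "parameter-rich interface" concrete, here is a rough sketch of how one consolidated email tool might be described to a tool-calling API as a single schema. The field names follow common tool-calling conventions & the descriptions are illustrative assumptions, not the author's actual definition:

```ruby
# Hypothetical tool definition: one parameter-rich tool instead of seven simple ones.
# Field names follow common tool-calling schemas; values are illustrative only.
EMAIL_TOOL = {
  name: "unified_email_tool",
  description: "Draft, send, forward, reply to, read, or archive email in one call.",
  input_schema: {
    type: "object",
    properties: {
      action:  { type: "string", enum: %w[draft send forward reply read archive] },
      to:      { type: "string", description: "Recipient address" },
      subject: { type: "string" },
      body:    { type: "string" },
      cc:      { type: "string" },
      format:  { type: "string", enum: %w[concise standard detailed] }
    },
    required: ["action"]
  }
}
```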

When I redesigned for AI cognition rather than human intuition, everything improved. My CRM operations, calendar management, & database workflows all became more reliable when consolidated into comprehensive, parameter-heavy tools. Accuracy improved, so the total cost was reduced significantly.
But don’t ask me to use the tools. I’m now a bit lost amidst the complexity. This is an inevitable corollary of working at higher levels of abstraction, no longer deeply understanding the machine.
We spent decades making software simple for people. Now we’re learning to make it complex for AI.

AI Summary and Description: Yes

Summary: The text discusses the evolution of AI from task automation to tool calling, emphasizing the importance of creating complex, parameter-rich tools to enhance the efficiency and performance of large language models (LLMs). The author’s personal experience in developing these tools illustrates a shift towards designing for AI cognition rather than human intuition, thereby improving accuracy and reducing costs in various operational workflows.

Detailed Description:
The content details a significant shift in the approach to utilizing AI tools, particularly in the context of large language models (LLMs) and automation. The following points encapsulate the major elements of the text:

– **Tool Calling**: The transformation from merely processing information (thinking) to executing tasks (doing) indicates a new frontier in AI utility, highlighting the capability of chaining various AI tools for enhanced automation.

– **Automation Insights**: A notable statistic from Anthropic’s Economic Index report shows that 77% of business applications of their AI model, Claude, focus on full-task automation. This indicates a growing reliance on AI for complete processes rather than just supportive roles (co-piloting).

– **Complex vs. Simple Tools**: Anthropic’s recent guidance challenges the notion that simpler, clearly labeled tools are always better. Instead, they advocate for fewer, more complex tools that can handle sophisticated tasks. This aligns with their findings that more intricate tools reduce output token usage (by 14% on average, and up to 70% in some cases).

– **Personal Tool Development**: The author shares their experience in developing multiple tools with specific functions, like email handling scripts, and notes the inefficiency of asking AI to interpret fragmented commands.

– **Unified Tool Design**: By consolidating functions into a single, parameter-rich tool (like the unified email tool example), the author observed dramatic improvements in the AI’s performance, achieving a near-perfect success rate.

– **Mental Model Shift**: The text points out a contrast in design mentality:
  – **Human Needs**: Cognitive chunking and progressive disclosure for clarity.
  – **AI Needs**: Comprehensive context and parameter-heavy interfaces for enhanced understanding.

– **Operational Impact**: Implementing more complex tools led to reliable outcomes in various areas including CRM operations, calendar management, and database workflows, ultimately resulting in reduced total operational costs.

– **Complexity Trade-off**: The author acknowledges a downside; while the tools become more efficient, they also raise the complexity level, leading to a personal feeling of disconnection with the underlying mechanics of the system.

In conclusion, this text serves as a critical insight for security and compliance professionals in understanding the nuanced dynamics of AI tool development, the importance of architectural decisions on system performance, and the broader implications for operational reliability and efficiency in business settings. It underscores a pivotal shift in AI operations that may require reevaluation of traditional software design principles in the context of advanced AI capabilities.