Source URL: https://simonwillison.net/2025/Jun/18/context-rot/#atom-everything
Source: Simon Willison’s Weblog
Title: Quoting Workaccount2 on Hacker News
Feedly Summary: They poison their own context. Maybe you can call it context rot, where as context grows and especially if it grows with lots of distractions and dead ends, the output quality falls off rapidly. Even with good context the rot will start to become apparent around 100k tokens (with Gemini 2.5).
They really need to figure out a way to delete or “forget” prior context, so the user or even the model can go back and prune poisonous tokens.
Right now I work around it by regularly making summaries of instances, and then spinning up a new instance with fresh context and feed in the summary of the previous instance.
— Workaccount2 on Hacker News, coining "context rot"
Tags: long-context, llms, ai, generative-ai
AI Summary and Description: Yes
Summary: The text introduces the concept of “context rot,” a phenomenon affecting the output quality of AI models as the context length increases. It underscores the need for mechanisms to manage or prune context to maintain effectiveness, particularly for applications using large language models (LLMs).
Detailed Description: The content addresses important issues regarding the management of context in AI systems, particularly LLMs. The concept of “context rot” suggests that as the context grows—particularly beyond a threshold (like 100k tokens)—the relevance and quality of the AI outputs can deteriorate. This poses challenges for developers and users of AI systems.
Key Points:
– **Context Rot**: Describes how accumulated distractions and unnecessary information in the input context can degrade the output quality of generative AI models.
– **Token Threshold**: Points to an approximate threshold (around 100k tokens) beyond which this degradation becomes evident, particularly in models like Gemini 2.5.
– **Proposed Solution**: Advocates for mechanisms to delete or “forget” prior context to avoid the accumulation of distracting tokens.
– **Current Workaround**: The author works around the problem by periodically summarizing a session, then spinning up a fresh instance and feeding it only that summary, keeping the context clean.
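The summarize-and-restart workaround above can be sketched as a small session wrapper. This is a minimal illustration, not anyone's actual implementation: `call_model` is a stubbed stand-in for a real LLM API, `estimate_tokens` is a crude character-count heuristic, and the token budget is set artificially low so the restart behavior is visible.

```python
def call_model(messages):
    """Placeholder for a real LLM API call; returns a canned reply
    so the example is self-contained and runnable."""
    return f"(reply to {len(messages)} messages)"


def estimate_tokens(text):
    # Rough heuristic: ~4 characters per token for English text.
    # A real implementation would use the model's own tokenizer.
    return len(text) // 4


class RollingSession:
    """Chat session that restarts itself with a summary once the
    accumulated context approaches a budget (in practice, well
    under the point where quality degrades, e.g. ~100k tokens)."""

    def __init__(self, token_budget=100_000):
        self.token_budget = token_budget
        self.history = []  # list of message strings

    def context_tokens(self):
        return sum(estimate_tokens(m) for m in self.history)

    def send(self, user_message):
        self.history.append(user_message)
        reply = call_model(self.history)
        self.history.append(reply)
        if self.context_tokens() > self.token_budget:
            self.restart_with_summary()
        return reply

    def restart_with_summary(self):
        # Ask the model to compress the old instance, then start
        # fresh with only that summary as context.
        summary = call_model(self.history + ["Summarize this conversation."])
        self.history = [f"Summary of previous session: {summary}"]


# Tiny budget so the restart triggers within two turns.
session = RollingSession(token_budget=20)
session.send("Explain context rot in one line.")
session.send("Now give an example.")  # pushes context over budget
```

After the second call, the accumulated history is replaced by a single summary message, which is what the quoted workflow does manually: summarize the old instance, then continue in a new one with fresh context.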
Implications for AI and Security Professionals:
– Understanding context rot is essential for improving the reliability and performance of AI systems.
– Strategies for managing context effectively can contribute to better generative outputs, which is crucial for applications in cybersecurity where accuracy and relevance are paramount.
– This insight might lead to discussions around compliance and data governance, particularly in ensuring that only relevant and actionable information informs AI models.
Moreover, this context-management issue highlights a broader conversation around the governance of AI outputs, tying into regulations and best practices that ensure responsible AI usage.