Source URL: https://simonwillison.net/2025/Jul/12/grok-4-heavy/#atom-everything
Source: Simon Willison’s Weblog
Title: Grok 4 Heavy won’t reveal its system prompt
Grok 4 Heavy is the “think much harder” version of Grok 4 that’s currently only available on their $300/month plan. Jeremy Howard relays a report from a Grok 4 Heavy user who wishes to remain anonymous: it turns out that Heavy, unlike regular Grok 4, has measures in place to prevent it from sharing its system prompt:
Sometimes it will start to spit out parts of the prompt before some other mechanism kicks in to prevent it from continuing.
This is notable because xAI has previously indicated that system prompt transparency is a desirable trait of their models, including in this now deleted tweet from xAI’s Igor Babuschkin (screenshot captured by Jeremy):
Given the past week of mishaps I think xAI would be wise to reaffirm their dedication to prompt transparency and set things up so the xai-org/grok-prompts repository updates automatically when new prompts are deployed – their current manual process for that is clearly not adequate for the job!
Tags: ai, generative-ai, llms, grok, ai-ethics
AI Summary and Description: Yes
Summary: The text discusses the new Grok 4 Heavy model, highlighting its approach to system prompt transparency and the implications for the AI ethics conversation. The focus on withholding the system prompt contrasts with previous transparency commitments, raising questions about governance and accountability in AI systems.
Detailed Description: This text delves into the features and ethical considerations surrounding the Grok 4 Heavy model, which is available only through a premium subscription plan. The recent change regarding system prompt transparency signals a noteworthy shift in AI ethics, particularly for developers, security, and compliance professionals.
– Grok 4 Heavy is marketed as a more advanced version of Grok 4 available through a high-tier subscription.
– Unlike regular Grok 4, Grok 4 Heavy has implemented mechanisms that actively prevent it from disclosing its system prompt, which is a critical aspect of understanding AI behavior and safety.
– An anonymous user report indicates that while Grok 4 Heavy sometimes attempts to communicate parts of the prompt, a built-in safeguard curtails this process.
– This change marks a departure from xAI’s earlier stance promoting system prompt transparency. In a prior communication, xAI’s representatives acknowledged transparency as favorable, particularly in light of ethical AI discussions.
– The author suggests that xAI should reaffirm its commitment to prompt transparency, especially after recent operational challenges, including a recommendation to automate updates to the xai-org/grok-prompts repository so new prompts are published automatically when deployed, avoiding the errors of the current manual process.
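The automation recommended above could take many forms; one minimal sketch is a scheduled CI workflow that exports the currently deployed prompts and commits any changes to the public repository. Everything here is hypothetical: the export script, paths, and bot identity are illustrative assumptions, since xAI’s actual deployment pipeline is not public.

```yaml
# Hypothetical GitHub Actions workflow for a prompts repository.
# Assumes a scripts/export_deployed_prompts.sh helper (not real) that
# writes the currently deployed prompt files into prompts/.
name: sync-prompts
on:
  schedule:
    - cron: "0 * * * *"   # check hourly for newly deployed prompts
  workflow_dispatch: {}   # allow manual runs as well
jobs:
  sync:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Fetch currently deployed prompts
        run: ./scripts/export_deployed_prompts.sh prompts/  # hypothetical export step
      - name: Commit and push any changes
        run: |
          git config user.name "prompt-sync-bot"
          git config user.email "bot@example.com"
          git add prompts/
          # commit only if something actually changed
          git diff --cached --quiet || git commit -m "Update deployed prompts"
          git push
```

The key design point is that publication is triggered by the deployment pipeline itself (or a schedule polling it), not by a human remembering to push, so the public repository cannot silently drift out of date.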
The implications for security and compliance professionals include:
– An increased focus on governance models for transparency within AI systems.
– The necessity to evaluate how prompt management and control mechanisms can impact the broader understanding of AI behavior and accountability.
– An encouragement for companies to reflect on their ethical commitments in response to user trust and regulatory scrutiny concerning AI outputs and functionalities.