Simon Willison’s Weblog: Apple Is Delaying the ‘More Personalized Siri’ Apple Intelligence Features

Source URL: https://simonwillison.net/2025/Mar/8/delaying-personalized-siri/#atom-everything
Source: Simon Willison’s Weblog
Title: Apple Is Delaying the ‘More Personalized Siri’ Apple Intelligence Features

Feedly Summary: Apple Is Delaying the ‘More Personalized Siri’ Apple Intelligence Features
Apple told John Gruber (and other Apple press) this about the new “personalized" Siri:

It’s going to take us longer than we thought to deliver on these features and we anticipate rolling them out in the coming year.

I have a hunch that this delay might relate to security.
These new Apple Intelligence features involve Siri responding to requests to access information in applications and then perform actions on the user’s behalf.
This is the worst possible combination for prompt injection attacks! Any time an LLM-based system has access to private data, tools it can call and potentially malicious instructions (like emails and text messages from untrusted strangers) there’s a risk that an attacker might subvert those tools and use them to damage or exfiltration a user’s data.
I published this piece about the risk of prompt injection to personal digital assistants back in November 2023, and nothing has changed since then to make me think this is any less of an open problem.
Tags: apple, ai, john-gruber, llms, prompt-injection, security, apple-intelligence, generative-ai

AI Summary and Description: Yes

Summary: The text discusses Apple’s delay in rolling out personalized Siri features, emphasizing potential security concerns related to prompt injection attacks. As more AI capabilities are integrated into applications, the risk of subversive actions on user data increases.

Detailed Description: The content highlights critical aspects of security surrounding AI applications, particularly in the context of Apple’s virtual assistant, Siri. Key points include:

– Apple is planning to introduce more personalized features to Siri but is facing delays in the rollout.
– The new features are designed for Siri to access user information and perform actions on their behalf.
– There is a significant concern around security, particularly regarding prompt injection attacks, which could exploit these capabilities.
– Prompt injection occurs when malicious inputs lead AI models to execute unintended and harmful actions, especially when a system has access to sensitive data.
– The author references a previously published piece discussing these risks, indicating ongoing apprehension around the security implications of integrating AI into personal digital assistants.
– The delay may indicate that Apple is prioritizing security assessments or enhancements before delivering these advanced functionalities.

* Major risk factors highlighted:
– Access to private data.
– Ability to leverage untrusted instructions.
– Increased risk of data exfiltration or damage to user data.

Overall, the text serves as a pertinent reminder for professionals in AI and security domains to consider the broader implications of AI integration into services that handle sensitive user information. It underscores the need for rigorous security measures to mitigate the risks associated with AI features that interact with personal data.