Source URL: https://simonwillison.net/2025/Mar/14/ai-delays/#atom-everything
Source: Simon Willison’s Weblog
Title: Apple’s Siri Chief Calls AI Delays Ugly and Embarrassing, Promises Fixes
Feedly Summary: Apple’s Siri Chief Calls AI Delays Ugly and Embarrassing, Promises Fixes
Mark Gurman reports on some leaked details from internal Apple meetings concerning the delays in shipping personalized Siri. This note in particular stood out to me:
Walker said the decision to delay the features was made because of quality issues and that the company has found the technology only works properly up to two-thirds to 80% of the time. He said the group “can make more progress to get those percentages up, so that users get something they can really count on.”
I imagine it’s a lot harder to get reliable results out of small, local LLMs that run on an iPhone. Features that fail 1/3 to 1/5 of the time are unacceptable for a consumer product like this.
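As a quick back-of-the-envelope check on those figures (an illustrative sketch only, not anything from the article): a per-request success rate of two-thirds to 80% means a failure rate of 1/3 to 1/5, and the odds of several requests in a row all succeeding shrink fast.

```python
def failure_rate(success: float) -> float:
    """Failure rate is the complement of the per-request success rate."""
    return 1.0 - success

def all_succeed(success: float, n_requests: int) -> float:
    """Probability that n independent requests all succeed."""
    return success ** n_requests

for success in (2 / 3, 0.80):
    print(
        f"success {success:.0%}: fails {failure_rate(success):.0%} of requests, "
        f"chance of 10 in a row succeeding: {all_succeed(success, 10):.1%}"
    )
```

Even at the top of the quoted range (80% per request), the chance of ten consecutive requests all working is only about 11%, which helps explain why Apple held the features back.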
Via Hacker News
Tags: apple, apple-intelligence, generative-ai, ai, llms
AI Summary and Description: Yes
Summary: The text discusses delays in the rollout of personalized features for Apple’s Siri, attributing the setbacks to quality concerns with the underlying AI technology. The reported performance issues highlight the difficulty of building reliable AI applications on small, local LLMs running on devices like the iPhone, a challenge directly relevant to professionals in AI and software security.
Detailed Description:
– The article by Mark Gurman highlights internal discussions at Apple regarding the delays in launching new, personalized Siri functionalities.
– Key points include:
  – The Apple team, led by Walker, acknowledged that the technology currently works properly only about two-thirds to 80% of the time.
  – There is a commitment to raising that reliability so that users receive dependable performance from Siri.
  – The discussion underscores the difficulty of running small, local LLMs within the constraints of a mobile device, which can significantly degrade the user experience.
The implications for security and compliance professionals include:
– The performance reliability of AI systems is critical, especially when they are integrated into consumer products. Flaws in functionality can lead to broader security vulnerabilities or misuse of the system.
– Understanding the limitations of AI technologies helps professionals put appropriate security measures and quality controls in place, since a malfunctioning system could mishandle personal data.
– The industry-wide challenges highlighted in this situation may prompt a re-evaluation of AI deployment strategies, emphasizing the necessity for thorough testing and quality assurance processes.
The text serves as a reminder of the complexities involved in AI product development, with direct implications for security measures, user privacy, and overall trust in AI applications.