Apple is internally deploying a prototype "ChatGPT-like" iPhone app to help engineers test and refine a radically reimagined version of Siri, according to Bloomberg. The experimental tool, reportedly not intended for public release, lets Apple simulate conversational experiences, carry multiple threads of dialogue, reference past chat context, and orchestrate more fluid interaction across apps. MacRumors describes how the assistant is being rebuilt on a second-generation architecture under Apple Intelligence, aiming for a rollout in early 2026 via an iOS 26.4 update. 9to5Mac notes that Apple is evaluating whether to power parts of the new Siri with in-house models or external systems from OpenAI or Anthropic, balancing the need for advanced intelligence against Apple's privacy and integration standards.
Sources: Bloomberg, MacRumors, 9to5Mac
Key Takeaways
– Apple’s experimental chatbot app is strictly for internal testing; there’s no current plan to release it publicly.
– The goal is to test deeper conversational memory, context awareness, and cross-app capabilities for the next Siri.
– Apple is weighing a hybrid approach: using both its own AI models and possibly outsourcing parts of Siri’s intelligence to third parties like OpenAI or Anthropic.
In-Depth
Apple’s latest move signals a turning point in how the company approaches its flagship voice assistant. Rather than pursuing a gradual upgrade, Apple appears to be rebuilding Siri from the ground up, with a stealth internal app (referred to in some reports as “Veritas”) serving as its sandbox. The idea is simple: give engineers a conversational interface similar to ChatGPT so they can stress-test, refine, and tune the future of Siri in a controlled environment.
This internal chatbot previews many of the expected features of the next Siri: it supports multiple conversation threads organized by topic, remembers previous exchanges, and can “jump” between tasks in different apps. That is precisely the kind of continuity and contextual awareness that existing digital assistants struggle with. Apple wants Siri to evolve beyond mere command execution into a conversational partner that can reason about what you just asked, what you might ask next, and how to act intelligently across apps and domains.
The timeline appears aggressive: current reporting suggests Apple is targeting early 2026, possibly via iOS 26.4, to begin rolling out this LLM-based Siri to consumers. But as Apple’s past AI ambitions have shown, delays and quality demands are real constraints. The company originally planned to ship parts of its “Apple Intelligence” vision earlier but pulled back in order to meet its own standards. The internal app is likely part of that recalibration: rather than push half-baked AI features, Apple seems intent on more rigorous internal vetting before any public release.
Another interesting dimension is Apple’s openness to external AI partnerships. While the company has long championed in-house development, recent reports indicate serious discussions with OpenAI, Anthropic, and even Google about supplying or augmenting parts of Siri’s underlying intelligence. This signals a more pragmatic posture: Apple knows the pace of AI development is fierce, and it may not be able to go it entirely alone. The challenge will be partnering in a way that preserves user privacy, on-device performance, and seamless integration across Apple’s ecosystem.
Whatever final form Siri 2.0 takes, Apple is clearly stepping up its efforts. But between ambition, technical risk, and internal quality control, the path to a truly next-generation Siri remains complex and uncertain.

