Apple is reportedly testing an internal AI chatbot, code-named Veritas, to improve Siri’s intelligence and contextual understanding. Employees are using it to experiment with features such as searching personal data (emails, photos) and executing in-app actions, while Apple weighs integrating external AI models in parallel. According to Bloomberg’s Mark Gurman, Veritas will remain private for now, and Apple may lean on Google’s Gemini model to power AI-driven search in the next Siri overhaul.
Key Takeaways
– Veritas is an in-house chatbot prototype allowing Apple to trial next-gen Siri functions like context tracking, multimedia summarization, and in-app interactions.
– Apple appears open to outsourcing parts of its AI stack — especially for web search and summarization — by leaning on Google’s Gemini rather than developing everything in-house.
– The decision to keep Veritas internal suggests Apple is being cautious about deploying AI widely until reliability, privacy, and user experience are more assured.
In-Depth
Apple has long been perceived as trailing rivals such as Google and OpenAI in the generative AI space. Siri, once a revolutionary voice assistant, has struggled to keep pace with more agile alternatives that provide conversational and context-aware responses. Now, according to reports, Apple is taking a novel but cautious path forward through its internal chatbot initiative code-named Veritas.
Veritas is described as a ChatGPT-style application used exclusively by employees. In this sandbox, Apple is experimenting with ways that Siri could evolve: letting the assistant search a user’s private content like emails and photos, write and rewrite content, and even execute commands across apps (e.g., editing a photo from within the assistant). Conversations with Veritas can branch and revisit past topics, testing persistence of context and coherence. The project is effectively a controlled environment for Apple to iterate quickly on complex features without exposing users to early errors.
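Apple has not published technical details of how Veritas hooks into apps, but App Intents is Apple’s existing public framework for exposing in-app actions to Siri and the system. The sketch below is a minimal, hypothetical example of the kind of action an assistant could invoke; the intent name, parameters, and behavior are illustrative assumptions, not details from the report.

```swift
import AppIntents

// Hypothetical sketch only: a photo-editing action exposed through Apple's
// App Intents framework, the kind of in-app command an assistant could call.
// The intent name, parameters, and dialog are illustrative, not from the report.
struct ApplyPhotoFilterIntent: AppIntent {
    static var title: LocalizedStringResource = "Apply Photo Filter"
    static var description = IntentDescription("Applies a named filter to a photo in this app.")

    @Parameter(title: "Photo Name")
    var photoName: String

    @Parameter(title: "Filter")
    var filterName: String

    func perform() async throws -> some IntentResult & ProvidesDialog {
        // A real app would locate the photo and apply the requested filter here.
        return .result(dialog: "Applied \(filterName) to \(photoName).")
    }
}
```

An assistant resolving a request like “make this photo black and white” would map it to an intent of this shape and fill the parameters from conversation context, which is the kind of context tracking Veritas reportedly lets Apple rehearse internally.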
Interestingly, Apple reportedly has no near-term plan to release Veritas to the public. The cautious posture hints that Apple wants to solve basic reliability, privacy, and hallucination problems before pushing this kind of AI forward at scale. In parallel, Apple is openly exploring partnerships with external AI providers. One of the leading candidates is Google’s Gemini, which Apple may use to power web-search summarization and “world knowledge” capabilities in Siri. This strategy contrasts with Apple’s earlier insistence on owning its entire stack and suggests a recognition of the enormous scale, computing, and research investment required to compete at the top level of AI.
However, integrating a model like Gemini carries risks. If Apple depends too heavily on external models, it could cede control over a core part of the Siri experience over time. Large language models are also prone to hallucinating, confidently stating falsehoods, and users’ trust could erode quickly if Siri responds with incorrect or misleading information. Apple must balance innovation with strong guardrails and user transparency, and Veritas may be the testing ground where it irons out these issues before a full consumer rollout.
In any case, Veritas signals Apple’s intent to reimagine Siri beyond simple voice commands toward a more conversational, multimodal AI assistant. How well Apple executes — and whether it maintains control over its AI identity while incorporating external tech — will be crucial for its credibility in the evolving AI landscape.

