Anthropic has introduced a new memory feature for its Claude AI assistant, rolling it out to users on the Team and Enterprise plans. The update allows Claude to “remember” past interactions, including project context, user preferences, and ongoing tasks, without being prompted, helping users avoid repeating background information or losing track of longer workflows. Memory is optional, with controls that let users view, edit, or disable what Claude retains, and it is organized per project to keep workstreams distinct. Alongside this, Anthropic now offers an Incognito chat mode to all users, including those on the free tier, in which conversations are neither saved to memory nor shown in history, providing a privacy-oriented option.
Sources: TechRadar, Anthropic, VentureBeat
Key Takeaways
– The memory feature is exclusive to Team and Enterprise Claude plans, enabling persistent context across chats; free and lower-tier users do not yet have access.
– All users can use the new Incognito mode, which keeps conversations out of chat history and memory, giving people more privacy and control over when their data is stored.
– Anthropic has built in transparency and management tools: users can inspect memory summaries, adjust what the system focuses on or ignores, or disable memory entirely, and memory is segmented by project to prevent mixing of sensitive or unrelated data.
In-Depth
Anthropic’s latest Claude AI update marks an important step forward in how conversational agents manage continuity and privacy. For users on the Team and Enterprise subscription tiers, Claude’s new memory feature removes the need to re-establish context across separate chats. It can remember specifics such as project tasks, user preferences, client-related details, or organizational processes, enabling smoother collaboration, especially in work environments. Memory is compartmentalized per project to avoid accidental mixing of contexts, so what Claude learns in one project stays largely separate from another. Users have granular control: memory can be toggled on or off, specific details can be edited or removed, and administrators can disable memory altogether in organizational settings.
On the privacy front, Anthropic has also made a significant move by giving every Claude user access to an Incognito mode. In this mode, conversations are not saved to memory or chat history and won’t feed into Claude’s understanding of you in future sessions. It’s a kind of “blank slate” option that helps when discussing sensitive topics or when a fresh, unbiased conversation is needed. However, “not saved” doesn’t always mean “immediately erased”: there may still be a minimum retention period for safety, compliance, or legal reasons, though such chats won’t influence Claude’s memory or appear in a user’s history.
These updates reflect growing pressure in the AI space to balance useful continuity with user control and privacy. While competitors have already rolled out memory-like features, Anthropic is emphasizing user agency and transparency in its implementation. The result: users on paid plans gain more power to build cumulative workflows and carry context forward, while the broader user base gets more options to opt out and safeguard personal or sensitive interactions. As AI moves deeper into work settings, controls like these could become a defining factor in how trustworthy these systems are perceived to be and how widely they are adopted.

