What the Platform Does
Beyond memory portability, Anuma includes several features built around the multi-model premise. Council Mode runs the same prompt through several models simultaneously — ChatGPT, Claude, Gemini, and DeepSeek can all respond to the same question in a single view, letting users compare answers for tone, accuracy, and depth without switching tabs or maintaining separate accounts.
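Anuma's actual API is not public, so as a purely illustrative sketch, the fan-out pattern Council Mode describes — one prompt, several models answering concurrently — might look something like this (model names and the `ask_model` helper are invented stand-ins):

```python
import asyncio

# Hypothetical stand-in for a real provider call; it only illustrates the shape
# of the concurrency, not Anuma's implementation.
async def ask_model(model: str, prompt: str) -> str:
    await asyncio.sleep(0)  # placeholder for a network round-trip
    return f"[{model}] answer to: {prompt}"

async def council(prompt: str, models: list[str]) -> dict[str, str]:
    # Send the same prompt to every model concurrently, then pair each
    # model with its answer so they can be compared side by side.
    answers = await asyncio.gather(*(ask_model(m, prompt) for m in models))
    return dict(zip(models, answers))

if __name__ == "__main__":
    models = ["ChatGPT", "Claude", "Gemini", "DeepSeek"]
    results = asyncio.run(council("Summarize this lease clause", models))
    for model, answer in results.items():
        print(f"{model}: {answer}")
```

The point of the concurrent fan-out is that the slowest model, not the sum of all of them, sets the wait time for the combined view.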
The platform also supports AI access by SMS and iMessage: users can text their AI assistant without opening an app, with full memory context carried over. Creative Studio adds image, video, and audio generation within the same interface. AI agents for real-world task completion, such as billing disputes, lease reviews, and insurance claims, are listed as coming soon.
Pricing runs in three tiers. The free plan includes 100 credits per month and access to every major feature, with no credit card required. Starter is $9.99 per month and Pro is $19.99 per month. The platform is available on the web now, with iOS and Android apps in development.
The Privacy Claim and Its Limits
Anuma's privacy positioning deserves some precision. The memory vault itself is encrypted on the user's device, and Anuma says it cannot access the contents. When a user sends a message through a closed-source model like ChatGPT or Claude, however, that message and whatever context Anuma passes from memory are transmitted to OpenAI's or Anthropic's servers, respectively. Anuma controls how much of the memory vault is shared per session, but the underlying model providers still receive and process the conversation under their own data policies.
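Anuma has not documented how per-session sharing works, but the general idea — only user-approved slices of a local vault ever leave the device — can be sketched in a toy form (the vault keys and `build_context` helper here are entirely invented for illustration):

```python
# Toy model of selective context sharing from a local memory vault.
# Entry names are hypothetical; real vault contents would be encrypted at rest.
vault = {
    "work_notes": "Q3 planning details",
    "health": "allergy list",
    "preferences": "prefers concise answers",
}

def build_context(vault: dict[str, str], allowed_keys: set[str]) -> str:
    # Only entries the user has opted into for this session are serialized
    # into the outgoing prompt; everything else stays on the device.
    shared = {k: v for k, v in vault.items() if k in allowed_keys}
    return "\n".join(f"{k}: {v}" for k, v in sorted(shared.items()))

# A session with a closed-source model might share only low-sensitivity entries.
print(build_context(vault, {"preferences"}))
```

The limit the article describes falls out directly: whatever `build_context` emits is still received and processed by the model provider under its own data policy, so the selectivity protects what is withheld, not what is sent.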
For open-source models routed through Anuma, the company says there is zero data retention and no training on prompts. Users can choose which type of model to use based on the sensitivity of a given task.
The Broader Context
Anuma is entering a market where the major AI labs (OpenAI, Anthropic, and Google among them) are actively building out their own memory systems and pushing users to stay within their ecosystems. Anuma's counter-argument is that as AI becomes more deeply embedded in daily work, the inability to carry context freely across tools becomes a meaningful limitation. A unified memory layer owned by the user, rather than by any single vendor, is the structural alternative it offers.
Whether that argument lands at scale will depend on how much users value model portability versus the convenience and depth of first-party AI experiences from the major labs. The 10,000-person beta and no-credit-card free tier suggest Anuma is betting that friction-free access is enough to build an initial audience while it develops the more advanced agent capabilities on its roadmap.
This article is based on the official Anuma launch announcement and the Anuma product site.
Image courtesy of Towfiqu barbhuiya and Unsplash.
This article was generated with AI assistance and reviewed for accuracy and quality.