Anthropic Adds 1M Context Window to Claude Opus 4.6 and Sonnet 4.6 at Standard Pricing

AI News Hub Editorial
Senior AI Reporter
March 16, 2026

Anthropic has expanded access to its long-context AI capabilities by making a 1 million-token context window available for Claude Opus 4.6 and Claude Sonnet 4.6 at standard pricing on the Claude Platform.

The company announced that requests using the full context window will be billed at the same per-token rate as smaller prompts, removing the higher pricing that previously applied to large context usage. Opus 4.6 is priced at $5 per million input tokens and $25 per million output tokens, while Sonnet 4.6 costs $3 per million input tokens and $15 per million output tokens.

With the change, developers can process prompts approaching the full one-million-token limit without a pricing premium. Anthropic said a 900,000-token request is billed at the same per-token rate as a 9,000-token request, so long prompts no longer carry a surcharge.
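As a rough sketch, the flat-rate billing described above can be expressed as simple arithmetic. The prices come from the article; the `request_cost` helper and the model keys are illustrative assumptions, not part of any Anthropic SDK:

```python
# Per-million-token prices from the article: Opus 4.6 at $5 input / $25
# output, Sonnet 4.6 at $3 input / $15 output. Model keys are assumed
# labels for this sketch, not official identifiers.
PRICES_PER_MTOK = {
    "opus-4.6": {"input": 5.00, "output": 25.00},
    "sonnet-4.6": {"input": 3.00, "output": 15.00},
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Return the dollar cost of one request at the flat per-token rate."""
    p = PRICES_PER_MTOK[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# Under flat pricing, a 900,000-token prompt costs exactly 100x a
# 9,000-token prompt -- the per-token rate does not change with size.
small = request_cost("opus-4.6", 9_000, 1_000)    # $0.07
large = request_cost("opus-4.6", 900_000, 100_000)  # $7.00
```

The point of the sketch is that cost scales linearly with token count across the entire window, with no tier boundary at 200,000 tokens.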

The company also confirmed that standard rate limits apply across the entire context window, allowing developers to maintain the same throughput regardless of prompt size.

Anthropic is expanding multimodal capacity alongside the pricing update. Each request can now include up to 600 images or PDF pages, up from the previous limit of 100. The feature is available through the Claude Platform as well as through Microsoft Foundry and Google Cloud's Vertex AI.

The rollout also simplifies access to large prompts. Requests above 200,000 tokens no longer require a beta header, and existing integrations using that header will continue to function automatically.

The company says the expanded context window allows developers to work with significantly larger datasets in a single prompt. That could include entire codebases, thousands of pages of legal documents, or the full execution trace of an AI agent, including tool calls and intermediate reasoning steps.

Anthropic emphasized that long context is only useful if the model can accurately retrieve and reason across information within that window. According to the company, Claude Opus 4.6 achieved a score of 78.3% on the MRCR v2 benchmark, which it said is the highest among frontier models operating at that context length.

The 1-million-token context window is available now through the Claude Platform and via integrations with Amazon Bedrock, Google Cloud Vertex AI, and Microsoft Foundry.

Anthropic also confirmed that Claude Code users on Max, Team, and Enterprise plans using Opus 4.6 will automatically receive access to the full 1-million-token context window during sessions, reducing the need to compress or remove earlier parts of a conversation.

This analysis is based on reporting from Anthropic.

Image courtesy of AlphaSignal.

This article was generated with AI assistance and reviewed for accuracy and quality.

Last updated: March 16, 2026

