Anthropic Introduces AI Code Review Feature in Claude Code for Developers

AI News Hub Editorial
Senior AI Reporter
March 9, 2026
Anthropic on Monday introduced Code Review in Claude Code, a new automated system designed to analyze pull requests and identify potential bugs before human engineers review them, as companies grapple with a surge in code produced by AI-assisted development tools.

The feature, now available through the Claude Code web interface for Teams and Enterprise customers, can be enabled by administrators for specific repositories. Once activated, the system runs in the cloud whenever a pull request is opened, examining proposed changes and posting comments with detected issues and suggested fixes.

Anthropic says the tool is aimed at addressing a growing review bottleneck created by AI coding assistants that allow developers to generate far more code than before. “As people adopt Claude Code, we’ve been noticing that people are writing a lot more PRs than they used to,” said Cat Wu, head of product for Claude Code at Anthropic. “What that often means is now the burden is shifted onto the code reviewer because it only takes one engineer, one prompt, to put out a plausible-looking PR.”

Code Review uses multiple AI agents operating in parallel, each examining the code for different categories of problems. Their findings are aggregated into a single report posted directly on the pull request, where developers can review flagged issues and recommended changes. The system does not approve pull requests automatically; a human reviewer must still make the final decision.
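The parallel-agent pattern described above can be sketched in miniature. This is purely illustrative; Anthropic has not published its implementation, and every function name here is a hypothetical stand-in for one category-specific reviewer:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical checker functions: each "agent" scans the same diff for one
# category of problem and returns a list of findings.
def check_logic(diff):
    return [{"category": "logic", "line": 12, "note": "loop never terminates"}]

def check_security(diff):
    return [{"category": "security", "line": 30, "note": "unsanitized input"}]

def check_concurrency(diff):
    return []  # no findings in this category

AGENTS = [check_logic, check_security, check_concurrency]

def review(diff):
    """Run every agent in parallel, then aggregate findings into one report."""
    with ThreadPoolExecutor(max_workers=len(AGENTS)) as pool:
        results = pool.map(lambda agent: agent(diff), AGENTS)
    # Flatten all agents' findings into the single consolidated report that
    # would be posted as one pull-request comment.
    findings = [f for agent_findings in results for f in agent_findings]
    return sorted(findings, key=lambda f: f["line"])

report = review("...diff text...")
```

The key property the article describes is preserved: the agents run independently, but the developer sees one merged report rather than one comment per agent.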

Anthropic designed the tool to concentrate primarily on logical errors and actual bugs, rather than formatting or style suggestions. Wu said that approach helps limit unnecessary alerts that can frustrate developers using automated review tools. “People are very sensitive to false positives,” she said. “If we just focus on logic errors and we just focus on actual bugs in the code, then the false positive rate is low.”

Inside Anthropic, a similar system is already used across most pull requests. The company says that before introducing automated review internally, 16% of pull requests received substantive comments. With the system in place, that figure rose to 54%. For large pull requests with more than 1,000 lines changed, the system finds bugs in 84% of cases, identifying an average of 7.5 issues. Developers mark fewer than 1% of flagged problems as incorrect, according to the company.

The system can take time to run. Anthropic says the average review takes about 20 minutes, with the duration increasing for larger or more complex code changes. Instead of limiting analysis to modified files, the agents can examine the broader codebase to identify interactions that could introduce bugs elsewhere.

Pricing follows a token-based model typical of AI services. Anthropic estimates each review costs between $15 and $25 on average, though the total depends on the size and complexity of the code being analyzed. Administrators can set monthly spending limits and monitor usage through an analytics dashboard.
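To make the budgeting implications concrete, the arithmetic works out as follows. Only the $15–$25 per-review range comes from Anthropic's estimate; the review volumes and spending cap below are invented for illustration:

```python
# Anthropic's stated average cost range per review, in dollars.
AVG_COST_LOW, AVG_COST_HIGH = 15.0, 25.0

def monthly_estimate(reviews_per_month):
    """Rough monthly spend range for a given review volume."""
    return (reviews_per_month * AVG_COST_LOW, reviews_per_month * AVG_COST_HIGH)

def reviews_within_budget(monthly_cap):
    """Reviews that fit under a spending limit at the high-end estimate."""
    return int(monthly_cap // AVG_COST_HIGH)

low, high = monthly_estimate(200)    # a team opening ~200 PRs a month
# → (3000.0, 5000.0), i.e. roughly $3,000-$5,000 per month
cap = reviews_within_budget(2000.0)  # a $2,000 monthly limit
# → 80 reviews at the $25 high-end estimate
```

At these rates, a modest spending cap is exhausted quickly by large teams, which is presumably why per-month limits and the usage dashboard are exposed to administrators.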

The tool integrates with GitHub, automatically analyzing pull requests and posting explanations of detected issues. The system labels severity levels using colors to help developers prioritize fixes and can include a lightweight security review. Anthropic’s separate Claude Code Security product performs deeper security analysis across an entire codebase.

Wu said demand for automated reviews has grown rapidly among large enterprise users of Claude Code, including companies such as Uber, Salesforce, and Accenture. “We’ve seen a lot of growth in Claude Code, especially within the enterprise,” she said, noting that customers increasingly want tools that can manage the volume of pull requests generated by AI-assisted coding.

Anthropic is also exploring ways to run Code Review locally within developers’ workflows. Wu said interest in that capability has been strong, suggesting engineers want the tool to validate changes before submitting them for formal review.

The launch comes as Anthropic’s enterprise business expands quickly. The company says subscriptions have quadrupled since the start of the year, and that Claude Code has surpassed a $2.5 billion annualized revenue run rate in the time since its release.

This analysis is based on reporting from The New Stack.

This article was generated with AI assistance and reviewed for accuracy and quality.

Last updated: March 9, 2026
