Google said it is upgrading its Gemini Deep Research capabilities with two new autonomous research agents—Deep Research and Deep Research Max—now available in public preview through the Gemini API, marking an expansion of its AI tools aimed at enterprise and developer workflows.
The update splits the capability into two configurations: Deep Research, designed for faster, lower-cost interactive use cases, and Deep Research Max, built for more intensive, asynchronous analysis that uses extended compute to refine outputs. Both versions are powered by the Gemini 3.1 Pro model, which Google says enables more advanced reasoning and synthesis than earlier iterations released in December.
The agents are designed to automate complex research processes, allowing developers to trigger multi-step workflows with a single API call. These workflows combine data from the open web with proprietary sources, producing fully cited reports intended for use across industries including finance, life sciences, and market analysis.
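To make the single-call pattern concrete, the sketch below assembles a request payload for such a workflow. This is an illustration only: the agent names, field names, and configuration keys are hypothetical placeholders, not the actual Gemini Deep Research API schema.

```python
import json

def build_research_request(query: str, sources: list[str]) -> dict:
    """Assemble a hypothetical single-call payload for a multi-step
    research workflow. All field names are illustrative assumptions,
    not the documented Gemini API schema."""
    return {
        "agent": "deep-research",      # or "deep-research-max" for async runs
        "input": {"query": query},
        "config": {
            "sources": sources,        # open web plus proprietary feeds
            "citations": True,         # request a fully cited report
        },
    }

request = build_research_request(
    "Summarize recent funding trends in battery storage",
    ["web", "private:market-feed"],
)
print(json.dumps(request, indent=2))
```

The point of the pattern described in the article is that the developer sends one request like this and the agent handles planning, retrieval, and synthesis internally.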
A key addition is the ability to connect external data through the Model Context Protocol, letting the system access specialized datasets such as financial or market intelligence feeds. The platform also now supports multimodal inputs—including documents, images, audio, and video—and can generate charts and infographics directly within reports.
Google has added more visibility into how the system operates, including tools for users to review and adjust research plans before execution, as well as real-time streaming of intermediate reasoning steps. Developers can also configure which data sources the agent can access, including the option to limit research to private datasets.
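A minimal sketch of how such a source-access policy might be expressed and narrowed to private datasets only. The keys and values below are assumptions chosen for illustration; Google has not published this schema.

```python
# Hypothetical source-access policy for a research agent run.
# All field names are illustrative assumptions, not Google's
# documented Deep Research configuration.

def restrict_to_private(config: dict) -> dict:
    """Return a copy of the policy limited to private datasets only."""
    restricted = dict(config)
    restricted["allowed_sources"] = [
        s for s in config["allowed_sources"] if s.startswith("private:")
    ]
    restricted["allow_open_web"] = False
    return restricted

base = {
    "allowed_sources": ["web", "private:filings", "private:market-data"],
    "allow_open_web": True,
    "stream_reasoning": True,   # surface intermediate reasoning steps
    "plan_review": True,        # pause for plan approval before execution
}
private_only = restrict_to_private(base)
print(private_only["allowed_sources"])  # ['private:filings', 'private:market-data']
```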
The company said Deep Research Max improves on earlier versions by consulting a broader range of sources and better handling conflicting information, aiming to produce more structured and detailed outputs. Google is testing the system with partners in regulated sectors, including collaborations with FactSet, S&P, and PitchBook to integrate financial data into research workflows.
The release builds on infrastructure already used across Google products such as Search, NotebookLM, and the Gemini app, and signals a push to position autonomous research as a core capability within its AI platform. Google said both Deep Research and Deep Research Max will also be made available to startups and enterprises through Google Cloud.
About this article: This article was generated with AI assistance and reviewed by our editorial team to ensure it follows our editorial standards for accuracy and independence. We maintain strict fact-checking protocols and cite all sources.