Three Pillars of the Program
Google Cloud CEO Thomas Kurian outlined three core components of the initiative. The first is go-to-market support: participating startups can distribute their AI agents through Gemini Enterprise and Google Cloud Marketplace, giving them immediate access to the global enterprise customer base that Google has spent years building.
The second is development credits: substantial cloud resources for teams training and scaling AI agents on Google's infrastructure. For early-stage companies, compute costs are often the binding constraint on how fast they can iterate, and subsidizing that cost lowers a meaningful barrier to experimentation.
The third is engineering support: direct access to experts from Google DeepMind and Google Cloud. For a startup building on Gemini models, hands-on input from the teams that built the underlying models and infrastructure is a different order of support from reading documentation.
A Competition and a Cohort
Alongside the fund, Google launched the "AI Agents Challenge," a global competition running through June 5 that offers $90,000 in prizes for teams building autonomous systems with Gemini Enterprise. Each participating team receives $500 in cloud credits on entry. Selected founders will then gather at Google's Mountain View campus in June for the Gemini Startup Forum, a two-day program focused on scaling that offers direct access to Google executives. Additional sector-specific events include a cybersecurity-focused forum in London and a dedicated AI and cybersecurity summit later in the year.
New Hardware to Match the Ambition
The fund announcement arrived alongside a significant hardware reveal. Google unveiled its eighth-generation tensor processing units: two separate chips designed from the ground up for different workloads, which the company says is a first. The TPU 8T is optimized for training, connecting up to 9,600 chips in a single cluster and delivering roughly three times the processing power and four times the data-transfer speed of its predecessor. The TPU 8i is built for inference, where real-time responsiveness matters, and offers up to ten times the compute performance and seven times the memory capacity of the previous generation.
Google's AI and Infrastructure CTO Amin Vahdat framed the chip advances in terms of research compression: processes that once took a decade could now take as little as a year, particularly in drug discovery and scientific research. Whether or not that timeline proves accurate, the architectural decision to separate training and inference into distinct hardware reflects a maturing view of where AI infrastructure is heading.
What Google Is Really Buying
The $750 million fund is not purely philanthropic. Every startup that builds on Gemini and distributes through Google Cloud Marketplace deepens Google's position as the default enterprise AI platform. Amazon has AWS Marketplace. Microsoft has Azure and a deep OpenAI integration. Google has been the third player in that race, and this fund is a direct attempt to close the gap by locking in the next generation of AI agent builders before they default to a competitor's infrastructure.
For startups in the AI agent space, the program offers real and specific value: distribution, compute, and engineering support at a stage when all three are genuinely hard to get. The trade-off is building inside Google's ecosystem — which, depending on your business model and customer base, may be exactly where you want to be, or a constraint worth thinking carefully about before applying.
This analysis is based on reporting from Daily Sabah and Google Cloud.
This article was generated with AI assistance and reviewed for accuracy and quality.