Meta said Tuesday it will deploy millions of Nvidia chips across its AI data centers — including Nvidia’s standalone Grace CPUs and next-generation Vera Rubin systems — in an expanded, multiyear partnership between the two companies. Financial terms were not disclosed, though analysts said the agreement likely runs into the tens of billions of dollars. Shares of Meta and Nvidia rose in extended trading following the announcement, while AMD fell about 4%.
The deal significantly broadens a relationship that has spanned more than a decade, with Meta historically relying on Nvidia GPUs for AI workloads. This time, the agreement goes beyond graphics processors. Meta will become the first company to deploy Nvidia’s Grace CPUs as standalone chips at large scale inside its data centers, rather than pairing them exclusively with GPUs inside servers — what Nvidia called the first major deployment of Grace CPUs on their own.
Chip analyst Ben Bajarin of Creative Strategies said the standalone CPUs are designed to handle inference and agentic workloads, working alongside Nvidia’s Grace Blackwell and Vera Rubin systems. “Meta doing this at scale is affirmation of the soup-to-nuts strategy that Nvidia’s putting across both sets of infrastructure: CPU and GPU,” Bajarin said.
Meta plans to deploy Nvidia’s next-generation Vera CPUs beginning in 2027. The agreement also includes Nvidia’s Spectrum-X Ethernet networking switches, which connect GPUs inside large-scale AI data centers, and Nvidia security technologies that will support AI features on WhatsApp.
The chip commitment aligns with Meta’s aggressive AI spending plans. In January, the company said it expects to spend up to $135 billion on AI in 2026. Separately, Meta has pledged to invest $600 billion in U.S. data centers and related infrastructure by 2028. The company is building out 30 data centers, 26 of them in the U.S., including the 1-gigawatt Prometheus site in New Albany, Ohio, and the 5-gigawatt Hyperion facility in Richland Parish, Louisiana.
Securing supply is also a key part of the strategy. Nvidia’s current Blackwell GPUs have been back-ordered for months, and its next-generation Rubin GPUs have only recently entered production. The new agreement ensures Meta access to both.
While Meta deepens ties with Nvidia, it continues to diversify its chip strategy. The company develops its own in-house silicon and uses AMD chips. In November, Nvidia shares fell after reports that Meta was considering deploying Google’s tensor processing units in its data centers beginning in 2027. AMD, meanwhile, secured a notable deal with OpenAI last year as major AI players look to reduce reliance on a single supplier amid tight supply conditions.
Engineering teams from both companies will collaborate closely to co-design and optimize AI models for Meta’s infrastructure. The social media giant is currently developing a new frontier AI model, code-named Avocado, as a successor to its Llama models. The most recent Llama release last spring drew a muted response from developers, according to prior CNBC reporting.
The expanded Nvidia partnership underscores how central compute infrastructure has become to Meta’s AI ambitions. CEO Mark Zuckerberg said the collaboration supports the company’s goal of delivering “personal superintelligence to everyone in the world,” a vision he outlined in July. With data center construction underway and chip deployments locked in years in advance, Meta is making clear that its AI strategy will be built as much on hardware scale as on software models.
This analysis is based on reporting from CNBC.
This article was generated with AI assistance and reviewed for accuracy and quality.