OpenAI partners with AWS in $38 billion GPU deal as Microsoft strikes multibillion-dollar agreements with neocloud vendors
San Francisco — In a sweeping week of artificial intelligence (AI) infrastructure deals, OpenAI and Amazon Web Services (AWS) have announced a $38 billion multiyear partnership, granting the ChatGPT maker access to hundreds of thousands of Nvidia GPUs and Amazon EC2 UltraServers. The collaboration aims to bolster OpenAI's computing capacity for model training and inference, underscoring the industry's intensifying race for GPU capacity.
AWS said the combination of Nvidia GPUs and its UltraServer architecture would enable OpenAI to train the next generation of generative AI models while optimizing inference performance for ChatGPT and other services.
The agreement marks the latest in a series of expansive partnerships for OpenAI following its recent revised terms with Microsoft, which remains its largest stakeholder with a 27% share. Under the updated arrangement, OpenAI can pursue alliances with other technology providers — and it has wasted no time doing so.
In recent months, the company has forged multibillion-dollar deals with Oracle and Nvidia, and now AWS, signaling a deliberate multicloud strategy. Meanwhile, Meta also entered a $14.2 billion agreement for AI compute with neocloud vendor CoreWeave, reflecting the rapid diversification across the AI infrastructure market.
While OpenAI is broadening its partnerships, Microsoft has been expanding its own access to GPU capacity through new deals. On Monday, IREN announced a $9.7 billion, multiyear GPU cloud services agreement with Microsoft, giving Microsoft access to Nvidia GB300 GPUs over five years. Similarly, Lambda, which bills itself as a Superintelligence Cloud vendor, revealed a multibillion-dollar partnership with Microsoft to deploy AI infrastructure powered by tens of thousands of Nvidia GPUs, including GB300 NVL72 systems.
Industry analysts say these moves highlight a fundamental shift as hyperscalers and AI model developers seek to diversify their compute resources.
“They’re hedging their bets and spreading the chips around the table,” said Mark Beccue, an analyst at Omdia, a division of Informa TechTarget. “A little bit here, a little there. Not a little bit. A lot here, a lot there.”
For OpenAI, the AWS deal reinforces a multicloud approach that predates the revised Microsoft terms — including a $300 billion multiyear deal with Oracle announced in September.
“This is validation that no single cloud provider can deliver the compute OpenAI needs,” said Nick Patience, an analyst at the Futurum Group.
AWS, in turn, benefits by securing another high-profile generative AI customer beyond Anthropic, while showcasing its Nitro System and other fabric technologies.
“By improving Nitro and its hypervisor platform, AWS can increase its benefit from this type of infrastructure deal,” said Torsten Volk, an analyst at Omdia. “As venture capitalists aren’t looking kindly upon large Capex projects, OpenAI really has no choice but to strike these types of partnerships all across the industry.”
Microsoft’s partnerships with Iren and Lambda may also serve multiple purposes — from meeting its own AI compute needs to fulfilling commitments to OpenAI.
“Microsoft has not been at the forefront of model training,” said Patience. “It’s certainly done some, but it’s not out there like OpenAI, Anthropic, or Mistral.”
Despite the strategic jockeying, one constant remains: Nvidia’s dominance.
“Both deals focus on access to massive numbers of GPUs, making Nvidia the designated winner — without even being directly involved in the transactions,” Volk said.
The agreements underscore that GPUs remain the scarce resource at the center of AI development and deployment.
“All in all, striking these deals is without an alternative for both the hyperscalers and the AI companies,” Volk added. “Investors and shareholders will not be forgiving to any of them missing out on the big AI gold rush.”
Still, OpenAI faces mounting questions about profitability amid its rapid expansion.
“What’s the longevity of an OpenAI based on their business models?” Beccue asked. “I’m still curious as to when they actually become revenue positive.”
As AI infrastructure partnerships multiply across hyperscalers, neocloud vendors, and model developers, the landscape is fast evolving into one where collaboration — not exclusivity — defines the race for generative AI leadership.