Altman: ‘Everybody Underestimates the Need for AI Compute’

Altman is painfully aware of the gap between how much compute his vision of the AI-driven future will require and what will actually be available. OpenAI runs large language model (LLM) training and inference in Microsoft's Azure cloud. These models are so large that training them requires warehouses of state-of-the-art computers, takes months and costs millions of dollars.
Sam Altman (right) on stage with Pat Gelsinger at Intel Foundry's Direct Connect event. (Source: Nitin Dahad/EE Times)

Altman is likely also painfully aware that more compute depends on more chips. This market is dominated by Nvidia's H100 GPU, which is in notoriously short supply due to a combination of unprecedented demand and limited foundry and packaging capacity. Producing more AI chips of any flavor (GPU or otherwise) ultimately depends on the world's available leading-edge foundry capacity.

Can Altman really be planning to significantly influence the world’s leading-edge foundry capacity from three or four steps up the value chain? Never say never.

It seems unlikely that OpenAI could become an integrated device manufacturer (IDM) or start a foundry business itself, as both are far removed from the company's activities today. More likely, Altman is hoping that talking up AI's future will make a start on convincing the industry that more foundry capacity will be needed, so it had better start building. Note that there was no suggestion at Intel's event that OpenAI and Intel Foundry are working together in any capacity, beyond Altman's appearance on stage with Gelsinger.

Building more foundries, or more fabs, will be expensive and take years. Rapidus, in Japan, is starting from scratch and will reportedly require $54 billion of investment to open two leading-edge fabs by 2027. But $7 trillion?

“The kernel of truth [in the $7 trillion report] is that we do think investing a lot of money in AI compute, energy and data centers is going to be important to deliver the amount of services people want and the tools we are all going to get a huge amount of value out of, to help create better futures,” Altman said.

Altman added that training and running AI at the scale he has in mind will require significant investment in the entire infrastructure stack around the world—not just chips.

Altman's lofty ambitions for AI may be music to the ears of the many companies providing AI compute today, including cloud providers, data center operators, server makers, chip makers and foundries. AI demand will drive all of them.

Even with the added context, Gelsinger joked about the reported $7 trillion amount.

“[My board members] think my capital plans are pretty aggressive,” Gelsinger said. “And they ask me questions like: what wafers are going to fill those factories? Who’s going to design all those products to go on those wafers in those factories? And what’s going to be the economic cash flow returns of those factories as well? And I was only talking tens of billions—that’s before I saw the $7 trillion [figure]!”

“If I had to sit there and correct every mistake and report in the media, I would not be able to do my job,” Altman replied. “But the numbers will be big—that, we probably agree on.”