OpenAI’s Sam Altman: Building a strategic national reserve of computing power makes a lot of sense

I would like to clarify a few important points regarding OpenAI’s approach to infrastructure and government involvement.

First, and most importantly: we do **not** have or want government guarantees for OpenAI datacenters. We believe governments should not pick winners or losers, and that taxpayers should not be on the hook to bail out companies that make poor business decisions or fail in the market. If one company fails, others will continue doing good work.

What we do think might make sense is for governments to build and own their own AI infrastructure. In that case, the upside should flow back to the government as well. We can imagine a future where governments decide to offtake large amounts of computing power and determine how to use it. It may even make sense to provide a lower cost of capital to support this. Building a strategic national reserve of computing power is a sensible move—but it should be for the government’s benefit, not to subsidize private companies.

The one area where we have discussed loan guarantees is related to supporting the buildout of semiconductor fabs in the U.S. We, along with other companies, have responded to the government’s call in this area and would be happy to help (although we have not formally applied). The basic idea here is to ensure that the chip supply chain is as American as possible, bringing jobs and industrial revitalization back to the U.S. while enhancing the country’s strategic position with an independent supply chain. This benefits all American companies but is fundamentally different from governments guaranteeing datacenter buildouts that benefit private firms.

### Addressing Common Questions and Concerns

There are at least three key “questions behind the question” that understandably raise concerns:

**1. How is OpenAI going to pay for all this infrastructure it is committing to?**

We expect to finish this year with an annualized revenue run rate above $20 billion, growing to hundreds of billions by 2030. We are looking at commitments of about $1.4 trillion over the next eight years. This requires continued revenue growth, and each doubling is a significant effort!
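To put the scale of that gap in perspective, here is a rough back-of-the-envelope sketch using the figures above. The $200 billion target is an assumption (a lower bound for "hundreds of billions"), not a disclosed OpenAI projection:

```python
import math

# Illustrative figures from the text (assumptions, not OpenAI disclosures):
# ~$20B current annualized run rate; "hundreds of billions" taken as $200B.
current_run_rate = 20e9
target_revenue = 200e9

doublings = math.log2(target_revenue / current_run_rate)  # revenue doublings implied
avg_annual_commitment = 1.4e12 / 8                        # $1.4T spread over 8 years

print(f"doublings needed: {doublings:.2f}")               # roughly 3.3 doublings
print(f"average annual commitment: ${avg_annual_commitment / 1e9:.0f}B")  # ~$175B/yr
```

Even under these generous assumptions, the averaged annual commitment is several times today's run rate, which is why each doubling matters.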

That said, we are optimistic about our prospects. We are excited about upcoming enterprise offerings, and new categories such as consumer devices and robotics also have huge potential. There are even newer areas, like AI-driven scientific discovery, which are harder to quantify but extremely promising.

We’re also exploring ways to sell compute capacity directly to other companies and individuals. We strongly believe the world will need a lot of “AI cloud,” and we’re eager to provide it. While we may raise more equity or debt capital in the future, everything we see suggests demand for computing power will far outpace what we’re currently planning for.

**2. Is OpenAI trying to become “too big to fail,” and should the government pick winners and losers?**

Our answer is a firm **no**. If we make mistakes and can’t fix them, we should fail, and other companies will continue to service customers and innovate. That’s how capitalism works, and the ecosystem and economy will be fine.

Our CFO recently discussed government financing but later clarified her remarks, emphasizing that we could have been clearer. As noted above, we believe the U.S. government should develop a national strategy for its own AI infrastructure.

Tyler Cowen asked me about the federal government acting as an insurer of last resort for AI risks (similar to its role in nuclear power), which is a separate question from overbuilding capacity. I said I do think the government ends up as the insurer of last resort for certain catastrophic risks, but not in the sense of underwriting insurance policies the way it does for nuclear energy.

This was a completely different context from datacenter buildout or bailing out a company. What we’re talking about is addressing catastrophic risks—such as a rogue actor using AI to coordinate a large-scale cyberattack that disrupts critical infrastructure. In such cases, intentional misuse of AI could cause harm at a scale only governments can manage.

We do not believe the government should be writing insurance policies to protect AI companies.

**3. Why invest so heavily now, instead of growing more slowly?**

We are building infrastructure for a future economy powered by AI, and based on our research and trends in AI usage, this is the right time to scale our technology.

Massive infrastructure projects take years to build, so starting now is essential. The risk of underinvesting—and thus not having enough computing power—is more significant and likely than the risk of having excess capacity.

Even today, both OpenAI and others have to rate-limit products and hold back new features and models because of severe compute constraints.

In a future where AI enables major scientific breakthroughs—though at tremendous computational cost—we want to be ready to meet that moment. We no longer see this as distant.

Our mission is to accelerate the application of AI to hard problems, such as curing deadly diseases, and to bring the benefits of artificial general intelligence (AGI) to people as soon as possible.

We also envision a world with abundant and affordable AI, with massive demand that can enhance quality of life in countless ways.

It is a great privilege to compete in this arena and to have the conviction to build infrastructure at such scale for so important a purpose. This is the bet we are making, and given our perspective, we feel good about it.

Of course, we could be wrong—and when that happens, it will be the market, not the government, that determines the outcome.
Source: https://www.shacknews.com/article/146718/openai-ceo-sam-altman-government-ai
