In an ominous sign for the artificial intelligence industry, OpenAI reported this week that it had missed its targets for new users and revenue. 

The revelation sparked fresh worries about an AI bubble—and an imminent AI crash. According to The Wall Street Journal, OpenAI’s chief financial officer “is worried the company might not be able to pay for future computing contracts if revenue doesn’t grow fast enough.”

Roughly 3,000 data centers are currently operational in the United States, and AI companies are planning to build at least 1,500 more. It’s doubtful the industry can support so many data centers—which is one reason a crash is becoming more probable. 

According to Asad Ramzanali, director of AI and technology policy at the Vanderbilt Policy Accelerator, AI investment is on track to surpass “the Manhattan Project, the expansion of electricity, the Apollo space program, building the interstate highway system, broadband buildout during the dot-com bubble, and every other capital effort in U.S. history, except for the Louisiana Purchase and maybe the peak of railroad construction.”

Also concerning is the risky financial engineering to finance AI infrastructure. “Circular equity investment” and a heavy reliance on “private credit”—i.e., unregulated credit—are reminiscent of the sketchy financial maneuvers that contributed to the 2008 financial crisis.

In this episode of the Monthly podcast, Senior Editor Anne Kim speaks with Ramzanali about his new report, “After the AI Crash,” which offers a blueprint for how to mitigate the impacts of a potential AI-led economic catastrophe. Ending the sector’s over-financialization is one of his many recommendations.

Ramzanali is the director of AI and technology policy for the Vanderbilt Policy Accelerator. He also served as chief of staff and deputy director for strategy at the Office of Science and Technology Policy under President Joe Biden.

This transcript has been edited for length and clarity. The full interview is available on Spotify, YouTube, and iTunes.

***

Anne Kim: Let’s start with the title of your report, “After the AI Crash,” which implies that a crash is both inevitable and imminent. Why do you think a crash is imminent? In particular, can you talk about the overinvestment you highlight in your report?

Asad Ramzanali: I started the inquiry not assuming we’re in a bubble, but that if we are, we should be prepared. As I got deeper into this, I became convinced that we are in a period of overinvestment where the money going out the door in the industry, which is primarily for data centers and chips, doesn’t match the money coming in. Bain & Company estimates that annual revenue from AI will have to reach $2 trillion [to recoup all this investment].

Anne Kim: Have we ever spent this much in U.S. history on this kind of infrastructure investment?

Asad Ramzanali: There’s a great analysis done by The Wall Street Journal on capital expenditures as a percentage of GDP. And this year alone, if the 2026 capital expenditure numbers for just the hyperscalers pan out to the estimates that they’ve given, we’re talking about a higher percentage of GDP than the Manhattan Project, the expansion of electricity, the Apollo space program, the building of the interstate highway system, the broadband buildout in the ’90s, everything but the Louisiana Purchase. This nets out to about $700 billion of investment this year.

Anne Kim: Wow. And when you talk about “hyperscalers,” who are those companies?

Asad Ramzanali: “Hyperscalers” are the companies that build data centers at hyperscale. The main companies we’re talking about are Amazon, Microsoft, Google (or Alphabet), and we sometimes throw in Meta and Oracle.

Anne Kim: Let’s talk about the stakes, because your report paints two possible scenarios for a crash. The “best case” scenario is where the crash is contained more or less to the AI sector, and the other scenario is economy-wide. How would those two scenarios play out? And if there is an economy-wide crash, how will it compare to the 2008 financial crisis?

Asad Ramzanali: Let’s call the scenarios “bubble” and “crash” because when a bubble bursts, maybe a little bit of soap goes somewhere, but it’s not going to be a drastic event. People compare it to the dot-com bubble, though that also minimizes the harms. You have to remember that hundreds of thousands of people lost their jobs. A massive amount of stock wealth was lost. I remember friends’ parents having to delay their retirement.

The 2008 crash went from a sectoral crisis in housing to an economy-wide crash because of the interconnectedness of the financing tools that had inflated the housing bubble in the first place. That financial interconnectedness is what made it an economy-wide crash.

We’re seeing the same financial interconnectedness today. Tech companies now make up one-third of the stock market, and banks are invested in those tech companies in myriad ways, including private credit, structured finance, and many other pools of capital that end up going to a similar place.

That doesn’t create massive problems when you’re talking about a small sector of the economy. But when you’re talking about something this large, this big a share of our whole economy, that’s where I start to get worried about the spillover effects into the rest of the economy.

Anne Kim: Let’s talk a little more about that financialization, because it’s a really significant portion of your report. I remember the 2008 crash too, and one of the disturbing things about your report is just how much history is repeating itself. By that I mean we seem to be repeating some of the same very risky financial maneuvers that led to the 2008 crisis. I remember complex derivative securities that no one understood, layered on top of each other, until there was a default somewhere along the line and the whole thing came crashing down. I want to ask about two particular practices that I’d love for you to explain, because I think they illustrate how precarious things are right now. The first is the phenomenon of “circular equity investing.” Can you explain what that is and why we should all be really concerned about its impacts?

Asad Ramzanali: So “circular equity investing” is the idea that one company invests in another. Now that’s not new. Companies have corporate venture capital. They’ve done investments in each other. And we’ve seen centuries of history of companies lending to their customers. What does appear new and maybe unique, at least at this scale, is companies investing in unprofitable AI companies, which are raising money from venture capital, sovereign wealth funds, and a whole host of other sources, and then spending that money on the investor’s cloud computing layer or on building data centers.

These are all ways to artificially prop up revenues, and these aren’t small sums. The other thing I worry about is what happens if one part of that ecosystem becomes problematic. That cascades, because not only is any one company dependent on another for revenue, their valuations are also tied together.

Anne Kim: And if one strand breaks, the entire fabric falls apart. So if I understand this right, just to analogize to a different context, what you’re describing is if, say, a meatpacker had stock in McDonald’s, and it was selling its burgers to McDonald’s, and McDonald’s was also invested in the meatpacker.

Asad Ramzanali: It’s like that, but instead of McDonald’s—a well-known company where we can model the revenue and risks—it’s an unprofitable company in a new industry where we don’t understand what’s happening. It’s one thing to invest somewhere where you know where the revenue is coming in and what that looks like. It’s a whole other thing when the company itself is saying, “we’re not going to be profitable for at least three to five years, and even then, the profit numbers look really small.”

Anne Kim: The other phenomenon you’ve mentioned as part of this very incestuous financing is the debt that these AI companies are taking on in order to be able to make these investments in each other. Again, just layers and layers and layers of debt, which is also very reminiscent of what happened just before the financial crisis. Can you give us some examples of just how leveraged these companies have become?

Asad Ramzanali: Part of the benefit that investors saw in these big tech companies early on was they almost never took on massive debt loads. Google had $15 billion in long-term debt, which for a company of that size is not that big. And then last year, that shoots up to $50 billion, and now they’re raising a 100-year bond. Facebook’s a similar story where in 2021, they had zero debt, and then they took out $30 billion last year.

What’s tricky about this is that Facebook and others have shifted to private credit, which has no transparency, and we have little understanding of how the risks are spread across the economy. So Hyperion, their biggest data center, in Louisiana, is a $27 billion facility. That is not owned by Facebook as an asset. The debt for that $27 billion is not part of the $30 billion bond that I was just talking about. It is a separate private credit debt facility that’s in a special purpose vehicle (SPV) that Meta set up with their private equity partner for that purpose. That’s where I start to get worried: well, if there is financial pressure, that SPV is going to go under, right?

Anne Kim: When we are talking about private credit, it sounds like it’s somebody else’s money that the public at large doesn’t need to worry about, but that’s actually not true. Private credit just means that these are credit transactions that are happening privately. But a lot of times they’re using money that is coming from institutional investors—people’s pension funds or retirement accounts.

Asad Ramzanali: That’s right. If it goes bad, your life insurance or retirement fund is going to take a hit. Both the tech companies and the banks investing in or dependent on the private credit firms will take a hit when their investment goes sour in a private credit deal that didn’t work out. It’s the kind of market where even the bankers are really getting worried.

Anne Kim: That’s really frightening. When the public hears “private credit,” they really should think “unregulated credit.”  

Okay, so now that we’ve scared everybody, let’s start talking about what Congress should do to mitigate an economic catastrophe—or to help pick up the pieces if or when a crash occurs. First, it seems pretty obvious that the kind of financial engineering that we’re talking about really needs to stop. How would you go about doing that?

Asad Ramzanali: Because circular equity investing appears not to be a common practice in other industries, we should just end that practice. I love how you said we should really think about this as not private credit, but unregulated credit. We need to bring all of these systems into the more transparent, regulated part of the world where we understand risks better. I think we should require transparency in any investment tied to a data center.

The other thing distorting markets right now, which is underappreciated, is a race to the bottom among states giving tax breaks for data center construction. In some states that’s billions of dollars. We should end these subsidies, which are distorting the market and forcing the public to absorb costs that data centers impose. It should be the other way around: the builder of the data centers pays the full cost, rather than relying on economic development tax breaks.

Anne Kim: I do think that there is a public backlash now building to those data centers. Maybe political pressure can help end that particular practice.

Asad Ramzanali: It’s actually starting. Governor [JB] Pritzker in Illinois has proposed ending it. The Virginia legislature is looking at ending theirs.

Anne Kim: I live in Northern Virginia. When you drive out to Loudoun County, Virginia, which is ground zero for some of the largest data centers in the country, it’s miles upon miles of these black warehouse squares. It’s very, very dystopian. It’s like you’re driving through The Matrix or something. It’s crazy!

I also want to ask about another recommendation you have, and that is a “Glass-Steagall Act for AI.” What do you mean by that?

Asad Ramzanali: We’re going to have to go down history lane here for a second. So the Great Depression happens—massive financial crash—and Senator Glass and Representative Steagall, the heads of the Senate and House banking committees, come together on a number of structural reforms to the economy. That era is when we get the SEC, the FDIC—what’s now our modern banking regulatory infrastructure.

What we call the Glass-Steagall Act was really just four sections of the 1933 Banking Act. And the main idea was that commercial activity and investment activity shouldn’t be under the same house. You shouldn’t be able to use my deposits to make risky bets as a bank. That idea wasn’t new in the 1930s. It actually goes back to the 1694 Charter of the Bank of England, where we restricted the activities that a “bank of public consequence” could do.

Court cases and agency decisions weakened Glass-Steagall over time, and we formally repealed it in the 1990s. And then you get the 2008 crisis.

What I argue is that we have a similar risk shifting going on where speculation in one market, the AI market, is leading to overinvestment in another market, which is data centers, cloud computing, and chips.

A company like Google is completely vertically integrated. They own chip design, data centers, cloud computing, and the physical fiber that connects it all. They own the whole stack, from AI models down to where the outputs of those models reach consumers. That’s the kind of risk we should separate. We should draw a line between those. We could do a more fine-grained structural separation, where data centers also can’t own chip companies, and that is worthy of consideration. But at a minimum, we should separate the most speculative part of the market from the one that we actually depend on as a society.

Anne Kim: Let’s talk about your recommendations for dealing with the human cost. No matter what happens, we’re going to have unemployment—whether AI succeeds or if there’s a crash. You’ve got a menu of suggestions for how to help American workers, but the particular idea I want to zero in on is your recommendation for expanding unemployment insurance rather than universal basic income—UBI—which many people favor. I’m curious about why you went that route instead of jumping onto the UBI bandwagon as a lot of others have done.

Asad Ramzanali: Every time we’ve had a major financial crisis, we have expanded unemployment insurance. If people are out of jobs, the answer is to help them meet their immediate needs, and cash transfers are a good way of doing that.

UBI to me is an interesting policy idea for a whole host of reasons. My view is that it’s not a good solution for the kind of job loss we might initially experience, partly because of the amount of money involved. If you earn $100,000 today and lose your job, it’s really tough to make ends meet on a $2,000 or $3,000 check, which is about the range people talk about for UBI. We know how unemployment insurance works. We’ve done this in the past. The administration of it leaves a lot to be desired, but it’s a thing we understand, and we can act more quickly with it.

Anne Kim: You also talk about what you call a “Digital Works Progress Administration.” How would that work?

Asad Ramzanali: The Works Progress Administration—the original one—was a 1930s-era program that was part of the New Deal, where millions of Americans were out of a job and the government hired them and put them to work for public purposes. What I think might happen here is if we do see large job losses, it’ll be in knowledge work—the kinds of work that you and I do—and that’s actually administratively easier than putting people to work in physically demanding construction or other types of labor jobs. And we have the need for those jobs across levels of government.

If the problem is mass unemployment in a short period, the most direct solution is mass employment.

Anne Kim: So to get some sense of the urgency of this project, will you hazard a guess about when a crash could happen? How soon do we need to start prepping for this?

Asad Ramzanali: I don’t think it’s happening tomorrow, but I also don’t think that we’re that far away from when it happens. These things are really difficult to time because it’s partly driven by expectations of individual investors and individual companies that then cascade throughout the economy.

But to me, there are enough people saying that there might be a problem here that we as a policy community need to prepare.


Anne Kim is a Senior Editor at Washington Monthly and the author of Poverty for Profit: How Corporations Get Rich Off America’s Poor (New Press, 2024).

Anne is also a Senior Fellow at FutureEd and the author of Abandoned: America’s Lost Youth and the Crisis of Disconnection, winner of the 2020 Goddard Riverside Stephan Russo Book Prize for Social Justice. She writes about education, economics, domestic and social policy, and who has access to opportunity in America.

Anne has served as legislative director and deputy chief of staff to Rep. Jim Cooper (D-TN). She's also worked in senior roles at multiple D.C. think tanks, including the Progressive Policy Institute and Third Way, where she was director of the Economic Program and founding director of the Social Policy and Politics Program.

Anne has a bachelor's degree in journalism from the University of Missouri-Columbia and a law degree from Duke University.

Anne is on Bluesky @anne-s-kim.bsky.social.