
With the announcement of OpenAI CEO Sam Altman’s surprising ouster, this is a moment to pause and consider where we are with artificial intelligence.

The details of the Altman story will come out, but there’s plenty of speculation that the schism at OpenAI was between those who want to accelerate the pace of artificial intelligence development despite the lack of guardrails and those who urge caution. This kind of wobbly governance and high drama underscores that we need real oversight. The AI race is out of control. No government, including the United States, has issued mandatory rules for a technology that many experts warn could destroy humanity if left unregulated. Altman was arguably the leader of the pack, but whoever wins the race, we need firm rules for those running it.

Unfortunately, there’s a lot of laissez-faire thinking out there. In his 5,000-word “Techno-Optimist Manifesto,” posted online last month, Marc Andreessen, the billionaire venture capitalist, celebrates technology’s unbounded potential. “There is no material problem that cannot be solved by technology,” he writes. This certitude leads Andreessen to insist that slowing the development of artificial intelligence would be tantamount to murder: “We believe any deceleration of AI will cost lives. Deaths that were preventable by the AI that was prevented from existing is a form of murder.” Andreessen’s audacious declaration reflects the so-called effective accelerationism movement, or e/acc, which draws on philosopher Nick Land’s theory that technology will accelerate the creation of a utopia. Followers often flag the movement in their social media bios and LinkedIn profiles.

While I, too, am excited about AI’s awesome capabilities, we have to take seriously what the scientists and creators of this revolution are saying about the risks we face.

In his final years, with nothing to gain from a cautionary warning, Stephen Hawking, the theoretical physicist, came to this conclusion about AI and humanity: “The development of full artificial intelligence could spell the end of the human race… It would take off on its own and re-design itself at an ever-increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.” Hawking’s warning has become even more urgent as AI is rolled out in everything from defense to medicine. Tech leaders are not arguing that machine capabilities won’t surpass those of humans. They are debating when it will happen.

Despite the “Silicon Valley knows best” attitude of some, the AI industry itself is calling for regulation. Brad Smith, the vice chair and president of Microsoft, which is all in on AI, has said, “Companies need to step up … Government needs to move faster.” Microsoft announced this week that it is hiring Altman and Greg Brockman, OpenAI’s president. The two will head an advanced research lab at the technology giant.

Governments are taking good first steps, but they fall short. In July, the White House announced that seven companies involved in the development of artificial intelligence had voluntarily committed to managing the risks. That is no small feat, given what it takes to get the leadership of major companies to agree. On Halloween, the eve of the UK AI Safety Summit, President Joe Biden issued a 63-page executive order. Likewise, the G-7 trumpeted its International Guiding Principles on AI and a voluntary Code of Conduct, even as more companies signed on to the voluntary commitments.

These policy moves address some of the complex issues around AI risk. The problem is that they don’t require companies to take safety and security measures. Companies need only report the measures they have taken.

Governments need to be courageous and pass legislation enabling effective regulation of advanced AI, and they need to do this within a tight deadline of months, not years. Choke points, kill switches, measures to stop us from going off the cliff—all must be identified and tested now. It is wise to consider our options before they disappear. For example, allowing companies to connect the largest AI systems to the Internet before we know their capabilities could prove a catastrophic, irreversible decision.

The White House executive order and the G-7’s International Guiding Principles and voluntary Code of Conduct have given us the roadmap to formal legislation for regulating this revolutionary technology. Mandatory rules create a level playing field for all competitors. And given the global nature of this competition, governments need to work together to enforce compliance. When the European Union established the General Data Protection Regulation, an important privacy and cybersecurity measure, in 2016, companies had to comply globally, not just in Europe. This approach works, and we should use it quickly. The EU is set to finalize comprehensive AI regulation this year, including fines to enforce compliance, but those rules won’t take effect before 2025. Nobody knows how far AI will evolve in that time, making any delay a risky gamble. Ominously, Meta just disbanded its Responsible AI team. That seems to be a sign that some companies aren’t taking even the voluntary measures seriously.

There are three big strategies government can deploy. Think of them as “Go for Broke,” “Slow Down,” or “Strict Regulation.” Strict regulation based on what has already been agreed is our best bet. Get oversight in place now as we figure out the bigger plan.

We’ve been here before with Big Tech. As social media ascended during the aughts and its promoters evangelized the utopia of a connected world, there were signs of the damage major platforms could inflict. But those pointing this out were ignored. Tech leaders didn’t set out to harm teenage girls, promote religious and ethnic violence, or undermine elections—that was collateral damage in the pursuit of growth. AI’s downside could be more ominous.

This isn’t the first time companies have had to figure out how to do business responsibly. When I was at Nike, we built corporate social responsibility with an aperture wide enough to take in impacts and consequences of all kinds—on products, profits, and people—as we dealt with issues such as labor conditions. The industry found a common point on the compass as our shared goal. And companies like Nike thrived as they pursued growth and responsibility simultaneously. The AI challenge is far more formidable and will take greater collaboration among companies and governments, but the roadmap is right in front of us.

Thankfully, it’s not too late. We have a rare—if fleeting—opportunity to act before AI-driven tools become ubiquitous, their dangers normalized, and what’s been unleashed can no longer be controlled, just as we’ve seen with social media, now intertwined so deeply in our lives that it seems impossible to rein it in. We won’t get the chance to retrofit the AI industry. Companies are creating the products; they can enact the safety controls on deadline, just as other industries do. It can take, on average, 10 to 15 years to get a new drug to market safely. At this moment, AI developers can simply speed ahead, yelling out the window, “I’m working on those reports I’m required to submit!”

This is a historic moment, and we need the kind of binding collaboration we have with nuclear treaties. Companies and governments shouldn’t have the right to take their time when humanity is at risk. The question is what mechanisms we have to deploy, not how far off we think disaster is.

Once those safety measures are robust and functioning, then there’s cause for real techno-optimism, and who runs one particular company won’t matter as much.


Maria Eitel served as Nike's founding Vice President of Corporate Responsibility before founding the Nike Foundation and Girl Effect and is a board director at Cloudflare, Inc.