OpenAI CEO Sam Altman is no stranger to the concept of artificial general intelligence, or AGI, the hypothetical future moment at which machines become capable of completing intellectual tasks at the level of a human — or higher.
And when it comes to our fate as a species on Earth, Altman has some mixed feelings about the tech becoming too powerful.
With the dawn of the company’s groundbreaking chatbot tool ChatGPT, the concept has never been more relevant. In fact, over 1,100 experts, CEOs, and researchers — including SpaceX CEO Elon Musk — recently signed a letter calling for a six-month moratorium on “AI experiments” that take the technology beyond GPT-4, OpenAI’s recently released large language model.
Others have gone as far as to argue that advanced AI should be outlawed, and even that we should “destroy a rogue datacenter by airstrike” to stop the spread of a superhuman AGI.
To Altman, intriguingly, these concerns are at once rational and completely overblown. In fact, sometimes it sounds as though he's predicting whatever's convenient at the moment: danger when he needs to build hype, and safety when he needs to tamp it down.
“I try to be upfront,” he told The New York Times back in 2019. “Am I doing something good? Or really bad?”
For the CEO, the answer to that question still seems to be up for debate.
At the time, he likened OpenAI's work to the Manhattan Project, the United States' effort to develop the atomic bomb during World War II. While he told the newspaper he thought AGI could bring a huge amount of wealth to the people, he also admitted that it may end up ushering in the apocalypse.
Now that ChatGPT is out in the open and the discussion surrounding the safety of AI is at a fever pitch — something that has brought tremendous wealth to OpenAI — Altman is singing a notably different tune.
Altman is now arguing that the concerns voiced in the recently published letter are overblown.
“The hype over these systems — even if everything we hope for is right long term — is totally out of control for the short term,” the CEO told the NYT in a more recent interview, adding that we have enough time to get ahead of these problems.
In many ways, it’s a pretty convenient rhetorical tack to take for a CEO who has directly benefited from bringing tools like GPT-4 to the market.
“In a single conversation,” Kelly Sims, a board adviser to OpenAI, told the NYT, “he is both sides of the debate club.”
And while, according to The Wall Street Journal, Altman doesn’t have a direct stake in the financial success of OpenAI, he clearly has plenty to gain from reassuring investors that AGI isn’t about to bring down civilization.
But is money even a motivator for Altman, an investor who had already amassed a fortune long before taking the reins at OpenAI?
Longtime mentor Paul Graham told the NYT that it's likely because "he likes power," and that working on something that won't make him richer is what "lots of people do once they have enough money, which Sam probably does."
In short, Altman seems to want to remake the world — even if he’s not sure how his tech will remake it.
And whether he’s figuratively working on the atomic bomb that could wipe out humanity or tech that could save it doesn’t seem to matter much to him.