
Why tech bosses are both pushing AI and warning against it

Are Artificial Intelligence execs more worried about an AI apocalypse - or about regulation?

June 01 2023, 11.26am

Turkeys don’t vote for Christmas, and the heads of companies ploughing billions of dollars into the development of new artificial intelligence (AI) tech don’t willingly compare themselves to pandemics and nuclear war.

Yet that’s exactly what has happened: the leaders of OpenAI, Google DeepMind, and major AI company Anthropic; the chief scientific and technical officers of Microsoft, which has invested more than $10 billion into OpenAI; and the creators of ChatGPT, have all put their names to a statement published by San Francisco non-profit the Center for AI Safety.

The statement itself is no long treatise: at just 22 words, it’s short enough to fit into a tweet. But its message is powerful. “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” it says.

At the top of the list of signatories are the likes of Geoffrey Hinton, an emeritus professor of computer science at the University of Toronto, who recently left Google's AI team warning of the risks of untrammelled AI development, and Yoshua Bengio, a Montreal computer scientist who, alongside Hinton and Yann LeCun, is often described as one of the godfathers of AI; notably, Bengio is the only one of the three who never took a job at a big tech company.

That Hinton leads the list of signatories is unsurprising. He very publicly resigned from Google earlier this month, concerned that the tech giant was pursuing supremacy over Microsoft at the expense of responsible development – something Google disputed. Many others among the 350 people who have put their name to the statement share similar concerns. They are the sensible middle: rattled by the pace of development and worried that corners are being cut, or that discussions about propriety are being rushed through for fear of being left in the dust by the competition.


These concerned insiders are joined by agitated outsiders, who might seem like unusual company for the big tech boosters and executives. They're the existential-threat doomers, the digital equivalent of preppers, who have been warning, Cassandra-like, that an all-seeing, all-knowing AI is just around the corner, waiting to enslave us. (So far, their doom-laden predictions haven't come to fruition; that said, we've arguably never seen as much development in the field of AI as we have in the last six months or so.)

But there is another, much more curious, group of signatories: the very people who poured rocket fuel onto the AI development race. Chief among them are Sam Altman, the CEO of OpenAI, and his OpenAI co-founder Ilya Sutskever. These are the same people who released AI tools like ChatGPT in beta form, to be used and improved by more than 100 million people. They are joined by Demis Hassabis, the CEO of OpenAI's chief competitor, Google DeepMind, and a number of other significant players, including Emad Mostaque, the CEO of Stability AI, which produces the AI image generator Stable Diffusion, and Kevin Scott and Eric Horvitz, CTO and CSO of Microsoft respectively. So why are the people rocking the boat also the ones sounding the alarm?

The reason becomes clear when you look at the broader context. In the two decades since social media elevated “disruption” to a virtue, we’ve become much more canny about how we approach tech. We know that Silicon Valley companies don’t actually need zero boundaries or restraints to grow big, despite what their executives told us in the early stages of their development. We learned that sunlight isn’t the best disinfectant, or at least not the only one required, whatever they claimed when they put ostentatiously few rules on their platforms.

Bluntly: we learned not to trust tech to keep its own house in order.

Which is why the leaders of the companies spearheading the AI revolution are so keen to get ahead of the problem.

They know that we’ve been burned by the actions of the last generation of tech giants, and that we no longer trust their motives. They know that regulation, in one form or another, is inevitable. And so, if it’s going to happen, they’d like to have a hand in shaping it.

It’s notable that OpenAI’s Altman signed this letter a week after starting a “world tour” of capitals, meeting politicians who are looking to draw a lasso around his company’s development. On that same tour, Altman waved a stick as well as dangling a carrot, saying that OpenAI might be forced to leave the EU if the rules brought in to regulate it were too stringent. It’s also notable that Google DeepMind’s Hassabis put his name to the letter shortly after his ultimate boss, Sundar Pichai, made a similar tour of capitals to meet government leaders.

This week's statement – kept deliberately short and vague, not actually saying that AI is currently an existential risk, nor that it definitely will be, but instead that we should keep an eye on it in case it becomes one – has arguably gained more attention than a previous open letter, published in March and signed by hundreds more people than the current one.

That’s partly because the leaders of AI companies have signed this simpler one, while staying away from the previous letter. But it’s important to keep the full range of reasons for their participation in mind.

Among those reasons is the possibility that the executives don’t themselves believe the risks of AI are all that real, or that severe, but know that politicians are now far more aware of the need to regulate AI, and far more willing to step in – in lockstep with their voters. (More than half of Americans believe Congress should take “swift action to regulate AI”.)

But the executives do know that the risks of AI regulation are real – to them – and figure it’s better to be inside the tent looking out than outside the tent looking in.
