The Deadly Gamble on Super A.I.

Years could be spent quibbling with every element of this analysis. So I will be very clear: it is not about precision. Rather, it’s about scale and reasonableness. I can’t begin to pinpoint the precise level of danger we face from A.I. Nobody can. But I’m confident that we’re multiple orders of magnitude away from the risk levels we accept in the ordinary, nonexponential world.

When a worst-case scenario could kill us all, five nines of confidence, or 99.999 percent, is the probabilistic equivalent of 75,000 certain deaths. It’s impossible to put a price on that. But we can note that this is 25 times the death toll of the 9/11 attacks — and the world’s governments spend billions per week fending off would-be sequels to that. Two nines of confidence, or 99 percent, maps to a certain disaster on the scale of World War II. What would we do to avoid that? As for our annual odds of avoiding an obliterating asteroid, they start at eight nines with no one lifting a finger. Yet we’re investing to improve those odds. We’re doing so rationally. And the budget is growing fast (it was just $20 million in 2012).
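The arithmetic behind those comparisons is simple expected value: the probability of total catastrophe times the number of people at stake. Here is a minimal sketch, assuming a world population of roughly 7.5 billion, which is the figure the 75,000 number implies; the exact population is my assumption, not something the essay states:

```python
# Expected-death arithmetic behind the "nines of confidence" comparisons.
# Assumption: a world population of ~7.5 billion (back-of-envelope only).

WORLD_POPULATION = 7_500_000_000

def expected_deaths(nines: int, population: int = WORLD_POPULATION) -> float:
    """Expected deaths if the worst case kills everyone and we survive
    with probability 1 - 10**(-nines), i.e. "nines" of confidence."""
    p_catastrophe = 10 ** -nines
    return p_catastrophe * population

for nines in (5, 2, 8):
    print(f"{nines} nines -> {expected_deaths(nines):,.0f} expected deaths")

# Output:
# 5 nines -> 75,000         (25x the 9/11 death toll)
# 2 nines -> 75,000,000     (roughly World War II scale)
# 8 nines -> 75             (the annual asteroid baseline)
```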

So, what should we do about this?

I studied Arabic in college, and these days, I podcast — which is to say, I truly have no idea. But even I know what we should not do, which is to invest puny resources against this outcome compared to what we spend on airbags and killer asteroids. A super A.I.’s threat profile might easily resemble that of the Cold War. Which is to say: quite uncertain but all too plausible and, much as we’d hate to admit it, really fucking huge. Humanity didn’t cheap out on navigating its path through the Cold War. Indeed, we spent trillions. And for all the blunders, crimes, and missteps along the way, we came out of that one pretty okay.

I’ll close by noting that the danger here is most likely to stem from the errors of a cocky, elite group, not the malice of a twisted loner. Highly funded and competitive fields studded with geniuses leave no space for solitary wackos to seize the agenda. The dynamics that brought us the Titanic, the Stuxnet virus, World War I, and the financial crisis are far more worrying here.

These wouldn’t be bad guys running amok. They’d mostly be neutral-to-good guys cutting corners. We don’t know the warning signs of a super A.I. project veering off the rails, because no one has ever completed one. So corners might be cut out of ignorance. Or to beat Google to market. Or to make sure China doesn’t cross the finish line first. As previously noted, safety concerns can melt away in headlong races, particularly if both sides think an insuperable geopolitical advantage will accrue to the victor. And it doesn’t take something of global consequence to get things moving. Plenty of people are constitutionally inclined to take huge, reckless chances for even a modest upside. Daredevils will risk it all for a small prize and some glory, and society allows this. But the ethics mutate when a lucrative private gamble imperils everyone. Which is to say, when apocalyptic risk is privatized.

Imagine a young, selfish, unattached man who stands to become grotesquely rich if he helps his startup make a huge A.I. breakthrough. There’s a small chance that things could go horribly wrong — and a lack of humility inclines him to minimize this risk. From time immemorial, migrants, miners, and adventurers have accepted far greater personal dangers to chase far smaller gains.

We can’t sanely expect that all of tomorrow’s startup talent will shun this calculus out of respect for strangers, foreigners, or those yet unborn. Many might. But others will sneer at every argument in essays like this, on the grounds that they’re way smarter than Stephen Hawking, Elon Musk, and Bill Gates combined. We all know someone like this. Those of us in tech may know dozens. And the key breakthroughs in this field might require only a tiny handful of them, joined into a confident, brilliant, and highly motivated team.