
How Tech Empowers Dangerous Lone Wolves

For almost all of history, our own extinction was rarely much of a topic beyond the realms of prophecy, chicanery, or science fiction. Humans were widespread and resilient, catastrophes were local, and weapons were mostly interpersonal devices.

The Cold War changed all that. We survived it for many reasons, including a lucky distribution of unusually level heads (Google “Stanislav Petrov” or “Vasili Arkhipov,” for instance). Another factor was that only a rotating cast of two-ish people were fully empowered to annihilate us all. Despite their many faults, none of these folks were suicidal desperados. Human extinction then faded as an issue after the USSR’s dying whimper.

Around the turn of the millennium, two thinkers returned it to the agenda. Bill Joy, co-founder of Sun Microsystems, wrote a resounding Wired cover story titled “Why the Future Doesn’t Need Us.” Later, Martin Rees, the U.K.’s Astronomer Royal, released Our Final Hour — the masterpiece of chilling speculation I cited in part one of this essay series last week (when I also posted a somewhat related podcast interview with Lord Rees).

To distill their nuanced works (each of which merits a complete reading to this day): certain technologies on the intermediate horizon could one day present perils far deeper than a mere nuclear winter. As a top technologist and a top scientist, respectively, Joy and Rees couldn’t be dismissed as shrieking Luddites. And though spooky, their thinking was also thrillingly new to most of us at the time. Years of water-cooler rehashing ensued throughout the corridors of tech and science.

Joy and Rees lay out multiple risks. Each diverges from your father’s atomic doomsday in that nuclear nations needn’t be cast members. This is a gravely escalating factor, because as scary as such nations are, their numbers will remain tractable despite nuclear proliferation.

Far less tractable is the number of large companies, which might take risky shortcuts chasing trillion-dollar breakthroughs with terrifying long-shot side effects. Even less tractable is the number of erratic startups, which might do the same over a somewhat longer span. Completely intractable is the number of suicidal murderers humanity will generate over the coming decades. And as the ability to create — or merely access — lethal technologies moves down this stack, the hazards will mount.

Much will hinge on the peculiar mix of dangers, incentives, and safety mechanisms that arise as we relentlessly develop certain exponential technologies. No person, group, or nation has a monopoly on any of this. And multiple headlong races between bitter rivals (both countries and companies) are well underway. Caution can vanish when finish lines are at stake. So this race dynamic could be the situation’s gravest aspect.

Today, artificial intelligence (A.I.) and synthetic biology (synbio) are most cited when existential risk is discussed. They may later be joined by nanotechnology, perhaps some forms of geoengineering, plus God knows what.

And to repeat: Should existential threats emerge from these domains, assembly will not require the organized might of nations. Key advances in A.I. and synbio frequently arise from brainy teams of just dozens. Their main outputs are often information and digital methods. These sorts of advances can diffuse with minimal friction — allowing small teams to perch atop mountains of shoulders, spreading the breakthrough potential even wider. Key hardware is often general-purpose gear, the specs of which improve at compounding, exponential rates. Speedy horsepower growth is thus akin to a cheap utility, available to all. Leveraging this, a team might rapidly become hundreds of times more effective without adding a single member.

The Human Genome Project and its aftermath epitomize this. It took 13 years, $3 billion, and thousands of biology’s sharpest minds to sequence a single haploid genome. A decade and a half later, lone lab techs routinely accomplish quite a bit more than this in a single day. A parallel scenario with the Manhattan Project would have put atomic bombs in countless garages and college labs by the early 1960s. Performance improvements in synbio are meanwhile accelerating, not slowing.

So: Do your best to imagine the field’s collective output over the coming decade — and then imagine that gradually becoming an easy day’s work for latter-day undergrads.

Would that be a safer world than one with thousands of sovereign nuclear powers? Perhaps so in 2028. Perhaps not in 2038. There’s no way of knowing—which is terrifying—but we can safely predict that whatever undergrads can achieve in the near future, high school kids will be still more capable shortly thereafter. Then smart eighth-graders. Then dumb fifth-graders. If that doesn’t sound absurd, please reread it, because it really should.

But it’s not absurd. We’ve seen similar things happen repeatedly, and not just in genomics. Imagine, say, the horsepower, data, and services the CIA could cram into its best mobile device in 2005. Foretelling that billions of us would soon pack orders of magnitude more than that would have seemed delusional. But here we are.