
Sooner or later a technology capable of wiping out human civilisation might be invented. How far would we go to stop it?


One way of looking at human creativity is as a process of pulling balls out of a giant urn. The balls represent ideas, discoveries and inventions. Over the course of history, we have extracted many balls. Most have been beneficial to humanity. The rest have been various shades of grey: a mix of good and bad, whose net effect is difficult to estimate.

What we haven’t pulled out yet is a black ball: a technology that invariably destroys the civilisation that invents it. That’s not because we’ve been particularly careful or wise when it comes to innovation. We’ve just been lucky. But what if there’s a black ball somewhere in the urn? If scientific and technological research continues, we’ll eventually pull it out, and we won’t be able to put it back in. We can invent but we can’t un-invent. Our strategy seems to be to hope that there is no black ball.

Thankfully for us, humans’ most destructive technology to date – nuclear weapons – is exceedingly difficult to master. But one way to think about the possible effects of a black ball is to consider what would happen if nuclear reactions were easier. In 1933, the physicist Leo Szilard got the idea of a nuclear chain reaction. Later investigations showed that making an atomic weapon would require several kilos of plutonium or highly enriched uranium, both of which are very difficult and expensive to produce. However, imagine a counterfactual history in which Szilard realised that a nuclear bomb could be made in some easy way – over the kitchen sink, say, using a piece of glass, a metal object and a battery.

Szilard would have faced a dilemma. If he didn’t tell anyone about his discovery, he would be unable to stop other scientists from stumbling upon it. But if he did reveal his discovery, he would guarantee the further spread of dangerous knowledge. Imagine that Szilard confided in his friend Albert Einstein, and they decided to write a letter to the president of the United States, Franklin D Roosevelt, whose administration then banned all research into nuclear physics outside of high-security government facilities. Speculation would swirl around the reason for the heavy-handed measures. Groups of scientists would wonder about the secret danger; some of them would figure it out. Careless or disgruntled employees at government labs would let slip information, and spies would carry the secret to foreign capitals. Even if by some miracle the secret never leaked, scientists in other countries would discover it on their own.

Or perhaps the US government would move to eliminate all glass, metal and sources of electrical current outside of a few highly guarded military depots? Such extreme measures would meet with stiff opposition. However, after mushroom clouds had risen over a few cities, public opinion would shift. Glass, batteries and magnets could be seized, and their production banned; yet pieces would remain scattered across the landscape, and eventually they would find their way into the hands of nihilists, extortionists or people who just want ‘to see what would happen’ if they set off a nuclear device. In the end, many places would be destroyed or abandoned. Possession of the proscribed materials would have to be harshly punished. Communities would be subject to strict surveillance: informant networks, security raids, indefinite detentions. We would be left to try to somehow reconstitute civilisation without electricity and other essentials that are deemed too risky.

That’s the optimistic scenario. In a more pessimistic scenario, law and order would break down entirely, and societies would split into factions waging nuclear wars. The disintegration would end only when the world had been ruined to the point where it was impossible to make any more bombs. Even then, the dangerous insight would be remembered and passed down. If civilisation arose from the ashes, the knowledge would lie in wait, ready to pounce once people started again to produce glass, electrical currents and metal. And, even if the knowledge were forgotten, it would be rediscovered when nuclear physics research resumed.

In short: we’re lucky that making nuclear weapons turned out to be hard. We pulled out a grey ball that time. Yet with each act of invention, humanity reaches anew into the urn.

Suppose that the urn of creativity contains at least one black ball. We call this ‘the vulnerable world hypothesis’. The intuitive idea is that there’s some level of technology at which civilisation almost certainly gets destroyed, unless quite extraordinary and historically unprecedented degrees of preventive policing and/or global governance are implemented. Our primary purpose isn’t to argue that the hypothesis is true – we regard that as an open question, though it would seem unreasonable, given the available evidence, to be confident that it’s false. Instead, the point is that the hypothesis is useful in helping us to bring to the surface important considerations about humanity’s macrostrategic situation.

The above scenario – call it ‘easy nukes’ – represents one kind of potential black ball, where it becomes easy for individuals or small groups to cause mass destruction. Given the diversity of human character and circumstance, for any imprudent, immoral or self-defeating action, there will always be some fraction of humans (‘the apocalyptic residual’) who would choose to take that action – whether motivated by ideological hatred, nihilistic destructiveness or revenge for perceived injustices, as part of some extortion plot, or because of delusions. The existence of this apocalyptic residual means that any sufficiently easy tool of mass destruction is virtually certain to lead to the devastation of civilisation.

This is one of several types of possible black balls. A second type would be a technology that creates strong incentives for powerful actors to cause mass destruction. Again, we can turn to nuclear history: after the invention of the atomic bomb, an arms race ensued between the US and the Soviet Union. The two countries amassed staggering arsenals; by 1986, together they held more than 60,000 nuclear warheads – more than enough to devastate civilisation.

Fortunately, during the Cold War, the world’s nuclear superpowers didn’t face strong incentives to unleash nuclear Armageddon. They did face some incentives to do so, however. Notably, there were incentives for engaging in brinkmanship; and, in a crisis situation, there was some incentive to strike first to pre-empt a potentially disarming strike by the adversary. Many political scientists believe that an important factor in explaining why the Cold War didn’t lead to a nuclear holocaust was the development, by the mid-1960s, of more secure ‘second strike’ capabilities by both superpowers. The ability of both countries’ arsenals to survive a nuclear strike by the other and then launch a retaliatory assault reduced the incentive to launch an attack in the first place.

But now consider a counterfactual scenario – a ‘safe first strike’ – in which some technology made it possible to destroy an adversary completely before they could respond. If such an option existed, mutual fear could easily trigger a dash to all-out war. Even if neither power desired the destruction of the other, one of them might nevertheless feel compelled to strike first, to avert the risk that the other side’s fear would lead it to do the same. We can make the counterfactual even worse by supposing that the weapons involved are easy to hide; that would make it infeasible for the parties to design a trustworthy verification scheme for arms reduction that might resolve their security dilemma.
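To see the strategic logic in miniature, consider the following sketch in Python. The payoff numbers are our own illustrative assumptions, not anything established in the essay: mutual restraint is best, being struck first is catastrophic, and a ‘safe’ first strike carries only the cost of the war it starts.

```python
# A sketch of the 'safe first strike' security dilemma.
# All payoff numbers are hypothetical illustrations.

PEACE = 0          # both sides wait: the status quo
DESTROYED = -100   # you wait while the adversary strikes first
STRIKE_COST = -10  # you strike first: you survive, but start a war
                   # (a 'safe' strike disarms the adversary, so this
                   # payoff doesn't depend on what they intended)

def expected_payoff_of_waiting(q: float) -> float:
    """Expected payoff of waiting, given a belief that the adversary
    will strike first with probability q."""
    return (1 - q) * PEACE + q * DESTROYED

def tipping_point() -> float:
    """The suspicion level above which striking first looks better.
    Solves STRIKE_COST = q * DESTROYED for q."""
    return STRIKE_COST / DESTROYED

for q in (0.05, 0.10, 0.20):
    print(f"suspicion {q:.0%}: wait = {expected_payoff_of_waiting(q):.0f}, "
          f"strike first = {STRIKE_COST}")
print(f"striking first 'pays' once suspicion exceeds {tipping_point():.0%}")
```

With these hypothetical numbers, striking first becomes the ‘rational’ choice as soon as one side believes there is more than a 10 per cent chance that the adversary will strike; and since each side knows the other faces the same calculation, suspicion feeds on itself.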

Climate change can illustrate a third type of black ball; let’s call this scenario ‘worse global warming’. In the real world, human-caused emissions of greenhouse gases are likely to result in an average temperature rise of between 3.0 and 4.5 degrees Celsius by 2100. But imagine that the Earth’s climate sensitivity parameter were different from what it is, such that the same carbon emissions would cause far more warming than scientists currently predict – a rise of 20 degrees, say. To make the scenario worse, imagine that fossil fuels were even more abundant, and clean energy alternatives even more expensive and technologically challenging, than they actually are.
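The role of the sensitivity parameter can be made concrete with a standard simplification: equilibrium warming scales with the logarithm of the rise in CO2 concentration, multiplied by a sensitivity constant. The sketch below uses purely illustrative numbers (the concentration ratio and both sensitivity values are assumptions) to show how the counterfactual follows from a larger parameter, not from greater emissions.

```python
import math

# A minimal sketch of the role of the climate sensitivity parameter.
# Standard simplification: equilibrium warming scales with the log of
# the CO2 concentration ratio. All numbers are illustrative.

def equilibrium_warming(sensitivity_per_doubling: float,
                        co2_ratio: float) -> float:
    """Warming (deg C) for a given climate sensitivity (deg C per CO2
    doubling) and a ratio of final to pre-industrial CO2 concentration."""
    return sensitivity_per_doubling * math.log2(co2_ratio)

co2_ratio = 2.5  # hypothetical end-of-century vs pre-industrial CO2

# Roughly our world: ~3 deg C per doubling lands in the range the
# essay cites.
print(equilibrium_warming(3.0, co2_ratio))   # ~4.0 deg C

# The 'worse global warming' counterfactual: a far larger sensitivity
# turns the very same emissions into catastrophic warming.
print(equilibrium_warming(15.0, co2_ratio))  # ~19.8 deg C
```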

Unlike the ‘safe first strike’ scenario, where there’s a powerful actor who faces strong incentives to take some difficult and enormously destructive action, the ‘worse global warming’ scenario requires no such actor. All that’s required is a large number of individually insignificant actors – electricity users, drivers – who all have incentives to do things that contribute very slightly to what cumulatively becomes a civilisation-devastating problem. What the two scenarios have in common is that incentives exist that would encourage a wide range of normally motivated actors to pursue actions that devastate civilisation.

It would be bad news if the vulnerable world hypothesis were correct. In principle, however, there are several responses that could save civilisation from a technological black ball. One would be to stop pulling balls from the urn altogether, ceasing all technological development. That’s hardly realistic, though; and even if it could be done, it would be extremely costly – to the point of constituting a catastrophe in its own right.

Another theoretically possible response would be to fundamentally reengineer human nature to eliminate the apocalyptic residual; we might also do away with any tendency among powerful actors to risk civilisational devastation even when vital national security interests are served by doing so, as well as any tendency among the masses to prioritise personal convenience when this contributes an imperceptible amount of harm to some important global good. Such global preference reengineering seems very difficult to pull off, and it would come with risks of its own. It’s also worth noting that partial success in such preference reengineering wouldn’t necessarily bring a proportional reduction in civilisational vulnerability. For example, reducing the apocalyptic residual by 50 per cent wouldn’t cut the risks from the ‘easy nukes’ scenarios in half, since in many cases any lone individual could single-handedly devastate civilisation. We could only significantly reduce the risk, then, if the apocalyptic residual were virtually entirely eliminated worldwide.
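The arithmetic behind this point is worth making explicit. If any one member of the residual can single-handedly devastate civilisation, the overall risk is the chance that at least one of them acts – and that quantity barely moves when the residual is halved. A minimal sketch, with purely hypothetical numbers for the residual’s size and each member’s propensity to act:

```python
# Why halving the 'apocalyptic residual' doesn't halve the risk.
# If any single actor can devastate civilisation, the chance of
# devastation is the chance that at least ONE of them acts.
# All numbers below are purely illustrative assumptions.

def prob_devastation(n_actors: int, p_each: float) -> float:
    """Probability that at least one of n_actors independently
    attempts (and, per the scenario, succeeds at) mass destruction."""
    return 1 - (1 - p_each) ** n_actors

p = 0.001        # hypothetical chance that each residual member acts
full = 10_000    # hypothetical size of the apocalyptic residual
halved = full // 2

print(f"full residual:   {prob_devastation(full, p):.4%}")
print(f"halved residual: {prob_devastation(halved, p):.4%}")
# With these numbers both probabilities exceed 99%: the risk falls
# appreciably only once the residual is almost entirely eliminated.
```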

That leaves two options for making the world safe against the possibility that the urn contains a black ball: first, extremely reliable preventive policing that could stop any individual or small group from carrying out highly dangerous illegal actions; and second, strong global governance that could solve the most serious collective action problems and ensure robust cooperation between states – even when they have strong incentives to defect from agreements or refuse to sign on in the first place. These governance gaps are the two Achilles’ heels of the contemporary world order. So long as they remain unprotected, civilisation stays vulnerable to a technological black ball. Unless and until such a discovery emerges from the urn, however, it’s easy to overlook how exposed we are.

Let’s consider what would be required to protect against these vulnerabilities.

Imagine that the world finds itself in a scenario akin to ‘easy nukes’. Say somebody discovers a very simple way to cause mass destruction, information about the discovery spreads, and the materials are ubiquitously available and cannot quickly be removed from circulation. To prevent devastation, states would need to monitor their citizens closely enough to let them intercept anyone who begins preparing an act of mass destruction. If the black ball technology is sufficiently destructive and easy to use, even a single person evading the surveillance network would be completely unacceptable.

For a picture of what a really intensive level of surveillance could look like, consider the following sketch of a ‘high-tech panopticon’. Every citizen would be fitted with a ‘freedom tag’ (the Orwellian overtones being of course intentional, to remind us of the full range of ways in which such a system could be applied). A freedom tag might be worn around the neck and equipped with multidirectional cameras and microphones that would continuously upload encrypted video and audio to computers that interpret the feeds in real time. If signs of suspicious activity were detected, the feed would be relayed to one of several ‘patriot monitoring stations’, where a ‘freedom officer’ would review the feed and determine an appropriate action, such as contacting the tag-wearer via a speaker on the freedom tag – to demand an explanation or request a better view. The freedom officer could dispatch a rapid response unit, or maybe a police drone, to investigate. If a wearer refused to desist from the proscribed activity after repeated warnings, authorities could arrest him or her. Citizens wouldn’t be permitted to remove the tag, except in places that had been fitted with adequate external sensors.

In principle, such a system could feature sophisticated privacy protections, and could redact identity-revealing data such as faces and names unless needed for an investigation. Artificial intelligence tools and human oversight could closely monitor freedom officers to prevent them from abusing their authority. Building a panopticon of this kind would require substantial investment. But thanks to the falling price of the relevant technologies, it could soon become technically feasible.
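For concreteness, the escalation ladder described above can be restated as a short schematic – our own rendering, not a design from the essay, with hypothetical names and thresholds throughout:

```python
from enum import Enum, auto

# A schematic of the escalation ladder sketched above: automated
# flagging, review by a human 'freedom officer', warnings, then
# dispatch. Thresholds and names are hypothetical illustrations.

class Action(Enum):
    NO_ACTION = auto()
    CONTACT_WEARER = auto()   # demand an explanation / request a better view
    DISPATCH_UNIT = auto()    # rapid-response unit or police drone
    ARREST = auto()

def escalate(suspicion_score: float, warnings_ignored: int) -> Action:
    """Map an automated suspicion score and the wearer's response
    history to the next step in the escalation ladder."""
    if suspicion_score < 0.5:          # hypothetical threshold: the feed
        return Action.NO_ACTION        # never leaves the classifier
    if warnings_ignored == 0:
        return Action.CONTACT_WEARER   # a human officer reviews, then speaks
    if warnings_ignored < 3:           # 'repeated warnings', per the essay
        return Action.DISPATCH_UNIT
    return Action.ARREST
```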

That’s not the same thing as being politically feasible. Resistance to such steps, however, might subside once a few major cities had been wiped out. There would likely be strong support for a policy which, for the sake of forestalling another attack, involved massive privacy invasions and civil rights violations such as incarcerating 100 innocent people for every genuine plotter. But when civilisational vulnerabilities aren’t preceded or accompanied by such incontrovertible evidence, the political will for such robust preventive action might never materialise.

Or consider again the ‘safe first strike’ scenario. Here, state actors confront a collective action problem, and failing to solve it means civilisation gets devastated by default. With a new black ball, the collective action problem will almost certainly present extreme and unprecedented challenges – yet states have frequently failed to solve much easier collective action problems, as attested by the pockmarks of war that cover human history from head to foot. By default, therefore, civilisation gets devastated. With effective global governance, however, the solution is almost trivial: simply prohibit all states from wielding the black ball destructively. (By effective global governance, we mean a world order with one decision-making entity – a ‘singleton’. This is an abstract condition that could be satisfied through different arrangements: a world government; a sufficiently powerful hegemon; a highly robust system of inter-state cooperation. Each arrangement comes with its own difficulties, and we take no stand here on which is best.)

Some technological black balls could be addressed with preventive policing alone, and some with global governance alone. Others, however, would require both. Consider a biotechnological black ball powerful enough that a single malicious use could cause a pandemic killing billions of people – an ‘easy nukes’-type situation. In this scenario, it would be unacceptable for even a single state to fail to put in place the machinery necessary for continuous surveillance of its citizens to prevent malicious use with virtually perfect reliability. A state that refused to implement the requisite safeguards would be a delinquent member of the international community, akin to a ‘failed state’. A similar argument applies to scenarios such as ‘worse global warming’, in which some states might be inclined to free-ride on the costly efforts of others. An effective global governance institution would then be needed to compel every state to do its part.

None of this seems very appealing. A system of total surveillance, or a global governance institution capable of imposing its will on every nation, could have very bad consequences. Improved means of social control could help protect despotic regimes from rebellion; and surveillance could enable a hegemonic ideology or an intolerant majority view to impose itself on all aspects of life. Global governance, meanwhile, could reduce beneficial forms of inter-state competition and diversity, creating a world order with a single point of failure; and, being so far removed from individuals, such an institution might be perceived to lack legitimacy, and be more susceptible to bureaucratic sclerosis or political drift away from the public interest.

Yet as difficult as many of us find them to stomach, stronger surveillance and global governance could also have various good consequences, aside from stabilising civilisational vulnerabilities. More effective methods of social control could reduce crime and alleviate the need for harsh criminal penalties. They might foster a climate of trust that enables beneficial new forms of social interaction to flourish. Global governance could prevent all kinds of interstate wars, solve many environmental and other commons problems, and over time perhaps foster an enlarged sense of cosmopolitan solidarity. Clearly, there are weighty arguments for and against moving in either direction, and we offer no judgment here about the balance of these arguments.

What about the question of timing? Even if we became seriously concerned that the urn of invention contained a black ball, we might not need to establish stronger surveillance or global governance right now. Perhaps we could take those steps later, if and when the hypothetical threat comes clearly into view.

We should, however, question the feasibility of a wait-and-see approach. As we’ve seen, throughout the Cold War, the two superpowers lived in continuous fear of nuclear annihilation, which could have been triggered at any time by accident or as the result of some spiralling crisis. This risk would have been substantially reduced simply by getting rid of all or most nuclear weapons. Yet, after more than half a century, we’ve still seen only limited disarmament. So far, the world has proved unable to solve this most obvious of collective action problems. This doesn’t inspire confidence that humanity would quickly develop an effective global governance mechanism, even should a clear need for one present itself.

Even if one felt optimistic that an agreement could eventually be reached, international collective action problems can resist solution for a long time. It would take time to explain why such an arrangement was necessary, to negotiate a settlement and hammer out the details, and to set it up. But the interval between a risk becoming clearly visible and the point when stabilisation measures must be in place could be short. So it might not be wise to rely on spontaneous international cooperation to save the day once a serious vulnerability comes into view.

The situation with preventive policing is similar in some respects. A highly sophisticated global panopticon can’t be conjured up overnight. It would take many years to implement such a system, not to mention the time required to build political support. Yet the vulnerabilities we face might not offer much advance warning. Next week, a group of academic researchers could publish an article in Science explaining an innovative new technique in synthetic biology. Two days later, a popular blogger might write a post that explains how the new tool could be used by anybody to cause mass destruction. In such a scenario, intense social control might need to be switched on almost immediately. It would be too late to start developing a surveillance architecture when the specific vulnerability became clear.

Perhaps we could develop the capabilities for intrusive surveillance and real-time interception in advance, but not use those capabilities initially to anything like their maximal extent. By giving civilisation the capacity for extremely effective preventive policing, at least we would have moved closer to stability. But developing a system for ‘turnkey totalitarianism’ means incurring a risk, even if the key isn’t turned. One could try to mitigate this by aiming for a system of ‘structured transparency’ that builds in protections against misuse. The system could operate only with permission from multiple independent stakeholders, and provide only the specific information that’s legitimately needed by some decision-maker. There might be no fundamental barrier to achieving a surveillance system that’s at once highly effective and resistant to being subverted. How likely this is to be achieved in practice is of course another matter.
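What might ‘structured transparency’ look like in practice? A minimal sketch, assuming a hypothetical quorum rule and field-level redaction – the stakeholder names and data fields are invented for illustration:

```python
# A minimal sketch of 'structured transparency': a record is released
# only with sign-off from multiple independent stakeholders, and only
# the specific fields a decision-maker legitimately needs.
# The quorum, stakeholder names and fields are hypothetical.

REQUIRED_QUORUM = 3  # e.g. court, oversight board, system operator

def release(record: dict, requested_fields: set[str],
            approvals: set[str]) -> dict:
    """Return only the requested fields, and only if enough
    independent stakeholders have approved the request."""
    if len(approvals) < REQUIRED_QUORUM:
        raise PermissionError("insufficient independent approvals")
    # Redact everything not legitimately needed for this decision.
    return {k: v for k, v in record.items() if k in requested_fields}

record = {"name": "…", "face_id": "…", "location": "…", "activity": "…"}
print(release(record, {"activity", "location"},
              approvals={"court", "oversight_board", "operator"}))
```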

Given the complexity of these potential general solutions to the risk of a technological black ball, it might make sense for leaders and policymakers to focus initially on partial solutions and low-hanging fruit – patching up particular domains where major risks seem most likely to appear, such as biotechnological research. Governments could strengthen the Biological Weapons Convention by increasing its funding and granting it verification powers. Authorities could step up their oversight of biotechnology activities by developing better ways to monitor scientists and track potentially dangerous materials and equipment. To prevent do-it-yourself genetic engineering, for example, governments could impose licensing requirements and limit access to some cutting-edge instruments and information. Rather than allowing anybody to buy their own DNA synthesis machine, such equipment could be limited to a small number of closely monitored providers. Authorities could also improve whistleblower systems, to encourage the reporting of potential abuse. They could admonish organisations that fund biological research to take a broader view of the potential consequences of such work.

Nevertheless, while pursuing such limited objectives, one should bear in mind that the protection they offer covers only special subsets of scenarios, and might be temporary. If you find yourself in a position to influence the macroparameters of preventive policing or global governance, you should consider that fundamental changes in those domains might be the only way to stabilise our civilisation against emerging technological vulnerabilities.

This article draws on the paper ‘The Vulnerable World Hypothesis’ (2019) published in the journal ‘Global Policy’.

 

Aeon
