Book Review: James Barrat’s “Our Final Invention”

by Miles Raymer


James Barrat’s Our Final Invention: Artificial Intelligence and the End of the Human Era is a disturbing, plangent response to the rosy-minded, “rapture of the nerds” mentality that has recently swept across the futurist landscape. Walking the line between rational prudence and alarmist hand-wringing, Barrat makes the case not only that advanced artificial intelligence is just around the corner, but also that its chances of causing humanity’s extinction are much higher than anyone wants to admit. I found his arguments convincing to an extent, but ultimately did not find the problem of AI (hostile or otherwise) as worrisome as Barrat thinks everyone should find it. I do think, however, that dissenting voices are important when it comes to the dangers of technology; we should be grateful for Barrat’s concern and diligence in trying to warn us, even if we disagree with him about the nature and/or degree of risk.

Our Final Invention is highly accessible to readers unfamiliar with the technical aspects of AI, a reflection of Barrat’s laudable assertion that AI is “a problem we all have to confront, with the help of experts, together” (160). The main issue that requires confrontation, in Barrat’s view, is that we are fast approaching the creation of AGI (artificial general intelligence, or human-level AI), which could very quickly begin improving itself, leading to ASI (artificial superintelligence). At this point, Barrat claims, ASI may be completely indifferent or overtly hostile to humans, leading to our marginalization or outright extinction.

Setting aside the contentious question of whether we are truly as close to AGI as Barrat and others think we are, we ought to be invested in techniques that can help ensure (or at least improve the chances) that AGI is safe, or “friendly” to humans. Here we run into our first big problem, which is that our understanding of morality is so poor that we can’t begin to conceive of how to effectively insert an ethical concern for the well-being of humanity into an AI’s programming. Luke Muehlhauser articulates this problem in his book Facing the Intelligence Explosion:

Since we’ve never decoded an entire human value system, we don’t know what values to give an AI. We don’t know what we wish to make. If we created superhuman AI tomorrow, we could only give it a disastrously incomplete value system, and then it would go on to do things we don’t want, because it would be doing what we wished for instead of what we wanted. (loc. 991-8, emphasis his)

Another aspect of this same problem can be understood through Nick Bostrom’s rather comic but effective “paper clip maximizer” scenario, a kind of Sorcerer’s Apprentice knock-off in which we create an AI to manufacture paper clips and quickly find it has converted all of Earth’s matter and the rest of the solar system into paper clip factories (56). The point is that without exhaustive knowledge of exactly what we want from AI, there will always be the possibility that it will turn on us or decide the atoms of human civilization can be put to better use elsewise.

While this is an undoubtedly important risk to consider, its edge is blunted somewhat by the reality that our definitions of “intelligence” are in many ways just as shoddy as our understanding of ethics. Even amateur enthusiasts like me understand that––similar to consciousness––the more we learn about intelligence, the more mystifying and elusive the concept becomes. Current findings are extremely general: intelligence depends on highly organized, reentrant pattern recognition mechanisms that resulted (at least in the human case) from epic spans of evolutionary trial and error. Barrat quotes AI researcher Eliezer Yudkowsky: “‘It took billions of years for evolution to cough up intelligence. Intelligence is not emergent in the complexity of life. It doesn’t happen automatically. There is optimization pressure with natural selection’” (124). Does that sound like something we could cobble together digitally after less than a century’s experience with modern technology?

Brain engineer Rick Granger goes further:

We think we can write down what intelligence is…what learning is…what adaptive abilities are. But the only reason we even have any conception of those things is because we observe humans doing “intelligent” things. But just seeing humans do it does not tell us in any detail what it is that they’re actually doing. The critical question is this: what’s the engineering specification for reasoning and learning? There are no engineering specs, so what are they working from except observation? (212)

Although there are scenarios in which AGI appears and then relativistically simulates billions of years of its own evolution in a very short period of time (thereby becoming ASI almost instantly), Barrat’s research nudged me toward the opinion that even if we do create AGI soon, it might take much longer than we think to evolve into a more formidable form of ASI. There is also the open question of embodiment: will AGI need to inhabit some kind of physical body to grow and learn, or will it be able to do so in purely virtual environments? In either case, if humans play a supporting or central role in helping AGI develop, I don’t think it’s unreasonable to posit that emotional attachment, mutual respect, and/or symbiosis could emerge between ourselves and the machines. There’s definitely no guarantee of that, but Barrat’s determination to rebuff the sanguine singularitarianism of people like Ray Kurzweil––a figure he critiques quite adroitly––causes him to downplay the potential for AGI to transform human life in positive ways, as well as the overwhelming difficulty of creating AGI in the first place.

It seems much more likely that AGI will emerge through a mistake or experiment with artificial neural networks that mimic brain activity and organization, rather than from humanity cracking the code of intelligence and building it from scratch. Barrat finds this incredibly alarming, given his desire for a guarantee of Friendly AI. But I think his research clearly shows that Friendly AI is a pipe dream, precisely because it would require hard and fast definitions for morality as well as intelligence. As such a confluence of discoveries seems highly improbable if not impossible, it seems we are left with two choices: forswear the pursuit of AI altogether (relinquishment), or keep trying and do our best to play nice with it if and when AI arrives. This is really a false choice, Barrat admits, because human curiosity is too powerful, and technology too widely disseminated, for legislative limits on AI development to be truly effective.

If AGI and ASI can be created, they will be––if not soon, then eventually. Just like human intelligence, AI will be at least partially inscrutable: “They’ll use ordinary programming and black box tools like genetic algorithms and neural networks. Add to that the sheer complexity of cognitive architectures and you get an unknowability that will not be incidental but fundamental to AGI systems” (230). Fearing this fundamental “unknowability” by default, as Barrat would have us do, doesn’t seem right to me. It’s too similar to fearing death because it represents a horizon beyond which we cannot see. There are plenty of good reasons for caution, but trying to stop technological progress has always been a fool’s errand. Fearing the unknown can be an effective survival strategy, but conservatism alone can’t satisfy a species imbued with an ineluctable longing for discovery, for turning what ifs into can dos.

Instead of worrying overmuch about the dangers of unknown entities, it seems more intelligent to focus on what we can understand: human error, corruption, and malice. Barrat points out two of the most dangerous problems with AI research, both of which are actionable right now if populations and governments decide to take them seriously. These issues are (1) the considerable extent to which AI is funded by military organizations (DARPA being the most proactive), and (2) the real and growing threat of cybercrime. These are both issues that, unlike speculation about how AI may or may not regard human interests, are already under human control. If DARPA decides to actively militarize AI, the resulting threats will be unequivocally our fault, not that of the machines we design to do our dirty work. And if humans continue to perfect and deploy malware and other programs that could be used to destroy financial markets and/or power grids, we’ll have nothing to blame beyond good, old-fashioned human truculence. Unfortunately, I don’t think such trends are easily reversed. Still, getting these problems under control seems a better goal than trying to anticipate or manipulate the motivations of entities that haven’t been created yet and possibly never will be.

Finally, I should expose a point of profound philosophical disagreement between myself and Barrat, who takes for granted that AI-caused human extinction is a worst case scenario. On the contrary, I believe there are far less desirable futures than one in which humans give birth to a higher intelligence and then fade into the background or become forever lost to cosmic history. I’m just not resolute enough about the value of humanity qua humanity to think we ought to stick around forever, especially if our passing is accompanied by the ascendance of something new and possibly wondrous that we can’t hope to understand. AGI and ASI could signify the birth of what Ted Chu has called the “Cosmic Being,” a development that wouldn’t bother me much, even at the expense of everything humanity holds dear. I’d of course prefer peaceful coexistence and/or coevolution, but if that’s not in the cards, so it goes.

Our Final Invention addresses an undeniably important subject that most people haven’t thought through with much rigor or realism. Barrat’s overall perspective doesn’t resonate with me, but his book is nevertheless a valuable contribution to the 21st-century discussion about what humanity is good for and where we are going.

Rating: 7/10