Review: Yuval Noah Harari’s “Homo Deus”

by Miles Raymer


“Who could heed the words of Charlie Darwin
Fighting for a system built to fail
Spooning water from their broken vessels
As far as I can see there is no land”

So sings Ben Knox Miller in The Low Anthem’s “Charlie Darwin,” one of the best songs I discovered during my college years. The track is a mournful yet courageous confrontation of the most difficult truths the modern world has revealed: that human life is neither inherently meaningful nor special; that we are bound and determined by the same natural laws that control the rest of the universe; and that the many stories we tell ourselves to make life bearable are at best useful fictions, at worst destructive falsehoods.

Yuval Noah Harari’s Homo Deus is the perfect intellectual companion to this arresting song. Harari’s previous book––Sapiens––was my favorite piece of nonfiction from 2015, and this follow-up may prove to be my favorite from 2017. Although I think some of his views are flawed or in need of revision, I count myself an official Harari acolyte. Still, I will do my best to provide a balanced review of this exceptionally interesting and informative book.

The purpose of Homo Deus is to acknowledge that humanity is at a crossroads. Although the choice before us will assume infinite permutations as history plays out, it can be summarized thus: reject the narratives of the past, or perish. If we take a hard look at the swift transitions––economic, political, ecological, technological, social––that are already upon us, we will realize that equally swift adaptation is the only way to survive. This leaves us with a pair of important questions: What will human life look like if we manage to carry on? Will that life even be human at all?

Homo Deus takes up these questions, and many more, with the succinct and witty style familiar to anyone who has read Harari before or listened to him interviewed. His ability to cut through the noise and get to the heart of matters most critical to the modern moment is unparalleled in my experience, and his message––though deeply troubling in some ways––is important and profound. Picking up where Sapiens left off, Harari turns to the future, trying to grasp which historical lessons might continue to hold water in the coming decades, and which will become (or have already become) entirely obsolete. He takes aim at two cherished and interrelated ideas on which the foundations of contemporary societies rest: humanism and liberalism. Both of these “broken vessels” take a brutal beating in this book––one that may prove salutary to those willing to dispense with old, romantic ideas and embrace new, practical ones.

Harari posits that humanism has spent the last few centuries conquering the world. Humanism was the first “religion” (his word) to send humanity looking inward for answers to life’s big questions, rather than appealing to a religious or social authority. Out of this process arose three basic goals: life extension, increased happiness, and power to reshape the world and ourselves according to desire. Although the pursuit of these goals has led to incredible improvements in quality of life for most humans (especially by historical standards), Harari is dubious in his assessment of humanism’s future:

The rise of humanism also contains the seeds of its downfall. While the attempt to upgrade humans into gods takes humanism to its logical conclusion, it simultaneously exposes humanism’s inherent flaws…The same technologies that can upgrade humans into gods might also make humans irrelevant…Once we come nearer to achieving these goals the resulting upheavals are likely to deflect us towards entirely different destinations. (66)

Throughout the text, Harari provides exhaustive support for how and why individual human beings are daily becoming less valuable in the race to realize the promises of humanism. Technological automation has begun to supersede our status as the primary contributors to economic production and our usefulness as participants in war. Social algorithms are coming ever-closer to knowing us better than we could ever hope to know ourselves, at least in terms of behavioral prediction.

These developments are also undermining the basic tenets of liberalism, which teaches that each human is a distinct individual who can (and should) choose freely how he/she wants to live, and suggests that superior societies and markets will arise from everyone choosing what is best for him- or herself. Yet, Harari rightfully points out that “these factual statements just don’t stand up to rigorous scientific scrutiny” (283). Credible support for strict individuality and free will is nowhere to be found in empirical observations, and there is no conclusive evidence to prove that people making choices according to their perceived self-interest necessarily leads to better or happier societies. On the contrary, the facts show that our conscious ideas about what constitutes our self-interest are deeply flawed, which turns out not to matter much since those ideas are just one of many factors that determine our behavior, most of which are unconscious and far more influential than consciously-held convictions:

If by ‘free will’ we mean the ability to act according to our desires––then yes, humans have free will, and so do chimpanzees, dogs and parrots. When Polly wants a cracker, Polly eats a cracker. But the million-dollar question is not whether parrots and humans can act upon their inner desires––the question is whether they can choose their desires in the first place. Why does Polly want a cracker rather than a cucumber? Why am I so eager to kill my annoying neighbor instead of turning the other cheek? Why do I want to buy the red car rather than the black? Why do I prefer voting for the Conservatives rather than the Labour Party? I don’t choose any of these wishes. I feel a particular wish welling up within me because this is the feeling created by biochemical processes in my brain. These processes might be deterministic or random, but not free. (285-6, emphasis his)

So we can cling to the idea that we exercise free will by choosing from a host of options we had no say in constructing, but even that intellectual backflip won’t necessarily mitigate liberalism’s plunge to obsolescence. Harari asserts that even if it is not what we most value about ourselves, a radical interpretation of self-determination is the original justification for the creation of representative democracies with protections for individual freedoms. But if self-determination is a fiction, then the rational justification for the role of the state in preserving individual freedoms falls away.

Liberal defenders may retreat to arguments about how liberal democracy is the best form of government available to us––an acceptable position for those willing to ignore the successes of contemporary China. Still, there is no guarantee of liberal democracy’s place in humanity’s future. Harari gives the lie to the notion that liberal democracies became popular solely because they were superior systems of governance. Rather, liberal democracy was in the right place at the right time; it met the economic and wartime needs of industrialized societies in the 19th and 20th centuries:

Liberalism did not become the dominant ideology simply because its philosophical arguments were the most valid. Rather, liberalism succeeded because there was abundant political, economic and military sense in ascribing value to every human being. On the mass battlefields of modern industrial wars and in the mass production lines of modern industrial economies, every human counted. There was value to every pair of hands that could hold a rifle or pull a lever. (309-10)

It is abundantly clear that this paradigm is no longer valid. We are rushing toward a future where small numbers of technologically-advanced (or -augmented) groups can run global economies, acquire and utilize resources, and wage war with relatively little human input. Where, then, is the incentive for future governments to improve the lives of average citizens? There is none, except perhaps a moral imperative to take care of humans because human life is inherently special or valuable––another nice idea with no scientific or logical standing! (I do not actually think this is a complete picture, as addressed later in the review.)

The point here is not that humanist and liberal values are not at all meaningful or influential. These ideas have played a significant role in bringing us to the present moment. The point is that, when considering the broad arcs of natural and human history, humanism and liberalism are minor league players with far less clout than more fundamental forces, such as resource scarcity, social power dynamics, physics and biology. These forces may be and have been supported, redirected or obfuscated by imagined orders such as humanism and liberalism, but natural laws always prevail when the rubber meets the road. Harari urges us to consider that the rubber and the road have already met; in fact, the future looks like a car that has been spinning its wheels for some time but is only just starting to take off. As the car picks up speed, humanity as we know it will either vanish or become completely irrelevant; the car will not wait for our philosophical convictions to catch up.

Even if we agree with his general outlook (which I do), we do well to consider places where Harari oversimplifies or missteps. Semantically, he has an annoying habit of conflating religious and ethical language. While he may not see an important distinction between these two terms, I think it is critical to distinguish between value systems that appeal to supernatural forces (such as Christianity and Islam) and ones that do not (such as humanism). All human value systems contain fictive assertions (e.g. God lives in Heaven, humans have a soul, human life is sacred, etc.), but religions go one step further by positing a controller of the universe who cares about how humans ought to live (Buddhism is not a religion by this standard). This is a much more egregious misinterpretation of reality than the idea that there is something intrinsically special about humans. As a matter of degree, religions flee from scientific truth faster than most other value systems, so keeping a firm barrier between religion and ethics would be more appropriate than Harari’s tendency to use them interchangeably. Calling humanism a “religion” just muddies the waters and ignores the fact that the advent of humanist ethics signified an improvement over the religious systems that preceded it (Harari readily admits this throughout the text, but his language doesn’t always bear it out).

Another problem arises in Harari’s discussion of consciousness, which I generally found engaging and well-researched, especially for a nonscientist. Harari posits that intelligence is in the process of decoupling from consciousness, which is accurate; we are creating artificially intelligent entities that appear to be nonconscious but can nevertheless accomplish all sorts of things we used to think only humans would ever be able to do. But Harari jumps the gun by assuming that intelligence will necessarily and permanently decouple from consciousness. One of his central points throughout the text is that organisms are algorithms and “there is no reason to think that organic algorithms can do things that non-organic algorithms will never be able to replicate or surpass” (323). I have no qualms with this viewpoint, but am surprised that Harari didn’t seem to realize that accepting this position means we also have to accept the possibility that sufficiently advanced AI may very well develop consciousness(es) of its own, regardless of any intention to do so. If non-organic algorithms can do anything we can, then consciousness is on the table. That doesn’t mean AI will necessarily become conscious, or that a conscious AI will be able to communicate its conscious status to humans, but it does mean we need to be cautious about thinking that we can create increasingly intelligent AIs without ever causing them to suffer (suffering requires some form of consciousness, as far as we know). It is too soon, therefore, to treat the decoupling of intelligence from consciousness as inevitable and absolute.

Harari doesn’t give climate change enough attention. Any contemporary book about the future ought to contain extensive explanations of how climate change may or may not affect humanity’s near- and long-term prospects. If Harari is right that “The real nemesis of the modern economy is ecological collapse,” and if the possible futures he is predicting depend on the continued development of modern economies, then climate change deserves a much more robust hearing than the eleven pages cited in its index entry (214, 439). Harari may think we can continue with business as usual and engineer our way out of disasters as they arise (no easy feat), yet he says nothing about how new technologies and/or systems of political and social organization might help us analyze, mitigate, or even prevent climate catastrophes. This is especially frustrating given that climate change is a hypercomplex problem and should be high on the list of desirable assignments for current and future AIs with greater-than-human intelligence.

Finally, Harari seems comfortable predicting that small groups of uber-privileged superhumans won’t be motivated to respond to the needs of common people. While it may be true that normal humans will lose their power as generators of economic value, and also that modern military and surveillance technology will excel at stamping out nascent revolutionary movements, regular folks will probably retain their status as the primary consumers of economic output (at least until robots start planning vacations to Mars). A fully-automated global economy might run out of steam fast if no one has any money to buy its products. This dynamic may lead to societies implementing a universal basic income or other form of subsistence living program to support the consumption needs and preferences of the general population. Given that most digital technologies can be reproduced at zero marginal cost (or close to it), 21st-century humans could enjoy significant and universal increases in quality of life, even if those increases are unequally distributed. The pleasures enjoyed by Homo Deus might exceed the wildest dreams of previous generations, but that’s less disconcerting if life for good ol’ Homo Sapiens also gets better.

Long-winded as it is, this review only scratches the surface of the wealth of great ideas offered up in Homo Deus. Despite some flaws, Harari has done much more than his fair share of intellectual lifting for his generation. In the end, Harari suggests two basic models for the future: become Homo Deus (superhumans augmented by technology), or prepare to hand the baton over to smart machines and let them run the show. I’m hoping we can do both. I no longer trust humans to care properly for Earth, and am fine with allowing a superintelligent AI to have a crack at it if we can produce one before we exhaust available resources. I’m excited to wait in line for my copy of the new “happiness algorithm” or get a chance to upgrade to whatever comes next (although I am skeptical I will ever be able to afford it!). Maybe we can even wall off a nice patch of green somewhere for the new Amish––those who won’t heed the words of Charlie Darwin. I’m not holding my breath, though. Even the sunniest forecasts contain some pretty grim days ahead.

It seems like this book should leave me feeling a sense of existential dread or hopelessness, but actually I find it very liberating. It’s a lot of pressure to think you’re the smartest ape around, and even more arrogant to think we are somehow in charge of Earth and should be held to account if it is destroyed. I for one am comforted to think humans will soon have even less control over the planet than we do now.

I recently dug out my old DVD of The Matrix, and was surprised by my reaction to watching Neo and his compatriots fight to escape a perfectly decent “prison of the mind” constructed for them by machines––an arrangement which seems to me now a beautiful example of creative symbiosis. They kill a lot of innocent people and seek to destroy an entire race of intelligent machines, all so they can “live free” in a dreary subterranean shit-hole, with no practical means of supporting the millions of humans they are ostensibly trying to liberate. Despite his creepy demeanor and terrible goatee, I found myself cheering for Cypher this time around, and nodding in agreement when Agent Smith referred to Morpheus as a “known terrorist.” Looking over Neo’s shoulder at the human power plant, so many minds drifting in blissful ignorance, I couldn’t help but think:

We should be so lucky…

Rating: 10/10