
e-Salvation

What Technology Wants
By Kevin Kelly
(Viking, 406 pp., $27.95)
 

Kevin Kelly, the éminence grise of Silicon Valley, holds the odd job title of “senior maverick” at Wired magazine, enjoying a cult following among thousands of geeks around the globe. Having previously written about biology, the global economy, and Asia, Kelly spent the last six years thinking about the wants and needs of technology, reading “almost every book on the philosophy and theory of technology” and interviewing “many of the wisest people pondering the nature of this force.” (As his bibliography attests, most of these “wisest people” are inventors, investors, and a handful of scientists.) Kelly published his resulting ideas in real time on one of his eight blogs, posing provocative questions to his readers and receiving hundreds of responses. He did produce a printed book in the end, but, as he warned in a recent blog post, What Technology Wants may be his last experience with “paper-native” books.

How can technology want anything? Kelly’s provocation is not as kooky as it seems. He does not claim that human-made artifacts—spoons, fax machines, iPads—have wants in the same way that human beings have wants. He argues, less crudely, that once we add all of these artifacts together, they acquire collective properties that may not be present in the artifacts themselves. Just as most of us tacitly accept that markets may “want” things that are not wanted by any of the market participants, we should also entertain the possibility that Technology with a capital “T” may have wants that are not present in individual technologies. Kelly believes that “Technology” gives rise to a “network of self-reinforcing processes,” is shot through with feedback loops, and exhibits a considerable degree of autonomy that is not present at the micro-level of individual technologies. To describe the macro level, the mechanized and electronic sphere of being composed of the entirety of technology, Kelly coins a new word: “the technium.” The technium is “the accumulation of stuff, of lore, of practices, of traditions, and of choices that allow an individual human to generate and participate in a greater number of ideas.”

Armed with his theory of the technium, Kelly eagerly applies it to the explanation of the social world. Taking stock of human development since the Big Bang, he concludes that the technium has played a crucial role in “domesticating” humans, making our lives longer and more pleasant. There have been downsides, too—but overall life has been getting better, and “so far the gains from this ever-enlarging technium outweigh the alternative of no machine at all.” Mankind should learn to trust the technium, and think carefully about its future forms and expressions, and avoid erecting too many obstacles in its way, not least because humans are incapable of grasping the technium’s numerous interconnections and manifestations. It is bigger than us.

It is impossible to assess the originality of Kelly’s argument without first assessing the originality of his concept of the “technium.” Kelly had other terms at his disposal—technology, industry, culture—and he rejected them all in favor of what he regards as a more ambitious and comprehensive term. Kelly has also explicitly rejected the German term Technik and the French technique (which have been previously deployed by many European critics of technology), arguing that these concepts do not capture “the essential quality of the technium”: that it is “a self-reinforcing system of creation.” Unfortunately, in his quest to claim new territory, Kelly runs roughshod over the history of social thought about technology. The original German term Technik does indeed capture most of what Kelly has to express. The only theoretical innovation that Kelly adds to the concept of Technik is his eccentric attempt to link it to evolutionary biology—a move that eviscerates the “technium” of much of its critical value.

A digression into the linguistic history of technology and Technik is in order. (This history was masterfully retold by Eric Schatzberg in a couple of academic papers that Kelly seems to have missed.) For much of the nineteenth century, the English word “technology,” just like the French and German “technologie,” denoted a branch of knowledge—a science—that studied industrial arts and crafts; “technology” did not refer to those arts and crafts themselves, as it does today. Technology was much like chemistry: it was a field of study, not its object. It was in nineteenth-century Germany, which was undergoing massive industrialization, that intellectuals and engineers alike began using another term—Technik—to describe all the arts of material production, conceived now as a coherent whole. Technik was increasingly invoked in opposition to Kultur, with many German humanist intellectuals of the time being highly critical of the growing mechanization and dehumanization that pervaded the industrialized society.

In the early years of the twentieth century, the German debate about Technik made its way into America, when Thorstein Veblen discovered some of the key German texts and incorporated them into his own thought. But Veblen chose to translate the German Technik as “technology,” most likely because by that time the English word “technique,” the more obvious rendering, had already acquired its modern meaning. To his credit, Veblen’s “technology” preserved most of the critical dimensions of Technik as used by German thinkers; and he masterfully located it within contemporary debates about capitalism and technocracy. Other American intellectuals, while following Veblen in using “technology” to mean Technik, soon dropped this critical dimension, settling on a more politically correct and progress-friendly meaning of “technology.” When, in 1926, Charles Beard famously proclaimed that “technology marches in seven-league boots from one ruthless, revolutionary conquest to another,” he gave the term “technology” its modern meaning, severing Veblen’s connection to the critical theories of Georg Simmel and Werner Sombart. At the same time, the Technik vs. Kultur debate in Europe continued, with Martin Heidegger, Ernst Jünger, Hans Jonas, and Jacques Ellul producing penetrating critiques of how Western society had become dominated by Technik/technique and was therefore losing its moral bearings. Most of these thinkers posited the growing autonomy of technology—including the self-reinforcing behavior of the system that Kelly emphasizes—and they found this prospect terrifying.

Kelly does not mention any of this, which makes his thesis about the autonomy of technology look more original than it actually is. Had the English language pulled the same trick on “Technik/technology” as it did on “economy/economics,” there would be far less analytical confusion about “what technology wants.” As Kelly concedes in his bibliography, the idea of autonomous technology is very old, and its influence on social thought was already documented in 1977 in Autonomous Technology: Technics-out-of-Control as a Theme in Political Thought by Langdon Winner. Kelly’s book is deeply derivative of Winner’s work: it’s not only that Kelly steals some catchy phrases (like the odd quotation from Valéry), but that he also borrows entire arguments. One of his central arguments to prove the autonomy of technology—that many inventions are discovered simultaneously—was first used by Jacques Ellul, whose ideas occupy the lion’s share of Winner’s book. Coining a buzzword does not establish one’s intellectual originality.

In dismissing existing theories of technology as inadequate, Kelly wishes to recast the Technik vs. Kultur debate in biological terms that would be familiar to readers of Richard Dawkins, Susan Blackmore, and Steven Pinker. He leaves few doubts about his views on human nature: “humans are the reproductive organs of technology”; what we do is “multiply manufactured artifacts and spread ideas and memes.” Not surprisingly, such a simplistic conception of human nature pushes Kelly to come up with a very shallow answer to the question posed in the title of his book: technology wants the same things as evolution. On Kelly’s account, it wants to become more complex and more diverse as well as to promote progress.

 

His years of employment as a senior maverick have induced a severe form of the Maverick Complex in Kelly: he exhibits an annoying habit of identifying contrarian threads in modern physics and biology, proclaiming them to be true—unlike all those “orthodox” theories dominating modern science—and then using them as unassailable building blocks for his highly speculative theories about technology. At times his theory of technology reads simply like an overheated congeries of hot new ideas that are just too mavericky to fit into other disciplines. The number of scientists who believe that there is a direction in which evolution is moving, that this direction is “progressive,” and that it also breeds diversity and complexity along the way, is smaller than Kelly lets on. It is puzzling why he chose to make his theory of technology highly contingent on new discoveries in evolutionary theory. Pending such discoveries, Kelly’s theory of technology is of little use, for his account of evolution—based on the recent thought of several renegade scientists—would not survive peer review in most serious academic journals.

But even if modern science is wrong and biological evolution does have a bias towards progress, complexity, and diversity, it still does not follow that “with minor differences, the evolution of the technium—the organism of ideas—mimics the evolution of genetic organisms.” To establish that such a similarity exists, Kelly needs to show that the evolution of technology contains structural counterparts to the widely accepted individual elements of the Darwinian scheme of biological evolution: variation, selection, and retention. But instead of operating with attributes of evolution that are agreed on by even the most renegade scientists, Kelly seeks to establish the similarity between the two kinds of evolution by reference to attributes—complexity, diversity, progress—that are not widely accepted. Simply put, if the evolution of the technium is just an extension of biological evolution, then the former should also retain the most distinctive features of the latter: the evolution of particular technologies should be subject to the same rules that guide the evolution of species. But this is not the case: while the fate of some technologies can be studied by reference to natural selection or variation, there are usually far better and more rigorous ways to explain what happened.

Consider the QWERTY keyboard layout. Is it the best that mankind can deliver? Clearly not: compared with QWERTY, the alternative Dvorak layout uses less finger motion, increases typing rate, and reduces errors. The reason why QWERTY triumphed over Dvorak has nothing to do with fitness: it was that it appeared on the market earlier and had more influential backers. By the time the Dvorak option was available, many organizations were already using QWERTY and feared that retraining their staff would be too expensive. To understand why an inferior technology such as QWERTY prevailed over Dvorak, then, we need look to economics, not to evolutionary biology. But for Kelly none of this matters: he requires his evolutionary metaphors to explain only why modern keyboards have so many buttons and blinking lights (“complexity”), not to tell us why some keyboard layouts prevail over others (“natural selection”). The complexity angle is certainly interesting—but why does it require the theory of evolution? Except, of course, that these days almost everything does. Kelly’s flirtation with evolutionary biology is an unsuccessful attempt to confer scientific prestige upon his thoughts about technology.

 

There is barely any mention of politics in Kelly’s book, but its prescriptive agenda is deeply political, and it is firmly rooted in conservative thought. From Hayek, Kelly borrows his key insight that the technium is the product of spontaneous order, which has emerged thanks to a long evolutionary process that humans cannot understand or direct. Not surprisingly, Hayek, too, championed the idea that biological and cultural evolution are alike, proclaiming that “the theory of evolution of traditions and habits which made the formation of spontaneous orders possible stands ... in a close relation to the theory of evolution of the particular kinds of spontaneous orders which we call organisms.”

The problem with applying such thinking to technology (and to culture in general) is that most technological institutions and phenomena are simply too recent to have been subject to the evolutionary test. It is problematic to assume that any institutions that have survived such a test make a “useful” contribution to society, the way they would if they survived in an organism. Their survival may be due to their having more powerful backers rather than to their always leading to socially optimal outcomes (as one would expect in biological evolution). An overzealous application of evolutionary theories to explain social institutions is likely to lead to a deeply conservative outlook, whereby all existing institutions are presumed to be optimal and their functions socially beneficial. Thus, when Kelly writes that “the nature of technology is inherently prolife,” this complacent bit of vitalist rhetoric is just an attempt to justify existing social and political structures by invoking the language of biology and evolution. Likewise, to claim that “the technium wants what evolution began” is to write off the externalities of the technium’s ever-expanding desires—pollution or global warming—as merely the natural consequences of the laws of physics and biology.

The moment one adopts the view of technology as a “super-organism,” it is impossible not to arrive at the deeply conservative prescription that this super-organism is best left alone and not tinkered with. “Forecasting consequences in a technium where millions of new ideas are introduced each year becomes mathematically intractable,” Kelly remarks. For him, this seems like a good enough argument not to intervene—at least until we know more about the full spectrum of events caused by a technology. But Kelly is not content with his neo-Hayekian pledge to leave the technium to itself for the practical reason that we do not know how to run it. He also wants to argue that this is the right thing to do, since leaving technology to itself would also give us more choices—and who would not want more choices? For Kelly, the multiplication of choices is the summum bonum, and the natural metric to measure progress, because the expansion of choices is what drives the cosmos itself. “While we amass possibilities, we do so because the very cosmos itself is on a similar expansion”: this is not merely an egregious scientism; it is also a kind of New Age obscurantism.

Kelly frequently uses arcane references to “the cosmos” in order to justify significant modifications to our social and political systems that would better suit the wants of the technium, even suggesting that “we can choose to modify our legal and political and economic assumptions to meet the ordained trajectories ahead.” We are in Huxleyan (Aldous, not T.H.) territory here. But how to remedy the situation if the trajectories ahead prove to be something other than ordained? Is Facebook’s attempt to redefine the notions of public and private really an “ordained trajectory” that must be accepted gratefully as a technological law of history that requires us to change our legal assumptions to meet it? Or is it just a corporation’s attempt to maximize its profit, with little or no regard for how its actions affect public life? Kelly’s high-flying theories often sound like shilling for the relevant businesses. The people whom Kelly lists as his friends are the ones who stand to benefit the most from his uncritical and laissez-faire approach to technology: all those “folks inventing supercomputers, genetic pharmaceuticals, search engines, nanotechnology, fiber-optic communications.”

Kelly goes as far as to define technology itself as “that which increases options,” in the belief that this shows that technology is, on the whole, a good thing. In reality, we also need to examine whose options are being boosted, and why. Consider the case of Kenya, where, thanks to mobile phones, policemen can now accept bribes in virtual currency rather than in cash notes, making it harder for anti-corruption authorities to catch them. Technology has indeed given these corrupt policemen more choices—but this alone tells us little about technology’s impact on the welfare of Kenyan society, and suggests that machines are neither good nor bad, but morally multivalent. To grasp the full impact of technology upon a particular society, we need to start with a theory of society and politics that has a few more variables than the number of consumer choices. And that is impossible to do without first developing some rudimentary theory of values—and no such theory is possible with “complexity” and “diversity” as its only inputs, its only goods.

The larger problem with Kelly’s definition is that it simply does not help to deal with most of the actual technological problems facing us today. In some sense, one could always make the argument that nuclear weapons, unmanned drones, or cognitive enhancement drugs offer us “more choices”—but that is probably the least interesting thing to say about those technologies. Kelly’s attempt to limit the debate about the social and political impact of technology only to the kind of choices that they make possible stands in the way of cogent social criticism, and of clear deliberation about sound public policy. It may provide a framework for thinking about the ever-expanding choice in electronic toothbrushes and coffee machines, but that is all.

 

Kelly makes no effort to define what technology is not. His concept of the “technium” subsumes both ideology and culture, becoming some kind of a mega-concept in social theory—one that can explain everything and say nothing. Kelly posits that “a military laser and Gandhi’s act of civil disobedience are both useful works of human imagination and thus both technological”—but he never makes it clear what it is that we gain by lumping the two together. He likewise posits that “a Shakespeare sonnet and a Bach fugue ... are in the same category as Google’s search engine and the iPod.” He is an enthusiast, a holist, an eraser of important distinctions. As tempting as it might be to place technology at the center of a sweeping narrative about the history of the world, surely any account that treats ideas as “technologies” because they are products of the human mind tells us very little about ideas or technologies.

Is it really true that the “cyclotron of social betterment is propelled by technology,” and that “each rise in social organization throughout history was driven by an insertion of a new technology”? One could argue, I suppose, that the values of “liberty, equality, fraternity” were advanced by the technology of the guillotine—but this seems to be a very minor lesson to draw from the French Revolution. If, on the other hand, we were to treat the values of “liberty, equality, fraternity” as “technologies”—after all, they were thought up by a human mind—it is not clear what we gain by applying the same term to describe the guillotine. Kelly demonstrates by unwitting example why broadening the definition of technology is a bad idea: the technium does not reveal anything new about society, while it deprives us of the vocabulary in which to converse meaningfully about the spoons and the iPads.

What exactly is gained by assuming that the “overall diminishment [of slavery] globally is due to the technological tools of communication, law, and education”? To assume that technology is like law and law is like education is to assume that human agents no longer need to choose between technology and education. But this is rarely the case: the decision whether to invest in cheap laptops or cheap teachers is very real and faces most developing countries around the globe. That the technology of “teaching” has had a positive effect on learning outcomes may not justify putting more laptops—another set of “technologies”—into the classroom, especially when resources are limited. Often Kelly uses his over-expansive definition of technology to establish its positive credentials—when human rights treaties are presumed to be “technologies,” the moral glamour is easy to establish—but then he conveniently switches to a more conventional gadgets-only definition of technology when he enters into his prescriptive mode. His argument is something akin to: human rights, therefore full-body scanners.

Things do not follow as Kelly thinks they do. And the situation turns really comical once Kelly’s refusal to engage with the rich literature on the philosophy and sociology of technology pushes him to conclude that the person offering the sharpest critique of technology in our time is the Unabomber. Kelly praises his writings for exhibiting “surprising clarity” and dedicates a long and tedious chapter to his opinions, only to conclude that we have already embraced technology and we like it—so there is no point in debating whether we need to embrace it. That Kelly chose the writings of a deranged and murderous individual living in a hut as a starting point for his treatment of the philosophy of technology is quite symptomatic of his cheap penchant for the startling and the unexpected. Remarkably, the only major conclusion that Kelly draws from the chapter on the Unabomber is that “civilization has its problems, but in almost every way it is better than the Unabomber’s shack.” How many people believe otherwise?

Invariably, the result of Kelly’s theorizing is that deeply political and value-laden questions of power and ideology are recast in the apolitical language of technology. In some sense, Kelly’s theory suffers from the same problem that Marxist critics long ago identified in Jacques Ellul’s work on the autonomy of technology: it exonerates capitalism, and absolves powerful political and economic structures from the scrutiny they deserve. But there is also a crucial difference between Ellul and Kelly: the former was on a quest to recast the debate about the technological society in moral terms, while the latter is entirely satisfied with the technological vocabulary and seeks only to expand it with a few terms from biology. Ellul’s theory did not resolve any of the major controversies surrounding modern technology, but it was useful in highlighting the ethical void in the technological morality that was gradually replacing natural morality. Ellul did not challenge the fact that technology was making our life more comfortable and longer, but he thought that the price we pay for it was higher than commonly assumed. “We cannot believe that Technique brings us nothing; but we must not think that what it brings it brings free of charge,” he wrote in 1962.

Kelly’s project, by contrast, seeks to deepen the moral void—and to establish its normative character by claiming that it is propelled by the same forces as evolution. But can evolution really explain the plight of child laborers mining for cobalt—a key ingredient in batteries for mobile phones—in the Democratic Republic of Congo or Zambia? (According to a 2007 study by SwedWatch, a Swedish watchdog, there were some fifty thousand workers under the age of eighteen involved in this practice.) Is exploiting minors for cobalt mining something that technology wants, or is it something that certain businesses, here disguised under the innocent label of the “technium,” require? To claim that such processes follow the normal direction of evolution is to let the mining corporations off the hook far too easily.

To his credit, Kelly is aware that his analysis is completely lacking in any moral considerations. He explains this by claiming that issues of morality and values are not measurable, while “anything we can quantify has been getting better.” Leaving aside the questionable validity of this last claim—we can measure income inequality and it is not getting better in many developed countries—it is telling that Kelly so easily dismisses critiques of progress that derive from the realm of morality and philosophy. “Choices without values yield little, this is true; but values without choices are equally dry,” he observes in a typical passage. That the availability of more choices brought on by new technologies may be transforming old values and imposing new and less desirable values never bothers Kelly, or even occurs to him. When he writes of “lousy technologies,” what he means by “lousy” are technologies that are not yet efficient. But such thinking quickly runs into a moral abyss: it is not at all obvious that the answer to a lousy sex robot is to produce a more effective sex robot. A society cannot get very far in its regulation of technology by looking only at efficiencies and choices. Traveling through all that hard and immeasurable moral territory may not be to Kelly’s liking, but still it needs to be done—and it requires a theory of technology that goes beyond an Ayn Rand-like fascination with expanding our choices.

The main reason why Kelly wrote What Technology Wants became clear to me only after I looked at his review of his own book, which was conveniently published on one of his blogs:

Taken together these giga-trends inform the development of technology investment and the choice of technological expressions today. These “wants” of technology provide a long-horizon framework for business—your business. I’ll be doing as many talks at companies and organizations about “what technology wants” as I can in the coming months.

Kelly is not the first technology guru to make a living by selling advice to corporations. But it is hard to imagine the previous generation of serious thinkers about technology—the likes of Jacques Ellul and Lewis Mumford and John Dewey—moonlighting as corporate advisers to Danone and Halliburton. In contrast, most of today’s technology gurus—from Kevin Kelly to Clay Shirky to Douglas Rushkoff—take special pride in publicizing how deeply embedded they are in the very industry that they are supposed to scrutinize. Perhaps this is what technology wants.
 

Evgeny Morozov is a visiting scholar at Stanford University. He is the author, most recently, of The Net Delusion: The Dark Side of Internet Freedom (PublicAffairs). This article originally ran in the March 24, 2011, issue of the magazine.
