Moralizing Technology: Understanding and Designing the Morality of Things
By Peter-Paul Verbeek
(University of Chicago Press, 183 pp., $25)
JUST WEST OF SEOUL, on a man-made island in the Yellow Sea, a city is rising. Slated for completion by 2015, Songdo has been meticulously planned by engineers and architects and lavishly financed by the American real estate company Gale International and the investment bank Morgan Stanley. According to the head of Cisco Systems, which has partnered with Gale International to supply the telecommunications infrastructure, Songdo will “run on information.” It will be the world’s first “smart city.”
The city of Songdo claims intelligence not from its inhabitants, but from the millions of wireless sensors and microcomputers embedded in surfaces and objects throughout the metropolis. “Smart” appliances installed in every home send a constant stream of data to the city’s “smart grid,” which monitors energy use. Radio frequency ID tags on every car send signals to sensors in the road that measure traffic flow; cameras on every street scrutinize people’s movements so the city’s street lights can be adjusted to suit pedestrian traffic. Information flows to the city’s “control hub,” which assesses everything from the weather (to prepare for peak energy use) to the precise number of people congregating on a particular corner.
Songdo will also feature “TelePresence,” the Cisco-designed system that will place video screens in every home and office and on city streets so residents can make video calls to anyone at any time. “If you want to talk to your neighbors or book a table at a restaurant you can do it via TelePresence,” Cisco chief globalization officer Wim Elfrink told Fast Company magazine. Gale International plans to replicate Songdo across the world; another consortium of technology companies is already at work on a similar metropolis, PlanIT Valley, in Portugal.
The unstated but evident goal of these new urban planners is to run the complicated infrastructure of a city with as little human intervention as possible. In the twenty-first century, in cities such as Songdo, machine politics will have a literal meaning—our interactions with the people and objects around us will be turned into data that computers in a control hub, not flesh-and-blood politicians, will analyze.
But buried in Songdo’s millions of sensors is more than the promise of monitoring energy use or traffic flow. The city’s “Ambient Intelligence,” as it is called, is the latest iteration of a ubiquitous computing revolution many years in the making, one that hopes to include the human body among its regulated machines. More than a decade ago, Philips Electronics published a book called New Nomads, describing prototypes for wearable wireless electronics, seamlessly integrated into clothing, that would effectively turn the human body into a “body area network.” Today, researchers at M.I.T.’s Human Dynamics Lab have developed highly sensitive wearable sensors called sociometers that measure and analyze subtle communication patterns to discern what the researcher Alex Pentland calls our “honest signals,” and Affectiva, a company that grew out of M.I.T.’s Media Lab, has developed a wristband called the Q sensor that promises to monitor a person’s “emotional arousal in real-world settings.”
Now we can download numerous apps to our smartphones to track every step we take and every calorie we consume over the course of a day. Eventually, the technology will be inside of us. In Steven Levy’s book In the Plex, Google founder Larry Page remarks, “It will be included in people’s brains ... Eventually you will have the implant, where if you think about a fact it will just tell you the answer.” The much-trumpeted unveiling of the wearable Google Glass was merely the out-of-body beta test of this future technology.
Now that we can feasibly embed electronics in nearly any object, from cars to clothing to furniture to appliances to wristbands, and connect them via wireless signals to the World Wide Web, we have created an Internet of Things. In this world, our daily interactions with everyday objects will leave a data trail in the same way that our online activities already do; you become the person who spends three hours a day on Facebook and whose toaster knows that you like your bagel lightly browned. With the Internet of Things, we are always, and often unwittingly, connected to the Web, which brings clear benefits of efficiency and personalization. But we are also granting our technologies new powers to persuade or compel us to behave in certain ways.
TECHNOLOGY IN PRACTICE is nearly always ahead of technology in theory, which is why our cultural reference points for discussing it come from science fiction rather than philosophy. We know Blade Runner and not Albert Borgmann, HAL and not Heidegger. We could even view the city of Songdo through the lens of “The Life and Times of Multivac,” Isaac Asimov’s story, from 1975, about a supercomputer that steps in to run society smoothly after human missteps lead to disarray. But our tendency to look to fictional futurist extremes (and to reassure ourselves that we have not yet overstepped our bounds) has also fueled a stubbornly persistent fallacy: the idea that technology is neutral. Our iPhones and Facebook pages are not the problem, this reasoning goes, the problem is how we choose to use them. This is a flattering reassurance in an age as wired as our own. In this view, we remain persistently and comfortably autonomous, free to set aside our technologies and indulge in a “digital Sabbath” whenever we choose.
Yet this is no longer the case. As Peter-Paul Verbeek, a Dutch philosopher, argues, it is long past time for us to ask some difficult questions about our relationship to our machines. Technologies might not have minds or consciousness, Verbeek argues, but they are far from neutral. They “help to shape our existence and the moral decisions we take, which undeniably gives them a moral dimension.” How should we assess the moral dimensions of these material things? At a time when ever more of our daily activities are mediated by technology, how do we assign responsibility for our actions? Is behavior that is steered by technology still moral action?
Drawing on technology theorists such as Don Ihde and Bruno Latour, as well as the work of Michel Foucault, Verbeek proposes a “postphenomenological approach” that recognizes that our moral actions and the decisions we make “have become a joint affair between humans and technologies.” In this affair, human beings no longer hold the autonomous upper hand when it comes to moral agency; rather, Verbeek argues, we should replace that notion with one that recognizes “technologically mediated intentions.”
Such intentions are clear in the use of an older technology—fetal ultrasound—that has transformed our understanding and experience of the unborn child. As Verbeek notes, a technology originally devised to enhance our medical knowledge has generated unintentional but serious moral quandaries. “This technology is not simply a functional means to make visible an unborn child in the womb,” Verbeek argues. “It actively helps to shape the way the unborn child is humanly experienced.” That experience is now one of greater transparency and greater abstraction. We see the child as something distinct from its mother; the womb becomes an “environment.”
The technology fundamentally alters not only what we can see, but also the quality and the quantity of the choices we are asked to make. At a time when the vast majority of women choose to terminate Down syndrome pregnancies, for example, even the decision not to have an ultrasound to screen for possible birth defects stakes out a moral position, one that carries the implication that one is risking future harm to one’s child. In making these decisions, Verbeek argues, the mother is not the only autonomous actor; the technology itself “plays an active role in raising moral questions and setting the framework for answering them.”
That our machines might exert control over our moral decision-making is an unpopular idea. We like to think of ourselves as exercising autonomy over the things we create and the actions we take. Verbeek finds in our desire to cling to this notion a touching fidelity to the principles of the Enlightenment. Although he shares those principles, he no longer finds them sufficient grounds for moral thinking in an age of technologies as ubiquitous and powerful as ours. Verbeek wants us to “move the source of morality one place further along” from the Enlightenment’s emphasis on human reason to a system that grants equal weight to our technologies—the things, like ultrasound, that we increasingly rely on to understand ourselves and our world. Many technologists already embrace this idea: as Alex Pentland argues in Honest Signals, his book about sociometers, “We bear little resemblance to the idealized, rational beings imagined by Enlightenment philosophers. The idea that our conscious, individual thinking is the key determining factor of our behavior may come to be seen as foolish a vanity as our earlier idea that we were the center of the universe.”
THE NEED FOR A NEW “morality of things” can be seen most keenly in the field of persuasive technology. In 1997, B.J. Fogg founded a Persuasive Technology Lab at Stanford University; by 2007, he was inviting readers of one of his books to “think down the road fifteen years and imagine how this will work. Through mobile technology, insurance companies will motivate us to exercise, governments will advocate energy conservation, charities will persuade us to donate time, and suitors will win the hearts of their beloveds. Nothing can stop this revolution.”
Persuasive technologies take many forms, from cell phone apps to sophisticated pedometers, and use familiar strategies, such as simulations and positive reinforcement, to achieve their goals. The old behaviorist notion of “operant conditioning”—using positive reinforcement to change behavior—is evident in the micro-persuasion techniques used on websites such as Amazon and eBay, where reviewer ratings and consumer rankings encourage a sense of increased trustworthiness among users and where you are greeted by name on every page—all in an effort to persuade you to keep coming back.
Persuasive technologies take less subtle forms as well, such as the Banana-Rama slot machine, which features two characters, a monkey and an orangutan, who serve as a kind of virtual audience for the gambler, celebrating when she wins and offering surly and impatient expressions when she stops feeding coins into the machine. And they can act as efficient surveillance devices. “HyGenius,” a device marketed to restaurants and hospitals (and in use in many McDonald’s restaurants and in MGM Grand casinos), is placed in bathrooms so that employers can monitor (via embedded sensors in employee badges) whether or not their workers are properly washing their hands.
Arguably, our technological persuaders are more effective than human ones because they are devilishly persistent, can manage large volumes of information, and have long memories. One writer for Boston magazine who used her smartphone to help her lose weight and meet other life goals noted that, after downloading a handful of apps, “my phone became a trainer, life coach, and confidant. It now knows what I eat, how I sleep, how much I spend, how much I weigh, and how many calories I burn (or don’t) at the gym each day.”
Spend any time delving into the literature on persuasive technology, however, and you will find yourself encountering the language of seduction as often as persuasion. These technologies actively try to provoke an emotional or behavioral response from us, which can be a satisfying experience for people in need of motivation and encouragement. But technologies whose stated goal is mass interpersonal persuasion also raise important questions about privacy and autonomy. You might like a digital pedometer to track your daily walk, but how would you feel if your cell phone came equipped with a sensor that could tell when you were becoming sexually aroused and send a helpful text message reminding you to wear a condom?
TO UNDERSTAND such challenges, Verbeek would have us look to the design and the engineering of technological objects themselves. Much as Lawrence Lessig argued that the architecture of code was crucial to the creation of the Internet, Verbeek believes that all users of technology need to be more engaged in controlling how these technologies are designed and used. Human beings, he declares, should “coshape their technologically mediated subjectivity by styling the impacts of technological mediations.”
It is revealing that Verbeek lapses into the passive voice when the discussion moves in this direction. “Arrangements should be developed, therefore, to democratize technology development,” he writes. Well, yes. But by whom, and how? What would a democratically developed technology look like? If our experiences with privacy violations committed by companies such as Google or Facebook are any guide, individual users have very little power to “style” the impact that many technologies have on us. You cannot “coshape” an environment designed by others to prevent you from influencing it. As it becomes increasingly difficult to refuse certain technologies, you cannot even decide to opt out of these environments. How much “coshaping” can a food-service worker do when his employer issues him a badge that tracks the number of minutes he spent washing his hands after using the bathroom?
Verbeek also urges designers of these technologies to think through the intended and unintended consequences that are likely to arise from the use of their creations. “Deliberate reflection on the possible mediating roles of a technology-in-design should, therefore, be part of the moral responsibility of designers,” he argues. But the technologists who make these objects have little motivation to build in ethical safeguards or to relinquish control to users in the way that Verbeek encourages, and so far they have demonstrated little concern for the unintended consequences of their creations. Indeed, in an early book on the subject of persuasive technologies, B.J. Fogg admitted that the science of persuasive technology does not include “unintended outcomes; it focuses on the attitudes and behavior changes intended by the designers of interactive technology products.” As one participant in a Persuasive Technology Conference in Palo Alto in 2007 rightly observed, the field “has a tradition of being morally ignorant of its consequences.”
This exposes the central tension between ethicists like Verbeek and the technologists whom he wishes to influence. Verbeek wants technologists to design things with greater transparency—things that will, effectively, tell us what they intend to do before they do it. Such warnings, like the “advertisement” labels that appear across the top of a page in print magazines to flag content that is paid for rather than objective, would presumably allow us to make informed judgments about our use of technologies. The problem is that the goal of the technologists is to make these technological persuaders less transparent, not more so. Indeed, these technologies are admired for their invisibility.
One contributor to The New Everyday, a book about ambient intelligence, calls them “technologies of dematerialization,” and argues, “This is surely closer to what we want: knowledge, excitement, entertainment, education, productivity, social contact—without the obtrusive prominence of the technology that delivers them.” Like Facebook’s goal of “frictionless sharing,” persuasive technologies will be most persuasive when we no longer realize we are using them. As the technology recedes into the background, so too does our willingness to question it. The less these technologies look like ordinary “goods,” the safer they are from scrutiny.
And despite Verbeek’s thoughtful arguments for why it should, the conscience of this industry will not emerge from within. The manufacturers of these technologies are uninterested in grappling with the long-term moral and psychological implications of their creations, and the academics who might be relied upon to ponder those dimensions are themselves profitably employed, for large stretches of their time, by “industry partners.” Companies such as Philips underwrite persuasive technology conferences and fund their publications; laboratories engage in partnerships with technology companies to market the devices and software that emerge from their research. Although the field of persuasive technology is not entirely lacking in ethical moorings, the current of money flowing between industry and the academy suggests we should not put much trust in either group’s rather feeble reassurances about their intentions. We would not have embraced Ida Tarbell’s moral crusade if she had spent half her time profiting by consulting on the construction of Standard Oil’s refineries.
VERBEEK’S CONCERN about the morality of things is part of a larger debate about the limits of free will in an age when scientific and technological discoveries claim to offer new insights. We are told that our genes determine us, that our brains control us, that vestiges of our evolutionary biology mislead us. How do we define moral responsibility when neuroscientists claim that our unconscious mind is the prime mover of our behavior and software engineers remind us that their algorithms are superior to our intuition? In this environment, self-awareness is best achieved through the analysis of data. Computational calisthenics and technological wizardry, not contemplation, offer the correct path.
Technology has helped man control his natural environment. Now it is human nature—with its irrational urges and bodily demands and quixotic extremes that have challenged philosophers, theologians, and artists for centuries—which technologists confidently propose to subdue. As Stanford’s Persuasive Technology Lab website declares, “Our goal is to explain human nature clearly and map those insights onto the emerging opportunities in technology.”
Although they are often wrapped in ambitious language, these new technologies seduce by invoking something far more banal: the language of self-improvement. This is one of the reasons we have not, as yet, engaged in serious public discussions of their likely consequences akin to those that accompanied the development of nuclear power in the previous century. Today’s technologies seem too mundane, more likely to enhance than disrupt our lives, and most of them make no radically utopian claims about how their use will transform what it means to be human. It is not the content of our character but the content of our dinner plates that interests the creators of these technologies. Their goal, as Pentland describes it, is the creation of “sensible societies” based on the rational analysis of data. This is an appealing message, as the best-seller lists stuffed with books that promise to improve self-discipline attest.
Our technologies help us to tame our appetites for calories or overspending by acting as a kind of external conscience. Like Ulysses bound to the mast of his ship so that he could resist the Sirens’ song, these new programs and devices thwart our unruly desires. They do this not by bolstering self-control but by outsourcing it. Why meticulously count points on a Weight Watchers diet when you might, in the near future, simply program your “smart home” to lock down the refrigerator and pantry to prevent late-night snacking? Instead of talking to colleagues about areas of your work that could stand improvement (and perhaps risking feelings of embarrassment when you come up short), simply consult the personal Performance Coach app on your smartphone that has been analyzing your conversation patterns and “behavior metrics.” It will tell you gently that you drone on too long at staff meetings.
Persuasive technologies and Ambient Intelligence also promise a world where caretaking roles can be more efficiently outsourced. Ambient intelligence developers are already at work on a system of embedded floor sensors for use in nursing homes that can detect if someone falls out of bed or wanders away unexpectedly (surely a worthy project). But what about the nursing home resident who decides, on a whim, to take a walk? Her “smart” environment might read this as a dangerous predicament rather than an understandable human impulse and alert her minders. And such tracking is not only for the elderly. Ginger.io, another M.I.T. Media Lab spinoff, has created a smartphone app that notices when you have stayed at home for several days in a row or when your usual texting activity declines, and alerts you or your doctor that you are exhibiting early signs of depression.
It is an obvious but rarely mentioned fact that the attribution of intelligence in these scenarios is always to the technologies and not to the people using them. Unlike us, they are likely to get “smarter” with time. Algorithmic simulacra of human empathy are the future. Ultimately, the goal of creators of Ambient Intelligence and persuasive technologies and the Internet of Things is not merely to offer context-aware, adaptive, personalized responses in real time, but to divine future needs. As one contributor to The New Everyday noted, eventually these technologies will “anticipate your desires without conscious mediation.”
This is the ultimate efficiency—having one’s needs and desires foreseen and the vicissitudes of future possible experiences controlled. Like automobile engineers redesigning the twentieth century’s combustion engine to produce a more efficient hybrid for the twenty-first, the technologists spearheading this revolution promise a future free of the frustrating effects of earlier technologies. But where the technologists go, must the philosophers follow? Do we need to embrace a hybrid vision of autonomy to live in a morally responsible way with our technologies?
“Human beings are products of technology, just like technology is a product of human beings,” Verbeek writes. But the language of computer-aided efficiency, in which human beings are seen as products, is perhaps not the most useful for engaging with broader moral questions. The challenge for ethicists such as Verbeek is whether a society composed of “smart” cities like Songdo might also bring an increase in moral laziness and a decline in individual freedom. Freedom is a hollow promise in the absence of agency and choice.
The promoters of Ambient Intelligence and Persuasive Technologies want to identify, quantify, and track everything about ordinary experience in the hopes of improving people’s lives. But in outsourcing so many aspects of our daily lives to technology, we are making a moral choice. We are replacing human judgment with programmed algorithms that apply their own standards and norms to our behavior, usually with the goal of greater efficiency, productivity, and healthy living. It is something of a letdown to realize that while the Enlightenment’s goal was dethroning God, our post-Enlightenment technologists and philosophers are satisfied merely to get us to loosen our grips on that Big Mac.
In efficiently revealing how the human machine works, these technologies also undermine a crucial (albeit often maligned) human quality: self-deception. Self-deception is inefficient. It causes problems. It makes things messy—which is why our technologists would like us to replace it with the seemingly greater honesty of data that, once processed, promise to know us better than we know ourselves. But being human is a messy business; and exercising judgment and self-control, and learning the complicated social norms that signal acceptable behavior, are the very things that make us human. We should not need or want a Q sensor to do these things for us. The daily hypocrisies and compromises that make life bearable (if not always entirely honest) are precisely what Ambient Intelligence and persuasive technologies hope to eliminate. But in a broader sense, as the case of genetic testing has shown, the right not to know some things (like the right to forget foolish, youthful behavior that is now permanently archived on the Internet) is as crucially important (if not more so) in our age as the voracious pursuit of information and transparency.
Verbeek agrees with technologists that such self-deception is worth controlling or eliminating. In doing so he is too quick to set aside autonomy and freedom as human ideals—never achieved in full, perhaps, but nonetheless crucial for human flourishing. To be sure, Verbeek’s commendable book charts a middle ground between technophobia and techno-utopianism, and he makes a strong case for the need to view our technologies as active agents rather than neutral tools in our lives. But he never grapples fully with a broader question, one that should be the starting point for every engagement with a new technology: just because we can do something, should we?
Merely because something is possible, is it also desirable? And if it is possible, must we immediately accommodate ourselves to it? In The Forlorn Demon, Allen Tate noted, “We no longer ask, ‘Is it right?’ We ask: ‘Does it work?’” In our contemporary engagement with technology, we would do well to spend more time with the first question, even as we live ever more mediated lives relentlessly pursuing an answer to the second.
Christine Rosen is senior editor of The New Atlantis: A Journal of Technology & Society.
This article appeared in the August 2, 2012 issue of the magazine.