In the Plex: How Google Thinks, Works, and Shapes Our Lives
By Steven Levy
(Simon & Schuster, 423 pp., $26)
The Googlization of Everything (And Why We Should Worry)
By Siva Vaidhyanathan
(University of California Press, 265 pp., $26.95)
I.
For cyber-optimists and cyber-pessimists alike, the advent of Google marks off two very distinct periods in Internet history. The optimists remember the age before Google as chaotic, inefficient, and disorganized. Most search engines at the time had poor ethics (some made money by misrepresenting ads as search results) and terrible algorithms (some could not even find their parent companies online). All of that changed when two Stanford graduate students invented an ingenious way to rank Web pages based on how many other pages link to them. Other innovations spurred by Google—especially its novel platform for selling highly targeted ads—have created a new “ecosystem” (the optimists’ favorite buzzword) for producing and disseminating information. Thanks to Google, publishers of all stripes—from novice bloggers in New Delhi to media mandarins in New York—could cash in on their online popularity.
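The ranking idea at the center of this story is simple enough to sketch. Below is a minimal, illustrative rendering of link-based ranking in Python; it is a sketch of the general technique, not Google’s actual PageRank code, and the damping factor, iteration count, and toy link graph are all assumptions chosen for the example.

```python
# A minimal sketch of link-based ranking: a page's score is fed by the
# scores of the pages that link to it. Illustrative only; the damping
# factor and the toy graph below are assumptions, not Google's setup.

def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {page: 1.0 / n for page in pages}
    for _ in range(iterations):
        new_rank = {page: (1.0 - damping) / n for page in pages}
        for page, outlinks in links.items():
            if not outlinks:
                # A page with no outgoing links shares its score with everyone.
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
            else:
                # Each outgoing link passes along an equal share of the score.
                for target in outlinks:
                    new_rank[target] += damping * rank[page] / len(outlinks)
        rank = new_rank
    return rank

# A toy Web of four pages (names invented for illustration).
toy_web = {
    "novice-blog": ["media-mandarin"],
    "media-mandarin": [],
    "aggregator": ["media-mandarin", "novice-blog"],
    "fan-site": ["media-mandarin"],
}
print(sorted(pagerank(toy_web).items(), key=lambda kv: -kv[1]))
```

Run on the toy graph, the most linked-to page floats to the top: that, in miniature, is the “democratic” claim Google would later make for its search results.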
Cyber-pessimists see things quite differently. They wax nostalgic for the early days of the Web, when discovery was random, and even fun. They complain that Google has destroyed the joy of serendipitous Web surfing, while its much-celebrated ecosystem is just a toxic wasteland of info-junk. Worse, it is constantly being polluted by a contingent of “content farms” that produce trivial tidbits of information in order to receive a hefty advertising paycheck from the Googleplex. The skeptics charge that the company treats information as a commodity, trivializing the written word and seeking to turn access to knowledge into a dubious profit-center. Worst of all, Google’s sprawling technology may have created a digital panopticon, making privacy obsolete.
Both camps like to stress that Google is a unique enterprise that stands apart from the rest of Silicon Valley. The optimists do this to convince the public that the company’s motives are benign. If only we could bring ourselves to trust Google, their logic goes, its bright young engineers would deliver us the revolutionary services that we could never expect from our governments. The pessimists make a more intriguing case: for them, the company is so new, sly, and fluid, and the threats that it poses to society are so invisible, insidious, and monumental, that regulators may not yet have the proper analytical models to understand its true market and cultural power. That our anachronistic laws may be incapable of treating such a complex entity should not deter us from thwarting its ambitions.
These are not mutually exclusive positions. History is rife with examples of how benign and humanistic ideals can yield rather insidious outcomes—especially when backed by unchecked power and messianic rhetoric. The real question, then, is whether there is anything truly exceptional about Google’s principles, goals, and methods that would help it avoid this fate.
IS GOOGLE’S EXCEPTIONALISM genuine? On the surface, the answer seems self-evident. The company’s collegial working environment, its idealistic belief that corporations can make money without dirtying their hands, its quixotic quest to organize all of the world’s information, its founders’ contempt for marketing and conventional advertising—everything about the company screams, “We are special!” What normal company warns investors—on the very day of its initial public offering!—that it is willing to “forgo some short-term gains” in order to do “good things for the world”?
As Google’s ambitions multiply, however, its exceptionalism can no longer be taken for granted. Two new books shed light on this issue. Steven Levy had unrivaled access to Google’s executives, and In the Plex is a colorful journalistic account of the company’s history. Levy’s basic premise is that Google is both special and crucial, and that the battle for its future is also a battle for the future of the Internet. As Levy puts it, “To understand this pioneering company and its people is to grasp our technological destiny.” What the German poet Friedrich Hebbel said of nineteenth-century Austria—that it was “a little world where the big one holds its tryouts”—also applies to Google. Siva Vaidhyanathan’s book is a far more intellectually ambitious project that sets out to document the company’s ecological footprint on the public sphere. Unlike Levy, Vaidhyanathan seeks to place Google’s meteoric rise and exceptionalism in the proper historical, cultural, and regulatory contexts, and suggests public alternatives to some of Google’s ambitious projects.
Even though both writers share the initial premise that, to quote Vaidhyanathan, Google is “nothing like anything we have seen before,” they provide different explanations of Google’s uniqueness. Levy opts for a “great man of history” approach and emphasizes the idealism and the quirkiness of its two founders. The obvious limitation of Levy’s method is that he pays very little attention to the broader intellectual context—the ongoing scholarly debates about the best approaches to information retrieval and the utility (and feasibility) of artificial intelligence—that must have shaped Google’s founders far more than the Montessori schooling system that so excites him.
Vaidhyanathan, while arguing that Google is “such a new phenomenon that old metaphors and precedents don’t fit the challenges the company presents to competitors and users,” posits that its power is mostly a function of recent developments in the information industry as well as of various market and public failures that occurred in the last few decades. Quoting the Marxist theorist David Harvey, Vaidhyanathan argues that the fall of communism in Eastern Europe and the resulting euphoria over “the end of history” and the triumph of neoliberalism have made the “notion of gentle, creative state involvement to guide processes toward the public good ... impossible to imagine, let alone propose.” Moreover, the growing penetration of market solutions into sectors that were traditionally managed by public institutions—from fighting wars to managing prisons and from schooling to health care—has made Google’s forays into digitizing books appear quite normal, set against the dismal state of public libraries and the continued sell-out of higher education to the highest corporate bidder. Thus Vaidhyanathan arrives at a rather odd and untenable conclusion: that Google is indeed exceptional—but its exceptionalism has little to do with Google.
Google’s two founders appear to firmly believe in their own exceptionalism. They are bold enough to think that the laws of sociology and organizational theory—for example, that most institutions, no matter how creative, are likely to end up in the “iron cage” of highly rationalized bureaucracy—do not apply to Google. This belief runs so deep that for a while they tried to run the company without middle managers—with disastrous results. Google’s embarrassing bouts of corporate autism—those increasingly frequent moments when the company is revealed to be out of touch with the outside world—stem precisely from this odd refusal to acknowledge its own normality. Time and again, its engineers fail to anticipate the loud public outcry over the privacy flaws in its products, not because they lack the technical knowledge to patch the related problems but because they have a hard time imagining an outside world where Google is seen as just another greedy corporation that might have incentives to behave unethically.
GOOGLE’S REFUSAL TO think of itself as a corporation is not irrational. Rooted in academia, it sees itself as a noble academic enterprise—spanning computer science, artificial intelligence, information science, and linguistics—that just happens to make money on the side. The presence on its campus of such academic luminaries as the Berkeley economist Hal Varian and Vint Cerf, one of the Internet’s many fathers, must further strengthen this impression. That the company’s stated mission—“to organize the world’s information and make it universally accessible and useful”—would not look out of place on the cover page of the Encyclopédie adds to the confusion over Google’s actual institutional status.
Of course, the conquest of omne scibile—“everything knowable”—has been a cornerstone of many utopian projects, from Edward Bellamy’s passionate call for accessible and resourceful libraries in Looking Backward to H.G. Wells’s idea of the “World Brain,” which he described as “a new world organ for the collection, indexing, summarizing, and release of knowledge.” But those were abstract prototypes. Google has actually delivered the infrastructure. Utopians dream, Googlers code. And unlike the Mundaneum—the Belgian initiative, in 1910, to gather and classify all the world’s knowledge to create a “permanent and complete representation of the entire world” (and house it in just one building in Brussels!)—Google is far more realistic, striking partnerships with university libraries over their existing collections. Whatever happens to its troubled book-scanning initiative, the company has already deprived technophobes and cultural conservatives of their key arguments against digitization: it seems that creating a global digital library will not take forever and will not cost a fortune. For this we must be grateful.
Larry Page and Sergey Brin fashion themselves as heirs to Diderot and D’Alembert rather than to Bill Gates and Rupert Murdoch. Conversely, they view any opposition to Google as the work of reactionary anti-Enlightenment forces that would like to keep the world’s knowledge to themselves. That the world at large refuses to recognize the purity of their intentions must be a real drain on their psyches. So they have no choice but to shed their idealism and opt for pragmatism—and they do not hide their disgust at this option. Having realized that there are entrenched social and business interests that Google is bound to disrupt, and that the public may not always see such disruptions as valuable (let alone inevitable), Google is aggressively beefing up its lobbying operations in Washington and launching a charm offensive in European capitals. Where ten years ago it would rush to take an uncompromising stance and fight to the last click, today it is likely to strike backdoor deals, often angering millions of its fans in the process.
For all its uniqueness, then, Google is also increasingly beset by the same boring problems that plague most other companies. Google does a terrible job of integrating the start-ups it acquires, greatly alienating their founders. With the rise of companies such as Facebook and Twitter, Google is no longer the most appealing Internet company to work for, nor the best-paying: hundreds of talented engineers have jumped ship to hotter start-ups. (Earlier this year, Google reportedly offered stock grants of $50 million and $100 million to prevent two of its engineers from departing for Twitter.)
Will Google’s exceptionalism prove to be short-lived? The company was shaped by the early Internet culture—with its emphasis on openness, mutual aid, and collaboration—and so far it has embodied the spirit of that culture remarkably well. For much of its existence, Google’s corporate interests—maximizing the number of online users and minimizing Internet censorship—roughly coincided with those of the general Internet public. Moreover, Google understood the Internet much better than either Apple or Microsoft, its two more mature competitors: theirs was a world of gadgets and software, not of blogs, links, and eyeballs. But set against the giants of the 1970s—the likes of IBM and Hewlett-Packard—both Apple and Microsoft seemed as exceptional as Google did in the early 2000s, and that did not last very long. Few think of Microsoft or Apple today as being in the business of “doing good things for the world” (even though Bill Gates is trying to make up for all those lost years with his foundation). Had Apple or Microsoft come up with Google’s current plan to scan all of the world’s books back in the 1970s, would we feel comfortable granting them such authority, knowing what we know about them today?
II.
GOOGLE HOPES TO resist its inevitable “normalization” in two ways. Its “dual class” ownership structure resembles that of The New York Times, allowing the founders to control the company’s future with little concern for what others might think. As the company promised during its IPO in 2004, outside investors “will have little ability to influence its strategic decisions through their voting rights.” Thus, at least in theory, Google might be able to resist short-term pressure from the shareholders or the industry. But this leaves open the question of how Google should proceed on issues that are not as clear-cut as Internet censorship. Whatever one thinks of the Sulzberger family that controls the Times, the journalistic tradition to which they belong has produced certain norms—say, that newspapers carry certain social responsibilities—which guide their efforts. Such norms do not yet exist in the online world, and Google is not exactly spearheading a campaign to develop them.
When Eric Schmidt, Google’s ex-CEO and now its executive chairman, proclaims, “I want to be careful not to criticize the consumer for doing things that are idiotic.... We love our consumers even if I don’t like what they’re doing,” it is quite obvious that Google’s corporate mentality, for all the enlightened aspirations of the company’s founders, does not differ significantly from that of Walmart or Procter & Gamble. Google’s ownership structure would be a blessing if the company knew what it wanted to protect. So far, all it wants to protect is its ability to carry out wild experiments. But that alone says nothing about its social impact. The defense industry is extremely innovative, too—but this does not automatically translate into civic virtue.
“Don’t Be Evil”: Google’s informal motto is another bulwark against “normalization.” The motto is one of the most controversial—even mythical—things about Google. Both Levy and Vaidhyanathan discuss it in great detail, but neither says much about its actual origins, which are crucial to understanding the motto’s perverse influence over the company. Initially, it signified something much narrower than what it has come to symbolize in the popular imagination. Back in its early days, Google was one of the few search engines that did not mix ads with search results. Instead, links to ads were clearly marked as such in a dedicated section on the site. “Don’t Be Evil” was Google’s informal commitment never to insert ads into its search results. Paul Buchheit, one of the two Google employees credited with coining the phrase, once described it as “a bit of a jab at a lot of the other companies, especially our competitors, who, at the time, in our opinion, were kind of exploiting the users.” The founders’ letter that accompanied Google’s IPO also defined this principle rather narrowly: “We will live up to our ‘don’t be evil’ principle by keeping user trust and not accepting payment for search results.”
That such a narrowly conceived principle—initially thought up in the context of selling ads—was allowed to become one of the key principles guiding Google’s operations (including its dealings with authoritarian states and its stance on privacy) says a lot about the rather frivolous attitude with which Google handles its own ethics. Many analysts—including Vaidhyanathan—believe that the motto, having helped the company build a favorable image, has been just a clever publicity trick. In the long term, “Don’t Be Evil” is bound to prove a major handicap, preventing Google from developing robust ethical frameworks for dealing with the never-ending problems facing the company. Google has unknowingly become a prisoner of its own motto and is at great pains to distance itself from it. There were some encouraging signs that the company was willing to drop it—in 2008, Marissa Mayer, one of Google’s most senior executives, said that the motto “is good p.r. but really it’s empty”—but no substantial changes followed.
Google’s reductionist talk about evil only cheapens the global discourse about the politics of technology, making the two extremely smart doctoral students who founded the company sound like confused first-graders who overdosed on Kant. Such talk is about as helpful to understanding the complexity of the online world as “axis of evil” talk was to understanding the world in the era of George W. Bush. Levy’s detailed account of emergency meetings among Google’s executives unwittingly confirms the similarities. Try replacing “Brin” with “Bush” and “Google” with “America” in the following sentence: “But Brin was adamant: Google was under attack by the forces of evil, and if his fellow executives did not see things his way, they were supporting evil.”
Ultimately, “Don’t Be Evil” makes as much sense as a corporate motto as it does as a motto of American foreign policy: it provides no answers to any of the important questions while giving those who embrace it an illusion of rectitude. Even Levy, for all his hagiographical celebration of Google’s prowess, acknowledges that the company has a “blind spot regarding the consequences” of its actions. That blind spot is entirely self-inflicted. It is very nice that Google employs someone whose job title is “in-house philosopher,” but in the absence of any real desire to practice philosophy such a position seems superfluous and vainglorious.
EVERY TIME SOMEONE questions the adequacy of its search results, Google likes to claim that it is simply an algorithms-powered neutral intermediary that stands between a given user and the collective mind of the Internet. On its corporate website, Google compares the presentation of its search results to democratic elections, with the most-linked sites emerging on top. If the top results lead to sites that are politically incorrect or racist or homophobic, the fault is not Google’s but the Internet’s. In this way Google fashions itself as a contentless messenger that works in everyone’s best interest—and with minimal human intervention. By this logic, Google is as responsible for the composition of its search results as a company that prints voting ballots or installs voting booths is responsible for the outcome of an election.
The concept of “algorithmic neutrality” that underpins Google’s self-defense does not stand up to serious scrutiny. What if its hypothetical elections are conducted using electronic voting machines that run on hardware and software that no third party has ever tested for bugs? That its voting machines are managed by “software” or “algorithms” is a very poor excuse not to examine them. After all, the algorithms are written by humans, and they are likely to contain biases and errors. This is not to argue that Google must be required by law to release its algorithms—after all, the dynamics of searching for information in a competitive market environment are not the same as voting in democratic elections. (We can switch between search engines, but we do not have that luxury with elections.) Invoking the language of democracy to deflect public attention from its inner workings is just a ruse. Google’s reluctance to acknowledge the highly political nature of its algorithms exposes the limitations of its technocratic spirit.
Google’s spiritual deferral to “algorithmic neutrality” betrays the company’s growing unease with being the world’s most important information gatekeeper. Its founders prefer to treat technology as an autonomous and fully objective force rather than spending sleepless nights worrying about inherent biases in how their systems—systems that have grown so complex that no Google engineer fully understands them—operate. Levy notices some of this—“Brin and Page both believed that if Google’s algorithms determined what results were best ... who were they to mess with it?”—but he is undisturbed by it. Extolling the supposed objectivity and autonomy of their algorithms allows Google’s founders to carry on with their highly political work without noticing any of the micropolitics involved.
The indispensability of such apologetics to their daily operations also explains their long-running fascination with Ray Kurzweil (who, in Levy’s Bay Area-centric universe, is promoted to “philosopher”) and his cultish ideology of Singularity—the belief that computers will one day become smarter than humans and take over. (Google is one of the major donors to Kurzweil’s Singularity University.) Kurzweil is Silicon Valley’s favorite Dr. Pangloss, who believes that “the progress in the 21st century will be about 1,000 times greater than that in the 20th century,” while saying nothing about whether such progress would also be accompanied by one thousand times more genocidal horror, ethnic hatred, and needless spilling of blood. And how can the world fail to progress if humans, according to Kurzweil, “are going to enhance our own intelligence by getting closer and closer to machine intelligence”? Kurzweil’s fairy tales must sound really reassuring to Google’s engineers, who like to think of themselves as playing the crucial role in spreading this gospel of progress. Singularity is a very convenient ideology for Google, because it allows the company to bracket technology from the rest of human experience, thus avoiding any corporate soul-searching that might eventually derail its unstoppable quest for innovation and control.
Yet such soul-searching is long overdue. The problem with Google is not that its algorithms are systematically biased against particular social groups, or that they are not transparent: forcing Google to disclose its algorithms would be a pandering populist measure that would probably hurt innovation in search. In 99 percent of all cases, in fact, everything on Google runs smoothly, with most of us never noticing the underlying algorithmic infrastructure that makes it possible. But in the remaining 1 percent of cases, Google acts as if it were part of some nightmarish temple of Kafkaesque bureaucracy, bent on crushing the Little Man at the behest of the Computer. Worst of all, Google refuses to acknowledge this Kafkaesque dimension of its work, denying the victims of its “algorithmic justice” a way to rectify the situation or even to complain about it.
CONSIDER GOOGLE’S FEATURE known as Autocomplete. When you type “Barack Obama” in the search box, right before you hit “Enter,” Google might suggest related searches based on similar queries from previous users. Thus, you may get prompts to search for “Barack Obama is a Muslim,” “Barack Obama is the Antichrist,” and so forth. This does not teach us anything about Obama, of course—but imagine that you are searching for someone a little more obscure, someone who once got drunk at a college party and featured in an extremely embarrassing photo session that eventually made its way online, prompting dozens of fellow students to go searching for it. Imagine the reaction of prospective employers who, on Googling the name of a recent job applicant, are prompted to search for “John Smith is drunk at a party.”
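The mechanics behind such prompts are easy to sketch. Here is a minimal, frequency-based version in Python, with an invented query log; it assumes nothing about Google’s actual, undisclosed implementation.

```python
# A minimal sketch of frequency-based autocomplete: suggest past queries
# that share the current prefix, most frequently searched first. The
# query log below is invented; Google's real system is far more elaborate.
from collections import Counter

class Autocomplete:
    def __init__(self):
        self.queries = Counter()

    def record(self, query):
        """Log one past user search."""
        self.queries[query.lower()] += 1

    def suggest(self, prefix, limit=3):
        """Return the most frequent past queries starting with this prefix."""
        prefix = prefix.lower()
        matches = sorted(
            (q for q in self.queries if q.startswith(prefix)),
            key=lambda q: -self.queries[q],
        )
        return matches[:limit]

ac = Autocomplete()
for q in ["john smith", "john smith is drunk at a party",
          "john smith is drunk at a party", "john smith resume"]:
    ac.record(q)
print(ac.suggest("john smith"))
# The embarrassing query, searched most often, surfaces first.
```

Note that sheer repetition is all it takes to push a suggestion to the top, which is also why such a system can be gamed, a point taken up below.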
This is not a hypothetical situation. Google has already lost several lawsuits in Europe over Autocomplete. In one recent case, a man in Italy complained that Google suggested adding “conman” and “fraud” when people were searching for his name. (The court ordered Google to drop the suggestions.) Google’s reaction to such complaints has been very much in line with its blind faith in “algorithmic justice.” Its spokesperson declared that “we believe that Google should not be held liable for terms that appear in Autocomplete as these are predicted by computer algorithms based on searches from previous users, not by Google itself.” But is it obvious that what is known to a dozen fellow students on a college campus should also be known to the rest of the universe, and forever?
Some modern theories of privacy—for example, Helen Nissenbaum’s account of privacy as “contextual integrity”—would answer that question in the negative. Whether the offensive judgments are made by humans or algorithms is beside the point. Besides, nothing prevents users from abusing the system: if a few dozen bigots deliberately search for “John Smith is a faggot” or “Kate Smith is a slut” so as to damage John’s or Kate’s reputations, there is always a possibility that they might succeed, regardless of the veracity of those accusations. Marketers are already hiring people to conduct searches with the intention of tricking Google’s Autocomplete into producing favorable suggestions.
Google, of course, promises to penalize the abusers, but it does not detect all of them soon enough. Nor can it: no algorithm will ever know how to respect one’s privacy in every possible social situation. The complexity of human interactions cannot be reduced to a set of logical rules that Google—or any artificial intelligence system—could follow. (It was Terry Winograd, Larry Page’s dissertation adviser at Stanford, who made this point in the mid-1980s in an influential critique of artificial intelligence.) Google’s excuse has been that its system works quite smoothly most of the time, and that its few errors are marginal and can be disregarded in the name of innovation—an explanation that is unlikely to satisfy anyone who has been mistreated by the company’s algorithms. This is the problem with all bureaucracies: they work flawlessly until they don’t. And is innovation really an adequate extenuation of ethically dubious structures and consequences?
So far Google has not made much progress in acknowledging the political nature of its services, accepting that its algorithms can be as flawed and biased as the coders who write them, or finding a way for users to report what Google got wrong without having to sue the giant. Occasionally, the individuals hurt by Google bring their grievances to the media (a New York Times story about a Brooklyn entrepreneur—who abused his customers so that they would complain about his company on blogs and Internet forums and thus boost his search ranking—did help to change the algorithms); but this is not a sustainable way to resolve users’ problems with Google. Yes, there will always be concerns about freedom of expression and the need to balance one person’s quest for privacy against the speech rights of others; and Google would surely need to dedicate a sizable share of its resources to monitoring such complaints. But simply refusing to open the Pandora’s box of information injustice that it has been causing—as Google has done to date—is no longer a tenable strategy for the company. It cannot get away with redirecting blame to its own algorithms forever.
III.
THE INADEQUACY OF Google’s “Don’t Be Evil” motto, and the company’s reluctance to grapple with the controversial nature of its algorithms, attests perhaps to one of the defining features of the Google ethos: the firm’s absolute and unflinching belief in the primacy of technocracy over politics, of scientific consensus over ideological contention. In some sense, Google—under the spell of compulsive computation—is acting out Leibniz’s great dream of creating a universal language that would turn philosophers into accountants who could resolve all their differences through number-crunching. Silicon Valley still finds inspiration in Leibniz’s famous dictum that “the only way to rectify our reasonings is to make them as tangible as those of the Mathematicians, so that we can find our error at a glance, and when there are disputes among persons, we can simply say: Let us calculate, without further ado, in order to see who is right.”
Google has a similar philosophy: all social and political conflicts can be reduced to quantitative models, and thus yield easy answers that can be computed. When a high-profile Google employee proclaims that “technology is a part of every challenge in the world, and a part of every solution,” it is hard to avoid the impression that Google’s technocentric view of the world blinds it to the primacy—and the permanence—of the non-technological in human life. Its engineers believe that there is such a thing as the ultimate truth and that it is accessible only through spreadsheets and data logs. As Douglas Edwards, Google’s first director of consumer marketing and brand management, put it on his blog: “for ... Larry and Sergey, truth was often self-evident and unassailable. The inability of others to recognize truth did not make it any less absolute ... Truth is, after all, a binary function.” Douglas Bowman, a former senior designer at the firm, is far more critical of such a mentality, asserting that
When a company is filled with engineers, it turns to engineering to solve problems. Reduce each decision to a simple logic problem. Remove all subjectivity and just look at the data. Data in your favor? OK, launch it. Data shows negative effects? Back to the drawing board. And that data eventually becomes a crutch for every decision, paralyzing the company and preventing it from making any daring design decisions. Yes, it’s true that a team at Google couldn’t decide between two blues, so they’re testing 41 shades between each blue to see which one performs better.
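What “performs better” means in such a test is typically nothing more than a difference in click-through rates, judged with a routine piece of statistics. Here is a minimal sketch of that arithmetic in Python, using invented impression and click counts rather than any figures from Google.

```python
# A minimal sketch of the arithmetic behind an A/B color test: compare the
# click-through rates of two variants with a two-proportion z-test. The
# impression and click counts below are invented for illustration.
from math import sqrt

def z_score(clicks_a, views_a, clicks_b, views_b):
    """Standardized difference between two observed click-through rates."""
    rate_a = clicks_a / views_a
    rate_b = clicks_b / views_b
    pooled = (clicks_a + clicks_b) / (views_a + views_b)
    stderr = sqrt(pooled * (1 - pooled) * (1 / views_a + 1 / views_b))
    return (rate_b - rate_a) / stderr

# Shade A: 10,000 impressions, 230 clicks. Shade B: 10,000 impressions, 265 clicks.
z = z_score(230, 10_000, 265, 10_000)
print(f"z = {z:.2f}")  # beyond roughly 1.96, the difference is significant at the 5% level
```

Multiply that comparison across forty-one shades and you have, more or less, the procedure Bowman describes.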
There are many walks of life—and science and engineering are surely among them—where a data fetish is entirely appropriate. It may also be acceptable in the business realm: while it may stifle the creativity of the company’s designers, it is not something that should trouble society at large. But the worship of such hyper-rationalism is completely inappropriate in much of the political and social realms, where truth is anything but a binary function. As Jürgen Habermas prophetically warned in 1968, in an important essay called “Technology and Science as ‘Ideology,’” this is precisely the kind of “technocratic consciousness” that seeks to evict ethics from the constitution and regulation of public life. According to Habermas, “technocratic consciousness reflects not the sundering of an ethical situation but the repression of ‘ethics’ as such as a category of life,” leading to the disappearance of any distinction between the practical and the technical.
In Google’s case, the company starts in the purely scientific domain—its methodology is fully appropriate for the realms of information theory and computer science—but it ends in the decidedly social and even moral domain of privacy, knowledge-creation, and information dissemination, where such a purely scientific outlook is inappropriate, or at least insufficient. The company has been blind to this distinction, and so have been most of the commentators who look upon Google as a blueprint for reshaping social and political institutions.
But what kind of a blueprint could such a decidedly apolitical enterprise really be? Steven Levy offers some clues by providing a brief and typically uncritical look at the interaction between Google and the American government, celebrating the fact that some junior Googlers want “the U.S. government to be more like Google.” Levy is old enough to know that the U.S. government once was like Google: surely there is not much to celebrate in Robert McNamara’s disastrous penchant for systems theory and statistical modeling, and in his (and his “whiz kids’”) futile attempts to reduce complex matters of global politics to purely quantitative issues with binary answers. The strong distaste that Googlers show for the inherent messiness of conventional politics suggests that they, too, see this messiness as a bug in the system, not a feature of it. When Eric Schmidt complains that “Washington is an incumbent protection machine [where] ... the laws are written by lobbyists,” or when an ex-Googler who left the company to work for the federal government compares her experience there to that of a “vegetarian trapped inside the sausage factory,” we can hear unmistakable echoes of the technocratic impulse to purge politics of messy and fundamental disagreements over values, to eliminate contention under the banner of consensus-seeking and the right answer, and eventually to reduce all political conflicts to sterile arguments about management and administration. The only openly political question left in Google’s model of politics is which databases to use in storing data: everything else follows naturally.
WHAT LEVY DOES not seem to notice is that the federal government under Obama has, indeed, begun to look very much like Google, and this may be one of the less visible ways in which the company—or rather, the idea of the company—has changed the world. When the then-candidate Obama visited Google’s headquarters in 2007, he espoused the same belief in the workings of facts, truth, and reason that was evident in the mentality of the company’s founders. Speaking of his plans to counter opposition to his health care plan, Obama said:
Every time we hit a glitch where somebody says, “Well, no, no, no, we can’t lower drug prices because of, yeah, the R&D cost that drug companies need.” Well, we’ll present data and facts that make it more difficult for people to carry the water of the special interest because it’s public. And if they start running Harry and Louise ads, I’ll make my own ads or I’ll send out something on YouTube ... I’ll let them know what the facts are.
And then, expressing his admiration for Google, he added:
I am a big believer in reason, in facts, in evidence, in science, in feedback, everything that allows you to do what you do, that’s what we should be doing in our government ... I want you helping us make policy based on facts, based on reason.
It is anyone’s guess whether Obama still believes this two and a half years into his presidency. The public has gotten somewhat tired of watching him test forty-one different shades of blue. Facts, data, and Internet prowess alone did not get him very far; and it is worth pondering how much more successful he could have been had he not fallen under the sway of the technocratic temper and paid more attention to the ambiguities of the political process instead. Yes, there is much to be said for the primacy of science, and it is better to base policy on facts than on distorted history or myth-making. But in politics, unlike in science, ideology still rules supreme: conservatives and liberals so often disagree about the same set of data because they differ over first principles at the most basic level of political belief. If mere exposure to facts and data could iron out such ideological disagreements, politics would already be obsolete. Of course, one cannot blame Google for the lessons that politicians such as Obama draw from its success. Google’s ethos is now an important element of our culture. But one can certainly blame the people at Google for always drinking their own Kool-Aid, and therefore failing to recognize the irreducibility of concepts such as privacy to purely quantitative components. As they expand into new industries and regions, their ignorance of the political and social realms could cost us dearly.
IV.
IF GOOGLE’S BOOK-SCANNING efforts are some kind of a neo-Enlightenment undertaking, its efforts at spreading connectivity, building Internet infrastructure, and promoting geek culture in the developing world are a logical extension of the American-led modernization project—aimed at bringing underdeveloped societies to Western standards of living, often by touting fancy technological fixes such as contraceptives (to stabilize population growth) and high-yield crops (to solve the undernourishment problem)—that began in the 1960s. Those well-meaning efforts were not without their problems: as they clashed with local customs and mores, they produced quite insidious unintended consequences, disrupting social relations, promoting coercion, and leading to greater inequality. None of this was obvious to their early promoters. Today, as the Internet has emerged as the ultimate technological fix, its implications for modernization attract considerable attention—and Google is far ahead of most development agencies in wielding its power to change the world.
Google’s caveat to the classical modernization theory—stemming from Walt Rostow’s belief in take-off points, whereby countries, once they reach certain levels in their economic development, tend to move in the same direction—is intriguing. Google naively believes that we have always been modern: we just needed the search engines to tell us that. Eric Schmidt frequently tells audiences that all of us—regardless of nationality or religion—essentially search for the same things online. Britney Spears, for example, is a universal fascination. From this perspective, all that Google is trying to do is to enable citizens of the world to act out their inherent middle-class aspirations. If all of them end up watching silly songs about Fridays on YouTube, it’s not Google’s fault. People are shallow, aren’t they?
Here again, we see Google deferring to the objectivity of the algorithms and the wisdom (or more likely, the dumbness) of the crowds. The possibility that Google managers see only what its algorithms show them never occurs to Schmidt. In this, Google is very much like the good old American modernizers of yesteryear: in the 1960s they were handing out birth-control pills to extremely receptive rural populations in India, fully convinced that this signified the ideological success of Western agendas of population control. The locals, however, couldn’t have cared less. (On one occasion, the pills were even used to build a sculpture.) And while the early generation of modernizers hoped that rapid economic development would help to steer poor countries from the communist path, Google believes that development—underpinned, of course, by the growth of online communications—will help to counter the growth of violent extremism. (Countering extremism is one of the goals of Google’s new think tank “Google Ideas”; in June, the company hosted an international conference, pompously titled Summit Against Violent Extremism, on this subject in Ireland.)
Everyone’s attention is fixed on China and the Middle East, but it is in Africa that Google’s technocratic assumptions about modernization—and information technology’s role in it—need to be scrutinized most closely. Here is just a sample of Google’s African activities: it offers a free messaging service in Kenya that allows text messages to be sent from Gmail, essentially guaranteeing vast local adoption of its e-mail system; it is helping to digitize the Nelson Mandela archives in South Africa, where it also runs a start-up incubator; it administers an online trading platform in Uganda that is optimized for use on mobile phones; it is mapping Southern Sudan; it is holding workshops popularizing Google and the Internet in African universities (one of the company’s explicit objectives on the continent is to make the Internet “more relevant to Africa”); and it is testing wireless networks in select African cities. Google’s long-term plan is to get Africans online, claim the cyber-terrain, and reap the benefits at some point in the future. Few of these activities bring any profit yet, and none of them sound outright controversial—but then neither did birth-control pills or high-yield crops.
Had the World Bank or any other international institution been involved in similar projects—virtually all of them contributing to development in one way or another—this would have attracted at least a modicum of critical attention from the West and perhaps greater opposition in Africa. In Google’s case, barely anyone says a word—not least because of its reputation as an exceptional company that is in it for the values and the ideas, not for the bucks. To its credit, Google does not conceal its ambitions and explicitly invokes the language of development to explain its actions. In Liberia, for example, it asserts that “we aim to turn technology opportunity into development progress.”
But the lack of critical attention by outsiders does not make the underlying development projects any less political or controversial. If the last three decades of studying economic development have taught us anything, it is that good intentions are not enough. To assume that Google, with its New Age hubris, its technocratic mentality, its augmented sense of its own righteousness, its distaste for politics or any kind of ethical soul-searching, would be able to avoid the development traps that have plagued every other big player in Africa is wishful thinking. From the perspective of local governments, to allow a for-profit American company that combines the simplistic worldview of George W. Bush with the cold rationality of Barack Obama to become a key intermediary in their information economies—the one sector that promises to lift the continent out of poverty—may be plain suicidal. The disastrous track record of Google.org—the company’s nonprofit initiative, which was as ambitious as it was flawed—does not exactly inspire confidence in the ability of its parent to deliver on development. Disguising development work as commerce makes little difference.
V.
ERIC SCHMIDT ONCE quipped that “the Internet is the first thing that humanity has built that humanity doesn’t understand.” To a large extent, this is also true of Google. Even its founders—and Page and Brin surely have the best understanding of the company’s inner workings—must be profoundly confused about Google’s impact on privacy, scholarship, communication, and power. It is for this reason that writing about Google presents an almost insurmountable challenge. To understand the company and its impact, one needs to have a handle on computer science, many branches of philosophy (from epistemology to ethics), information science, cyberlaw, media studies, the sociology of knowledge, public policy, economics, and even complexity theory. The ultimate analysis of Google and its impact on the world still remains to be written.
These new books complement each other nicely. Levy’s volume provides a sharp factual description of Google the company. (Sometimes too factual: it includes an account of the cable arrangements in Google’s conference rooms.) There are many fascinating anecdotes in the book—did you know that Larry Page once considered accepting goats as a legitimate form of payment from those seeking to buy Google ads in Uzbekistan?—but the book has little to say about the intellectual origins of Google and its ethos. Levy drops simplistic phrases such as “the hacker philosophy of shared knowledge” as if their meaning were self-evident and required no cultural context (the kind of context that he himself provided in his earlier book Hackers). He downplays Google’s connection—both real and perceived—to the Bay Area counterculture of the 1960s and the 1970s, and to all the earlier utopian attempts to organize the world’s knowledge. He over-dramatizes the heroism of Google’s founders. (“Doing good was Larry Page’s plan from the very beginning.”) In a recent interview with Salon, Levy said that the “narrative backbone of the book deals with idealism and morality and how you keep it.” That is a very peculiar interpretation of the book that he has actually written, since its rather perfunctory discussion of Google’s moral quandaries—which, in Levy’s telling, essentially boils down to something like “Google’s evil-barometer occasionally malfunctions but is still quite useful”—is one of its weakest points.
Vaidhyanathan’s book is different in tone and in purpose. Its main goal is to highlight the importance of not letting Google become the primary and exclusive guardian of our cultural heritage; and in this the book mostly succeeds. It is a pity that Vaidhyanathan never gets to the heart of the policy debate on some emerging controversies facing Google—“search neutrality” and the “right to be forgotten”—and never suggests ways in which Google’s power could actually be curtailed. His proposal to set up a Human Knowledge Project—a publicly supported initiative that would take a long-term perspective on the digitization and preservation of knowledge—is perfectly laudable, but it does not really address the numerous privacy and surveillance problems that he himself raises.
Calling for more “public governance of the Internet” is also fine, but it is much too abstract: the book provides very few details as to how such governance would proceed. Vaidhyanathan does produce a handful of interesting concepts to describe what the company does, but he shies away from developing them. The notion of Google’s “infrastructural imperialism” is intriguing, but Vaidhyanathan does not really explain what is so “imperialistic” about it. The fact that the Chinese government does not like Google’s services is not necessarily a reason to view Google as the digital equivalent of the East India Company. (Vaidhyanathan does not adequately discuss Google’s ventures outside America and China.)
But Vaidhyanathan should be applauded for forcing the public to pay closer attention to the gaping holes in Google’s “technocratic consciousness,” to the highly political nature of its algorithms, and to the ways in which the company’s rise needs to be viewed against the dwindling influence (and budgets) of public libraries. While Vaidhyanathan would probably agree with the “Before Google/After Google” division of Internet history (B.G./A.G.?), he should also be given credit for trying to explain the historical forces—such as the deregulation of the Reagan and Thatcher eras—that have so profoundly shaped today’s Internet. After all, there was history before Google. There was history even before the Internet. It is only by placing both Google and the Internet in their proper historical and intellectual context that we will be able to understand their evolution. We need Internet books with greater intellectual ambitions. A serious debate about the social implications of modern technology cannot sustain itself on anecdotes about Uzbek goats.
Evgeny Morozov is a visiting scholar at Stanford University. He is the author, most recently, of The Net Delusion: The Dark Side of Internet Freedom (PublicAffairs). This article originally ran in the August 4, 2011, issue of the magazine.