Companies in Silicon Valley are wonderfully fond of describing themselves as “mission-driven.” Palantir has raised nearly $2 billion while vowing that it is “working for the common good” and “doing what’s right.” At Theranos, Elizabeth Holmes promised “actionable information at the time it matters.” And for the past four years, Google and Facebook have occupied top spots on Business Insider’s annual list of the “Best Places to Work,” not merely because they dispense such perks as free laundry and subsidized dental care, but also because they claim to offer their workers the chance to be part of a meaningful project of global scope.
In that sense, mission statements are as much directed internally, at employees, as they are externally, at the public. The hours may be long and the competition fierce, the implicit message goes, but it’s worth it because the reward is spiritual as well as tangible. At some tech companies, faith in the mission is encouraged to the point that it resembles religious belief. Employees are invited to see themselves as proselytizers for the transformation of society, spreading the ideas of a company and its leaders around the world.
What happens, though, when the mission doesn’t accord with the behavior of a company or the values of its employees? For many, it has become all but impossible to believe that tech firms are working disinterestedly in service of some larger social good. Employees at Google have staged walkouts to protest sexual harassment and petitioned the company to halt its plans to develop a search engine that supports Chinese government censorship. Workers at both Google and Microsoft spoke out against providing cloud services to the Department of Defense, and Amazon employees posted an internal letter protesting Amazon Web Services’ provision of facial recognition technology to police.
These debates often turned on the companies’ mission statements. At Amazon, one organizer wrote, “Selling this system runs counter to Amazon’s stated values. We tout ourselves as a customer-centric company, and [chief executive Jeff] Bezos has directly spoken out against unethical government policies that target immigrants, like the Muslim ban. We cannot profit from a subset of powerful customers at the expense of our communities.”
Like many tech CEOs of the current era, Bezos counted on his mission statement to inspire his employees with the sense that they were transforming the world for the better. What the protests make apparent, however, is that it also instilled in them a sense of responsibility, even guilt. Bezos is learning what many tech executives have learned over the past year: that a mission statement commands loyalty only when employees believe it is being consistently upheld, and that when they see that it isn’t, they will use it as a reason to foment dissent, to strike, to leave.
Google’s ideals were famously articulated in the 2000s with the shorthand “Don’t be evil” (those three words appear in Google’s 2004 IPO filing), but its stated mission, first laid out by founders Sergey Brin and Larry Page in 1998, is far more concrete: to “organize the world’s information and make it universally accessible and useful.” In 2014, Page considered revising this formulation of the company’s goals. After all, Google was no longer just “organizing the world’s information”; it had expanded its operations into self-driving cars and experiments in biotechnology. But Page decided against a change, and that original mission statement has come to haunt the company.
“Employees at Google have been fairly vocal about making sure that the company upholds its pledge to not be evil,” Liz Fong-Jones, an engineer on Google’s Cloud Platform, told Fast Company last year. In 2016, she was among 2,800 tech workers at Google, Amazon, Microsoft, and other companies who signed a pledge not to work on software that assisted discriminatory policies like President Donald Trump’s Muslim ban. When news broke last year that Google was planning to create a censored search engine called Dragonfly for use in China, Google employees once again criticized the company for failing to live up to its values. How was a censored search engine, they asked, making information “universally accessible”?
“Our mission is to serve everyone,” Google’s CEO Sundar Pichai said at an event in November. His explanation was a subtle but profound redescription of Google’s stated goals. According to him, the point was not to make information “universally accessible,” but to make Google itself universally available, even if the information it provides is censored. What Pichai was doing wasn’t unusual. Corporate mission statements are cooked up in C-suites, and CEOs can—and often do—change how they are interpreted. Perhaps predictably, though, Pichai’s comments failed to placate his employees, and protests over Dragonfly continued. Google was finding out what it meant to possess a mission with real content, one that isn’t full of “bizspeak and bromides,” as a 2007 New York Times article described the vast majority of such statements. The more concrete the mission, the more readily it can be wielded as a tool to organize employees against leadership.
In its early years, Facebook articulated its goals in much the same way as Google. At an event for developers in 2007, Mark Zuckerberg spoke of wanting to “help users share more information,” a phrase that echoed Google’s aim of making information “universally accessible.” But whereas Google’s language remained the same over the years, Facebook’s evolved to become more communally oriented. In 2009, Facebook’s goal was to “make the world more open and connected,” but by 2017, the company had changed it to giving people “the power to build community and bring the world closer together.”
This might seem like an ideal mission to organize around. Yet it mattered how the company defined community. “Bringing the world closer together” often meant, in practice, fostering networks that increased conformity. In the rare instances when full-time Facebook employees have spoken out publicly, there have been repercussions. Last spring, Brian Acton, the cofounder of WhatsApp, resigned. A few months later, he gave an interview to Forbes detailing his philosophical differences with the company. That day, a Facebook executive wrote a public post calling him “low-class” for finding fault with a company that had “shield[ed] and accommodate[d]” him for years.
The working atmosphere at Facebook—where the product one labors on is also where one socializes with colleagues, friends, and family—is designed to enforce fealty to the mission and, like the product itself, to facilitate the goal of absolute togetherness. In January, CNBC ran an article about what it called Facebook’s “‘cult-like’ workplace.” “There’s a real culture of ‘Even if you are f---ing miserable, you need to act like you love this place,’” one former employee, who left in October, told the network. “It is not OK to act like this is not the best place to work.” Some of those who want to have critical discussions have purchased burner phones so that their comments won’t get back to their managers. Most of the dissent that has surfaced publicly has been channeled through anonymous leaks to journalists, not visible protest. And what collective organizing has happened at Facebook has occurred among contract workers, who don’t enjoy the generous benefits of full-time employees and aren’t subject to the same intra-office cultural pressures.
The Times article suggested that most mission statements aren’t “worth the paper they were written on”—so insubstantial and formulaic that they may as well be computer-generated. Recent events suggest why it’s in executives’ interest to keep them that way: Concrete mission statements help employees make concrete demands. Tech companies may be doing everything they can to convince their employees that their labor has a higher purpose, but their workers know better. These companies are still corporations, and their employees are still in the business of turning a profit for them.