Twelve days before Donald Trump won the biggest upset in presidential election history, his data team was feeling confident. “We have three major voter suppression operations under way,” a senior campaign official told Bloomberg, referring to Democratic voters the campaign hoped would stay home on Election Day. The campaign might have been down in the polls, but it had an ace up its sleeve: data supplied by Cambridge Analytica, a London-based firm funded by the ultra-wealthy Mercer family. When Trump won, the campaign’s digital operation got much of the credit. Two weeks ago, Trump named its director, Brad Parscale, the campaign manager for his 2020 reelection bid.
But on Friday evening, Facebook made a surprising announcement: It was suspending Cambridge Analytica for improperly obtaining—and failing to destroy—data that it had acquired in 2014. A day later, The New York Times and the London Observer published investigations revealing that the company had “harvested data” from 50 million Facebook profiles, most of which belonged to American voters.
Facebook has strenuously denied that this constituted a “data breach.” Instead it has placed the blame squarely on Cambridge Analytica for misrepresenting its intentions and violating its terms of use. But while the harvesting of data isn’t a “breach” in the hacking sense, that doesn’t mean Facebook was an innocent victim. Yes, Cambridge Analytica is a bad actor, but it was also using Facebook the way that Facebook is intended to be used: to sell people products, to guide their choices—in short, to manipulate them. Facebook’s response to the data harvesting reveals a company that not only has lax privacy protections and an existential aversion to transparency, but also one that is publicly in denial about what it is selling.
The scandal revealed by the Times and the Observer dates back to 2014, when Cambridge Analytica hired Aleksandr Kogan, a Russian-American academic, to harvest Facebook data and build “psychographic” profiles that would be used in persuasion campaigns. Kogan built an app and enlisted Facebook users to take a survey, for which they were paid a small amount of money.
At the time, Facebook’s terms of service gave app developers access to a wealth of data: developers could pull information not only from the profiles of people using their apps, but from the profiles of all of those users’ friends as well. So Kogan harvested the data of the 270,000 people who took his survey, along with the data of their friends. As first reported by The Intercept, this gave Cambridge Analytica data from 50 million Facebook users—30 million of whom became the subjects of psychographic profiles.
Facebook ultimately changed this policy, but for intellectual property reasons, not privacy ones. At the time Kogan collected his data, the practice was widespread—so widespread, in fact, that President Obama’s much-vaunted 2012 data team openly gushed about it.
Kogan went wrong when he handed the data to Cambridge Analytica, which continued to use it for political persuasion after Facebook’s terms of service had changed. According to the Times, Kogan insisted both in the fine print of the survey and to Facebook that his interest in the data was purely academic. Instead, the data was used to aid Cambridge Analytica’s work on behalf of the Ted Cruz campaign and, later, the Trump campaign in 2016. Given that the 2016 election was decided by fewer than 80,000 votes in a handful of states, it is possible—though unlikely and far from proven—that the company’s work swung the election to Donald Trump.
At this point, Cambridge Analytica’s woes are extensive. Because it employed a number of foreign nationals—mainly Canadians and Brits—it may have violated federal election law. Because of its relationship to the Mercers, who provided millions in funding, it may have violated federal campaign finance law as well. In December, moreover, special counsel Robert Mueller requested that the company turn over internal documents related to the 2016 election. Representatives from the company also may have lied to investigators from the British and American governments about its activities during the 2016 election.
Facebook’s own response to the scandal—to minimize its importance—has only made things worse. Facebook admits it knew as far back as 2015 that Cambridge Analytica was using the data, and chose to do little more than slap the company on the wrist. More important, Facebook continued to work closely with the Trump campaign to help it target voters and refine its messaging, despite the fact that the campaign had hired a data firm Facebook knew was improperly using its data. Facebook has not explained why it did nothing until faced with the reports from the Times and the Observer.
Facebook says that because Kogan obtained the data legitimately, the episode does not qualify as a data “breach”—which means, in the company’s view, that it had no obligation to let users know their data may have been used in a massive social media campaign on behalf of a presidential candidate. Cambridge Analytica has echoed this argument, pointing to the 2012 Obama campaign, which harvested data in a similar fashion. This is a bit rich, however, since the million or so people who filled out the survey used by Obama’s data team knew that the information they provided was going to the Obama campaign. The 270,000 respondents to Kogan’s survey had no idea they were providing information to the Trump campaign, and they probably didn’t suspect that all their friends were being roped in as well.
As others have pointed out, this defense gets right to the heart of Facebook’s problem. Cambridge Analytica was basically using Facebook as it was designed to be used: as an enormous and enormously valuable trove of data about people. Facebook’s apparent indifference over the past two years to Cambridge Analytica’s malfeasance is an acknowledgment of this basic reality. The occasional bad actor—and there have been several—is the price of the company’s business model, which is to sell its users’ data.
That business model may be in trouble. Facebook’s value has soared over the last several years because it and Google hold an effective duopoly on digital advertising, thanks to the staggering amount of data they collect about their users. (Facebook’s data has historically been more valuable because it can be tied to specific individuals, though Google has backed away from its promises to keep its own data aggregated and anonymous.) Facebook’s stock dove on Monday in the wake of the Cambridge Analytica reports, as states and members of Congress began demanding explanations—and investigations. That could mean more regulation, which could endanger the company’s most valuable asset: its users’ data.
Facebook really only has itself to blame for this mess. Even with tweaks, the company has consistently privileged data collection and monetization over user privacy. This has allowed it to become one of the most powerful and valuable corporations on the planet. But it has also made it the perfect platform for shady influence campaigns. Of course, the biggest problem with this scandal isn’t that Cambridge Analytica is shady—it’s that Facebook is.