Three months ago, Mark Zuckerberg was sitting before Congress, promising to change Facebook’s ways. “I started Facebook, I run it, and I’m responsible for what happens here,” he said. “It’s clear now that we didn’t do enough to prevent these tools from being used for harm. That goes for fake news, foreign interference in elections, and hate speech, as well as developers and data privacy.” Zuckerberg promised to take privacy concerns more seriously, assured members of Congress that he was open to regulation (so long as it was the “right regulation”), and pledged to do more to stop bad actors from abusing his platform.
Zuckerberg insisted that this was the beginning of a new chapter in the company’s history. The company would return to its roots as a “village square,” where people shared personal updates and photographs of their loved ones. Shortly after he left Washington, D.C., Facebook began airing national television advertisements in which it apologized for having lost its way.
However successful Zuckerberg’s testimony was as a public relations exercise, it did little to address the real problem. Facebook’s core business model is built around advertising, which means that it will always share its users’ data with advertisers. And, despite the company’s strenuous apologies, it is continuing to push the boundaries of privacy to ensure that its market dominance remains unchallenged. The latest front in that fight is facial recognition.
As The New York Times reported earlier this week, “more than a dozen privacy and consumer groups, and at least a few officials, argue that the company’s use of facial recognition has violated people’s privacy by not obtaining appropriate user consent.” Facebook’s facial recognition software scans the faces in photos uploaded to the social network and compares them against a database of “unique templates” of users’ faces in order to identify them. It does so without alerting the person whose face has been identified or obtaining their consent.
In Europe, which has much more stringent privacy regulations than the United States, Facebook sold its facial recognition software as a tool for privacy protection. “Face recognition technology allows us to help protect you from a stranger using your photo to impersonate you,” the company said. In the U.S., concerns about Facebook’s facial recognition software have been overshadowed by the Cambridge Analytica scandal, in which a Trump-affiliated firm was given access to the private information of tens of millions of Facebook users. Still, consumer groups have asked the Federal Trade Commission to investigate the company’s use of the technology.
This technology is not new, and its implementation—particularly in law enforcement situations—has been enormously controversial. But it does represent a new technological frontier, and every large internet player is gunning for supremacy. Amazon found itself in hot water in May when the ACLU criticized its Rekognition program for “automating mass surveillance” and collaborating with sheriffs’ departments. (Some Amazon employees recently wrote a letter to CEO Jeff Bezos decrying the use of Rekognition by law enforcement agencies.) Google was criticized for its seemingly banal “Face Match” app, which compares users’ faces with classic works of art; the company denied that it was building a database of faces or using the app to train facial recognition software. (That may be true, but Google could presumably still use the data to do either of those things at any time.)
Facebook insists that it does not share its facial recognition software with advertisers. A Facebook spokesperson told the Times that the company’s technology is used only for people who have “their facial recognition setting turned on” and that it deletes the facial data if it cannot find a match.
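To make the behavior the company describes concrete, here is a minimal, purely illustrative sketch of opt-in template matching in Python. The function names, the similarity threshold, and the toy “templates” are invented for illustration; Facebook has not published how its system actually works, and nothing here reflects its real pipeline.

```python
# Illustrative sketch only: a toy version of the opt-in template matching
# described in the article. All names and numbers are invented.
from dataclasses import dataclass
from math import sqrt


@dataclass
class UserTemplate:
    user_id: str
    opted_in: bool          # the "facial recognition setting" described above
    template: list[float]   # a numeric "unique template" derived from the face


def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0


def match_face(photo_embedding: list[float],
               templates: list[UserTemplate],
               threshold: float = 0.9) -> str | None:
    """Compare a face found in an uploaded photo against stored templates.

    Only users who have opted in are considered; if no template clears the
    threshold, the embedding is discarded (mirroring the company's claim
    that unmatched facial data is deleted).
    """
    best_id, best_score = None, threshold
    for t in templates:
        if not t.opted_in:
            continue
        score = cosine_similarity(photo_embedding, t.template)
        if score > best_score:
            best_id, best_score = t.user_id, score
    if best_id is None:
        del photo_embedding[:]  # no match: drop the facial data
    return best_id


# Example: one opted-in user, one opted-out user, and a face from a new photo.
templates = [
    UserTemplate("alice", True, [0.9, 0.1, 0.4]),
    UserTemplate("bob", False, [0.2, 0.8, 0.5]),
]
print(match_face([0.88, 0.12, 0.41], templates))  # -> "alice"
```

The only points the sketch is meant to capture are the ones Facebook itself emphasizes: opted-out users are never compared, and facial data that matches no one is discarded.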
But Facebook is presumably gleaning data from its software, which it can then use to help target advertising. Keeping that data in-house may mean it is safe from companies like Cambridge Analytica, but the software still constitutes a violation of privacy.
There are also questions about how aware users are that Facebook is scanning their faces, and whether it has done enough to obtain their consent. Facial recognition appears to have been enabled automatically earlier this year, with minimal notification; I am a very light Facebook user and only just noticed that it had been turned on for my profile.
This is in keeping with the defense that Zuckerberg made before Congress. There, he claimed that users had the tools to control their privacy; they just had to use them. He also suggested that, by and large, users like being surveilled, since the point of the surveillance is to improve their experience. Some might find ad retargeting (in which a product you briefly looked at follows you around for days) creepy, but Zuckerberg told Congress it was popular. With its facial recognition software, the company is making a similar argument, claiming that the criticism is overblown and that the technology is being used to improve the user experience.
What this tells us is that, despite the advertisements that Facebook has run on television and in The New York Times, and despite its CEO’s doe-eyed apology before Congress, the company simply isn’t changing. Terrified of losing ground to competitors, it’s going to continue its ambitious rollout of invasive technology, which it will sugarcoat by claiming that it has its users’ best interests at heart.