Google got some good press a few weeks ago when it announced in a blog post that it would be moving forward with its plans to remove third-party cookies from the Chrome browser. The move had been announced early last year as part of the company’s Privacy Sandbox initiative, but now Google has clarified that it didn’t intend to replace those cookies with some equivalent, substitute technology. Other browsers, including Safari and Firefox, already block third-party trackers, but given that Chrome is the most popular browser in the world, by far, with a market share in the 60-something percent range, the news was widely billed as a big step toward the end of letting companies target ads by tracking people around the internet. “Google plans to stop selling ads based on individuals’ browsing across multiple websites” is how The Wall Street Journal put it.
This news, however, met with a fair bit of skepticism—and not only because Google, like other tech giants, has not always honored similar commitments in the past. Even on its face, Google’s plan is hardly a sea change for privacy. It isn’t even true, when you dig into it, that Chrome will no longer allow ads based on people’s browsing habits. Google’s announcement is a classic example of what you might call privacy theater: While marketed as a step forward for consumer privacy, it does very little to change the underlying dynamics of an industry built on surveillance-based behavioral advertising.
To understand why, you have to look at what the company is actually planning. This is difficult, because Google’s Privacy Sandbox contains many proposals, and the company hasn’t confirmed which ones will be implemented, or precisely how. The proposals are also highly technical and leave many questions unresolved. I spoke with several online privacy experts, and their interpretations varied. Still, the basic outlines are clear enough.
The most prominent proposal is something called Federated Learning of Cohorts, or FLoC. (It’s pronounced “flock.” All the Google proposals, somewhat charmingly, have bird-themed names.) Under this proposal, instead of letting anyone track you from site to site, Chrome will do the tracking itself. Then it will sort you into a small group, or cohort, of similar users based on common interests. When you visit a new website, in theory, advertisers won’t see you, Jane C. Doe; they’ll just see whatever cohort you belong to, say, thirtysomething unmarried white women who have an interest in Bluetooth headphones. As David Temkin, Google’s director of product management for ads privacy and trust, puts it in the blog post, FLoC will allow Chrome to “hide individuals within large crowds of people with common interests.” He touts the technology as a step toward “a future where there is no need to sacrifice relevant advertising and monetization in order to deliver a private and secure experience.”
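The mechanics are easier to see in miniature. Google’s FLoC explainer proposed deriving cohort IDs with a locality-sensitive hash (SimHash) of your browsing history, so that similar histories collapse into the same small number. The sketch below is a toy version of that idea, not Google’s actual implementation; the domain names and the 8-bit cohort space are invented for illustration.

```python
import hashlib

def simhash_cohort(visited_domains, bits=8):
    """Toy SimHash: hash each domain, tally signed bits, keep the sign
    pattern as a cohort ID. Similar histories tend to share an ID."""
    totals = [0] * bits
    for domain in visited_domains:
        h = int(hashlib.sha256(domain.encode()).hexdigest(), 16)
        for i in range(bits):
            totals[i] += 1 if (h >> i) & 1 else -1
    # the sign of each tally becomes one bit of the cohort ID
    return sum(1 << i for i in range(bits) if totals[i] > 0)

history = {"news.example", "headphones.example", "recipes.example"}
cohort_id = simhash_cohort(history)  # an integer in 0..255, sent to sites
```

The point of the construction is that no site ever sees the history itself, only the cohort number — which is exactly why the privacy question shifts to what that number reveals.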
Privacy experts outside Google have raised questions about precisely how secure the experience will be. Writing for the Electronic Frontier Foundation, Bennett Cyphers notes that splitting users into small cohorts could actually make it easier to “fingerprint” them—using information about someone’s browser or device to create a stable identifier for that person. As Cyphers points out, fingerprinting requires pulling together enough information to distinguish one user from everyone else. If websites already know someone is a member of a small cohort, they only need to distinguish them from the rest of that cohort. Google says it will develop ways to prevent fingerprinting but has not detailed its plans.
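Cyphers’ fingerprinting concern comes down to simple information arithmetic: singling out one person among N candidates requires about log2(N) bits of identifying information, so shrinking the candidate pool from the whole web to one cohort shrinks the fingerprinting job too. The numbers below (2 billion web users, a 5,000-person cohort) are illustrative assumptions, not figures from Google’s proposal.

```python
import math

def bits_needed(population):
    """Bits of identifying information needed to single out one user."""
    return math.log2(population)

web_users = 2_000_000_000   # assumed size of the open web's audience
cohort_size = 5_000         # a hypothetical FLoC cohort

anonymous_cost = bits_needed(web_users)   # ~31 bits to pick you out of everyone
cohort_cost = bits_needed(cohort_size)    # ~12 bits once your cohort is known
```

A browser quirk here, a screen resolution there — each leaks a few bits, and with the cohort already narrowing the field, far fewer such leaks are needed to pin down an individual.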
The FLoC approach also could end up revealing more information about users rather than less. Currently, when you arrive on a site, whoever wants to track and target you has to match you up against data in a third-party database to figure out your likely interests and demographic information. Under the cohort model, Chrome will simply present that information as soon as you show up: “This person is part of a cohort that lives in Nashville and shops for sex toys.” As long as a site has some other way of knowing your identity, such as a login or a familiar IP address, it should be able to match up that “anonymous” information to an individual file on you.
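To make the matching concrete, here is a toy sketch of what a site with a login can do: the moment a known user arrives, the “anonymous” cohort label simply gets filed into her existing record. The user data and cohort value are invented for illustration.

```python
# A site's own first-party database, keyed by login. All data invented.
user_profiles = {"jane@example.com": {"visits": 12, "cohorts_seen": set()}}

def record_visit(login_email, cohort_id):
    """Attach the browser-supplied cohort label to a known user's file."""
    profile = user_profiles[login_email]
    profile["visits"] += 1
    # the label meant to hide the user instead enriches her profile
    profile["cohorts_seen"].add(cohort_id)
    return profile

record_visit("jane@example.com", 0x2A)  # cohort 42, say
```

Nothing in the cohort model prevents this join; it only requires that the site already know who you are by some other route.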
Google may well come up with ways to prevent these potential leakages. (It has said, for example, that it intends to devise ways to stop IP addresses from being used to target people.) Even so, it would be a mistake to say that these changes amount to the end of tracking and targeting within Chrome. Getting rid of third-party cookies does one thing: It gets rid of third-party cookies. Google’s new frameworks are designed to leave everything else about microtargeted advertising in place. Under the FLoC proposal, users are still tracked; it’s just Chrome doing the tracking. That information is still used to create behavioral profiles; they’re just at the cohort level rather than the individual level. And those profiles are still used to automatically target advertising; it’s just that fewer third parties will get access to them. That sneaker ad will continue to follow you around the internet.
“If your reference point is what Chrome allows now, this is better,” said Peter Snyder, a senior privacy researcher at Brave, a privacy-focused browser. “But if your reference point is what any other browser allows, or what any reasonable definition of privacy really means, this is awful. It solves an extremely narrow problem: Right now, the way behavioral advertising is done is that you build a list of every site I visit and then target ads based on that. That is a privacy harm, but it isn’t the definition of privacy itself.”
Google’s planned changes address what I have called the Peeping Tom theory of privacy, which boils the concept down to the right to not have random strangers snooping on you. This is a totally inadequate definition, because it overlooks the collective dimension of digital privacy. Even if you, personally, avoid being tracked, you still live with the consequences of an economy built on monitoring people’s behavior and using it to target them with ads. The dominant model of advertising undermines quality journalism—an important pillar of democratic societies—by allowing the information logged about a reader to be used to target them more cheaply when they go elsewhere, subsidizing low-value and even fraudulent media. It also helps scammers and liars reach users across the web with little oversight, since everything is automated. It makes discriminatory practices extremely difficult to prevent or even detect, since discriminating between users based on their identities is built into the basic premise of microtargeting. And it supercharges the need for social media platforms to maximize user engagement, because more user attention translates both into richer data sets and more opportunities to target ads. This leads to curation and recommendation algorithms that favor polarizing and even false material, which has been shown time and again to be more engaging.
The Privacy Sandbox fixes none of these issues. Meanwhile, the non-Google ad tech industry is busy devising alternative techniques to track users that don’t rely on third-party cookies—including potentially more invasive methods, like tying users to their email addresses rather than the current version of advertising IDs, which at least can easily be reset.
“This doesn’t deal with the underlying problem of surveillance capitalism,” said Ashkan Soltani, a privacy researcher and former chief technologist at the Federal Trade Commission. “It still incentivizes the exchange of personal information. It still incentivizes the harvesting of personal information. All of that is unchanged. The externalities around things like clickbait or around things like controversial content to generate more clicks and views, misinformation—all of those questions are still present and not affected.”
What would it look like for digital advertising to change in ways that took more of these issues into account? Alternate models exist. Contextual advertising allows ads to be targeted based on the content that a user is reading, listening to, or watching, without knowing anything about the user themselves. (In the Netherlands, the main public broadcaster completely scrapped microtargeted ads in favor of a contextual system, with improved results for both advertisers and publishers.) First-party targeting lets individual publishers serve ads based only on their direct interactions with users, meaning what you reveal about yourself on that site stays on that site.
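The distinction is worth making precise: a contextual system takes only the page as input, never the reader. A minimal sketch, with an invented ad inventory and keyword sets, shows how little machinery that requires.

```python
# Toy contextual matcher: scores ads against the words on the page,
# using no information about the reader at all. Inventory is invented.
ADS = {
    "headphone-ad": {"bluetooth", "headphones", "audio", "music"},
    "cookware-ad": {"recipe", "kitchen", "cooking", "pan"},
}

def pick_ad(page_text):
    words = set(page_text.lower().split())
    # choose the ad whose keyword set overlaps the page the most
    return max(ADS, key=lambda ad: len(ADS[ad] & words))

chosen = pick_ad("a review of new bluetooth headphones for music lovers")
```

Real contextual systems are far more sophisticated, but they share this property: the function’s only argument is the content, so there is no behavioral profile to leak.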
Google has said that it will support contextual and first-party targeting models, which could be promising. But, again, the devil will be in the details. A week after its blog post announcing its commitment to privacy, the company put up another post, to much less fanfare, announcing that it would expand the use of “publisher-provided identifiers” into ad auctions. Translation: Advertisers will be able to target individual users based on the data that a given website has gathered on them—and that personal data will, to some degree, still be sloshing around the digital ad marketplace.
Then there’s the fact that the biggest first party on the internet is Google itself. Between searching on Google.com, watching videos on YouTube, using Android phones, and being logged into Gmail, Google Drive, or indeed the Chrome browser itself, users give Google a tremendous amount of data directly. The Privacy Sandbox leaves all that undisturbed. In fact, by making its dominant browser less hospitable to third-party trackers but keeping its own data spigots flowing, the company will increase its already gargantuan advantage in the advertising market. That prospect recently led the coalition of state attorneys general led by Texas to amend their antitrust complaint, filed in December, to add the allegation that the Privacy Sandbox is anticompetitive, as The Verge reported this week. This is not the first time questions of privacy and monopoly power have been jumbled together. After Apple announced that it would force all apps in its App Store to stop tracking users by default, Facebook, which relies on in-app tracking to fuel its own ad-targeting juggernaut, accused the company of behaving anti-competitively. The difference in Google’s case is that, unlike Apple, it not only controls a key platform for advertising but also competes on that platform as the dominant player.
This doesn’t mean any steps Google takes to restrict third-party tracking are inherently suspect. What’s dangerous is treating the end of third-party cookies as privacy itself, rather than an incremental shift that comes with its own set of trade-offs. This may be a familiar refrain at this point, but ultimately it’s going to be up to the government, not self-interested ad tech companies, to implement a regulatory framework that tackles the broad, collective dimensions of the digital privacy problem. Letting only Google know my secrets might be better than exposing myself to the whole ad tech industry, but not by a whole lot.