Washington, DC - The illusion of choice presented by Google and Facebook amounts to an ultimatum. Nearly a month and a half after the European Union’s General Data Protection Regulation went into effect, it has become clear how tech giants like Facebook and Google are handling the new rules. Lawsuits filed by Austrian privacy advocate Max Schrems and a report by the Norwegian Consumer Council are spelling out the ways the two tech giants are manipulating their users to get around the regulation.
Schrems has filed four lawsuits against Facebook and Google, totaling $8.8 billion, that claim a number of abuses related to data minimization and take-it-or-leave-it privacy policies. Meanwhile, “Deceived by Design” is the name of the report put together by the Norwegian Consumer Council, or Forbrukerrådet (good luck with that pronunciation). Its 44 pages go in depth on the ways that Facebook and Google in particular are using deceptive settings and dark patterns to work around the GDPR.
Now, before we hash this out, let’s take a second to examine the “why?” part of the question before we get to the “how?”
Why would Google and Facebook want to get around GDPR?
Let’s start with what Google and Facebook actually are: ad networks masquerading as a search engine and a social media platform, respectively. You can test this just by looking at the breakdown of revenues; neither company makes much money off the activity it serves up as its primary function. They make money selling ads. Google had around $80 billion in ad revenue in 2016. Facebook is a distant second in market share, but together the two companies account for 56.8% of US digital advertising. They rank first and second worldwide as well.
The reason that Google and Facebook can make such a killing selling ads is that they have collected enough user data to accurately target ads to the right people. (For the sake of specificity, when I refer to Google and Facebook I am also referring to the brands that fall under their umbrellas, like Instagram and YouTube.)
You can actually see how much data these companies have compiled on you if you really want to, but failing that, think about it this way: they have likely saved everything you have ever typed, clicked, hovered over, paused on or navigated to – everything – since the day you started using the service. Oftentimes they even know which website you went to when you left.
The Cambridge Analytica scandal started to pull back the curtain on what’s really going on with Facebook’s data collection policies, but it wasn’t a full reveal. And by all accounts, Google’s data collection practices are even more involved.
So why would Google and Facebook regard the GDPR in an almost adversarial way? Well, almost none of what these two companies do to collect user data is going to fly anymore. We’ll get to the specifics in a moment, but from a high-level perspective the GDPR makes it harder for Google and Facebook to accomplish their primary business objective, which is to sell targeted ads. After all, it’s all that data that makes Facebook and Google a practical duopoly in the ad market.
Think about it: pretty much everyone uses Google and Facebook. Try pulling up Bing the next time someone asks you to search for something. Seriously, try it. Google and Facebook are so ubiquitous that in many cases they aren’t just on our smartphones; they are the undergirding of the smartphone itself. They collect millions of data points on millions of people.
Even on people that don’t use their services.
Plenty of people like to get snarky and say, “oh well that’s why I don’t use Facebook.”
Guess what: neither do my parents, but if I go to upload a family photograph, Facebook instantly recognizes them both, as well as my three-year-old son, who incidentally is also not on Facebook. Many critics call these “shadow profiles.” Facebook avoids using that term but does admit to the practice of tracking non-users. Google does this, too.
The bottom line is that Facebook and Google have achieved their place atop the ad market by perfecting data collection to the point where anyone, with anything to sell, can find someone who will buy it using the duopoly’s ad targeting.
GDPR threatens to upend all of that.
Deceived by Design
Let’s get back to the report from the Norwegian Consumer Council, or Forbrukerrådet, ominously titled “Deceived by Design.” The report was limited in scope to the settings and user interface (UI) provided by Google, Facebook and Windows 10. It specifically highlights the ways in which they are designed to funnel users towards the desired configuration — one that is not privacy-centered.
The findings include privacy-intrusive default settings, misleading wording, giving users an illusion of control, hiding away privacy-friendly choices, take-it-or-leave-it choices, and choice architectures where choosing the privacy-friendly option requires more effort from users.
The idea behind the GDPR is to give individual users control over their personal data. This includes notifying users when data is being collected, telling them what the data will be used for and sometimes even making the data portable or deletable. None of these things mesh with Google or Facebook’s MO.
The Forbrukerrådet report lays out a couple of psychological tricks that a UI can play on users to help “nudge” them in the intended direction. In fact, “nudging” is one of the terms it takes the time to spell out. Nudging can be used exploitatively to achieve an intended result: examples include obscuring the full price of a product, using confusing language, or designing counter-intuitively to buck expectations.
When it’s done deliberately to mislead, it’s sometimes known as “dark patterns,” which the report describes as:
“…features of interface design crafted to trick users into doing things that they might not want to do, but which benefit the business in question.”
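To make the mechanics concrete, here is a minimal sketch of the kind of choice architecture the report describes. Everything in it is hypothetical (the dialog copy, the option labels, the click counts); it is meant only to show how prominence and effort can be tilted toward the outcome the platform wants:

```typescript
// Hypothetical consent dialog illustrating the choice architecture the
// report describes: the platform-preferred path is one prominent click,
// the privacy-friendly path is visually muted and several screens deep.
interface DialogOption {
  label: string;
  style: "primary" | "muted"; // "primary" = large, colored, eye-catching
  clicksToComplete: number;   // total effort required to finish this path
}

const consentDialog: DialogOption[] = [
  { label: "Accept and continue", style: "primary", clicksToComplete: 1 },
  { label: "Manage data settings", style: "muted", clicksToComplete: 6 },
];

// The "nudge": most users take the cheapest, most visible path.
const easiest = [...consentDialog].sort(
  (a, b) => a.clicksToComplete - b.clicksToComplete
)[0];
console.log(`Most users will click: "${easiest.label}"`);
```

Nothing here removes a choice; the design simply makes one option the path of least resistance.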
This is especially problematic when companies like Google and Facebook do it, because users generally trust those services. It’s not unlike how Chrome’s connection-security UI, which marked HTTPS websites as “Secure,” has had the unintended consequence of helping criminals phish users.
[I]f users trust the service provider, many will assume that the service provider knows what is best for the user. This, or a suspicion that tampering with default settings might remove important functionality, may affect the tendency to leave default settings alone.
The report examines dozens of examples from all three companies, paying special attention to the way they may guide or in some cases even mislead users into selecting the optimal setting for the platform — not for their own privacy.
Among the things it found:
- Privacy-friendly choices that are hidden from users
- Intrusive default settings that require long, involved processes in order to change them to more privacy-friendly ones
- Privacy settings being entirely obscured
- Pop-ups that pressure users to make a decision while omitting key information
- No option to postpone some privacy decisions
- Threats of losing functionality or an entire account if certain options aren’t chosen
In other cases, users were given the illusion of control when, in reality, very little exists on the user side.
“Facebook gives the user an impression of control over use of third party data to show ads, while it turns out that the control is much more limited than it initially appears… And Google’s privacy dashboard promises to let the user easily delete data, but the dashboard turns out to be difficult to navigate, more resembling a maze than a tool for user control.”
Five Kinds of Dark Patterns
The Norwegian Consumer Council highlighted five different categories of “dark patterns” that were present during its research:
- Default Settings
- Ease
- Framing
- Rewards & Punishment
- Forced Actions
To create a baseline to judge against, Forbrukerrådet ran a pilot analysis in March and April of 2018, then ran the same analysis following the GDPR deadline. It used both personal and dummy accounts and analyzed Google and Facebook properties in their desktop and mobile forms. While we aren’t going to get into every example laid out in the report (though you can read them for yourself), we will take an overview of each category.
Default Settings
The vast majority of people never change their default settings. In fact, a study conducted by Jared Spool of User Interface Engineering found that fewer than 5% of users change their settings at all. This can be exploited to such a degree that the GDPR — a document that is rarely credited for its specificity — made it a point to discuss default settings in Article 25:
The controller shall implement appropriate technical and organisational measures for ensuring that, by default, only personal data which are necessary for each specific purpose of the processing are processed. That obligation applies to the amount of personal data collected, the extent of their processing, the period of their storage and their accessibility. In particular, such measures shall ensure that by default personal data are not made accessible without the individual’s intervention to an indefinite number of natural persons.
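Read as an engineering requirement, that passage means the defaults themselves must be the privacy-friendly ones. Here is a minimal sketch of what “data protection by default” could look like in practice; the setting names are illustrative, not taken from either company’s real configuration:

```typescript
// A settings object honoring "data protection by default" (GDPR Article 25):
// every non-essential processing flag starts off, and turning one on
// requires an explicit act by the user. Setting names are hypothetical.
interface PrivacySettings {
  adPersonalization: boolean;
  faceRecognition: boolean;
  locationHistory: boolean;
  shareWithPartners: boolean;
}

// Compliant defaults: nothing beyond what the service strictly needs.
const defaults: PrivacySettings = {
  adPersonalization: false,
  faceRecognition: false,
  locationHistory: false,
  shareWithPartners: false,
};

// Opting in is an explicit, auditable user action, never a pre-checked box.
function optIn(settings: PrivacySettings, key: keyof PrivacySettings): PrivacySettings {
  const updated = { ...settings };
  updated[key] = true;
  console.log(`User explicitly enabled: ${key}`);
  return updated;
}
```

What the report found is the mirror image of this: the flags start on, and the effort falls on the user who wants them off.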
The way the GDPR is written, it wants these companies to ask for consent before sharing your data with third parties and targeting ads at you. Facebook and Google clearly ignored that.
For example, Facebook presents users with a three-part GDPR pop-up that invites them to manage their data settings in order to adjust targeted advertising. The pop-up is designed to route users toward clicking “accept and continue,” which leaves the setting on by default, rather than navigating to a separate screen to adjust the settings manually.
Google acquits itself a little better on this one. The user still has to actively go into the settings to disable ad personalization and the sharing of web and app activity, but the settings that store location history, device information, and voice and audio activity are disabled by default.
Facebook and Google both have default settings preselected to the least privacy-friendly options… Additionally, both services hide away or obscure pre-selected settings so that users who simply click through the “agree” or “Accept” buttons will never see the settings…
Both are also accused of using deceptive wording that helps obscure what clicking “accept” actually means. We’ll discuss this a bit later.
Ease of Use
Both Facebook and Google have designed their platforms to default to the settings most advantageous to them — not the user. The complement to this design decision is creating an easier path to the desired result than to its more privacy-friendly counterpart. Ease of use is a broad category, but it covers things like making some paths more intuitive than others, or using psychologically pleasing colors to help guide the user.
That may sound kind of silly on its face, but consider that Google once tested 41 different shades of blue to determine which received the best user response, and it becomes clear that these seemingly inane design choices have been highly researched and refined.
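For a sense of how that kind of optimization works mechanically, here is a sketch of the sort of deterministic A/B bucketing such an experiment might use. The hash function and the shade list are illustrative assumptions, not Google’s actual implementation:

```typescript
// Hypothetical deterministic A/B bucketing for a color experiment:
// each user is consistently assigned one of 41 candidate shades, and the
// experiment measures which shade draws the most clicks.
const SHADES: string[] = Array.from(
  { length: 41 },
  (_, i) => `hsl(216, 100%, ${30 + i}%)` // 41 lightness steps of one blue hue
);

// Simple string hash so the same user always lands in the same bucket.
function hashUserId(userId: string): number {
  let h = 0;
  for (const ch of userId) {
    h = (h * 31 + ch.charCodeAt(0)) >>> 0; // keep it an unsigned 32-bit int
  }
  return h;
}

function shadeFor(userId: string): string {
  return SHADES[hashUserId(userId) % SHADES.length];
}

console.log(shadeFor("user-12345")); // stable shade assignment for this user
```

At the scale these platforms operate, even a fraction-of-a-percent difference in click-through between shades is measurable, which is why such details get tuned so aggressively.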
Let’s go back to the Facebook example from the last section. Look at how it’s designed.
Users who wanted to limit the data Facebook collects and how it is used had to first click a grey box labelled “Manage Data Settings,” where they were led through a long series of clicks in order to turn off “Ads based on data from partners” and the use of face recognition technologies. This path was, in other words, considerably longer.
Google isn’t much better, constructing a similar pop-up that used a fully colored blue button to invite users to click through and accept the defaults, while the alternative required “clicking through a number of submenus, some of which required the user to leave the popup and move into Google’s privacy dashboard.”
One of the key principles of designing a good user experience, a golden rule if you will, is never to interrupt said user experience. Ergo, creating privacy settings that redirect the user or somehow break the experience is a great way to make sure that users ignore those options.
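The asymmetry is easy to express as a cost model. This sketch is purely illustrative (the click counts are invented, not measurements from the report), but it captures why the interrupted path loses:

```typescript
// Illustrative cost model for the two paths a consent popup offers.
// The click counts are invented; the point is the asymmetry, and that
// leaving the current flow adds friction beyond the raw click count.
interface ConsentPath {
  name: string;
  clicks: number;
  leavesCurrentFlow: boolean; // redirects to another screen or dashboard?
}

const acceptDefaults: ConsentPath = {
  name: "Accept and continue", clicks: 1, leavesCurrentFlow: false,
};
const limitCollection: ConsentPath = {
  name: "Manage data settings", clicks: 6, leavesCurrentFlow: true,
};

// Treat breaking the user's flow as costing several extra clicks.
function frictionScore(path: ConsentPath): number {
  return path.clicks + (path.leavesCurrentFlow ? 5 : 0);
}

console.log(frictionScore(acceptDefaults));  // 1
console.log(frictionScore(limitCollection)); // 11
```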
Framing (Word Choice)
For as long as marketing has been a human concept, much attention has been paid to the wording that’s used. In some mediums, it’s all that matters. Google and Facebook are keenly aware of this and it’s obvious in the ways that both focus on the positive aspects of their preferred choices while glossing over any negative consequences, in an effort to get users to comply with what they want.
Let’s talk about facial recognition, because it serves as a good microcosm for the somewhat cavalier way these two tech giants are skirting GDPR compliance.
In 2012, Facebook’s facial recognition technology was disabled in Europe over data protection issues. It was rolled out again in May in conjunction with the GDPR. Now, facial recognition technology deals in biometric data, which is classified as a special category of personal data under the GDPR and requires specific consent from users. Here’s how Facebook framed it:
Upon clicking through the Facebook GDPR popup, users were asked whether they consent to the use of facial recognition technologies. The technology is, according to the popup, used for purposes “such as help protect you from strangers using your photo” and “tell people with visual impairments who’s in a photo or video.” The next screen informed the user “if you keep face recognition turned off, we won’t be able to use this technology if a stranger uses your photo to impersonate you. If someone uses a screen reader, they won’t be told when you’re in a photo unless you’re tagged.” This framing and wording nudged users towards a choice by presenting the alternative as ethically questionable or risky.
This isn’t a complete argument about the pros and cons of facial recognition. It doesn’t explain all the ways Facebook will use the data (for instance, to construct shadow profiles). Focusing only on the positives while almost completely ignoring the negatives guides the user toward the intended action.
Google is very similar in the way it presents its personalized advertising. Here’s an example of how it frames the decision to disable the ads:
All of the negative consequences of turning off ad personalization are presented here, but you won’t find any of the benefits. That’s because Google clearly doesn’t want it disabled.
Perhaps more troubling is the wording of, “You’ll no longer be able to block or mute some ads.” What does that even mean? First of all, what constitutes “some ads?” How are we categorizing that? By quantity? Subject matter? How much the advertiser paid to display the ad? And not being able to “block or mute” could mean a couple of things. One perfectly reasonable interpretation would be that, “if I disable this, I won’t be able to stop the noise from loud auto-play ads spilling over the walls of my cubicle, so I better not.”
You don’t need to be a legal scholar to figure out this is not how GDPR was intended to be implemented. This is not in the spirit of GDPR and this is not really even all that ethical.
Rewards & Punishments
At the outset of this article we mentioned the lawsuits by Austrian privacy advocate Max Schrems; those dovetail nicely with this section. Schrems’ issue with both Google and Facebook is two-fold. For one, he believes both companies collect more data than the minimum required to perform their services. Article 5(1)(c) of the GDPR states:
Personal data shall be: adequate, relevant and limited to what is necessary in relation to the purposes for which they are processed (‘data minimisation’);
Clearly neither Google nor Facebook is abiding by this. For instance, with Facebook you’re agreeing to use a social media network, not to be part of an ad-targeting database. Facebook is exceeding its mandate.
Schrems told the Financial Times that, “they totally know that it’s going to be a violation… They don’t even try to hide it.”
The second part of Schrems’ issue with the two companies stems from the take-it-or-leave-it approach they are applying to users attempting to exercise their data rights. Here is an example from Facebook that perfectly illustrates the decision foisted on users:
Schrems characterizes the decision this way:
“Facebook gave users the choice of deleting the account or pushing a button [to agree], that is blackmail, pure and simple.”
Looking back to the previous Google example, where it tells you that you won’t be able to “block or mute some ads,” the same concept is at play. Why is that a penalty? Those functions aren’t contingent upon ad personalization. This is just a penalty for the sake of nudging users towards the intended outcome.
There is a way to do this that isn’t nakedly unethical. From the Norwegian Consumer Council:
By contrast, at one point of the Windows 10 update, users were presented with a choice between sending “basic” or “full” diagnostic data to Microsoft. Both options included a disclaimer that “regardless of whether you select Basic or Full, your device will be equally secure and operate normally.”
It’s worth noting that Microsoft has a much different scope than Google and Facebook, a much less ad-driven one, but it’s still worth contrasting the way the three tech giants approached this.
Forced Actions
At first blush the idea of a forced action may seem silly, so let’s provide a little context. Say you’re on the go and trying to use your phone for something; getting prompted to take an action or consent to something on the spot is going to (a) interrupt your experience and (b) potentially rush the decision, since you just want to get back to whatever you pulled the phone out for.
Most of the time this comes down to timing. When users first received the Facebook GDPR popup, they had two options: they could click “Get started,” or they could click the X in the corner to close the popup. Doing the latter resulted in another popup that stated, “You need to finish reviewing these settings to continue using Facebook.” This gives the impression that the user will be blocked from using Facebook until the settings have been reviewed.
That’s actually not even true; users can delay the decision by exiting the prompt. But for most users — who are just trying to post a selfie or upload a snarky status — it creates a situation where the only obvious choice appears to be consenting and continuing on to the app.
That’s not even the most disingenuous thing Facebook does, not by a long shot. Here’s the prompt that desktop users were given:
In addition to presenting this prompt as if there is no delaying it and a decision must be made right now, Facebook also makes it seem like there are notifications being obscured by the prompt, the idea being that you need to review these settings before you can see them.
Those notifications are fake.
That’s downright duplicitous. The ethics of some of the earlier examples can be defended a bit; this is not a defensible practice. It amounts to outright misrepresentation and manipulation. And while the Norwegian report may be somewhat agnostic about whether this was intentional, this is the same company that was, until this year, selling targeted ads to Holocaust deniers and ethnonationalists. Let’s not pretend this is something it would suddenly be scrupulous about.
Google is not nearly as ethically bankrupt as Facebook, but it does engage in similar practices. Here’s an example of a desktop notification:
This is a bit friendlier, but still requires a user to take an action before continuing to use the service, essentially forcing the action then and there.
Will the EU penalize Google and Facebook?
Undoubtedly. The European Union has already shown a willingness to fine Google billions of dollars, and Facebook is hardly winning its PR battle or earning any of the benefit of the doubt that would come with it.
The bigger question is whether these companies will care. While a $2.7 billion fine, like the one Google was recently slapped with, would be a potentially fatal crisis for most companies, Facebook, Google and others of their ilk may simply be too big to truly regulate at this point. The GDPR caps its heaviest fines at €20 million or 4% of global annual turnover, whichever is greater, so even the worst case is an absorbable cost for companies this size. That affords them more leeway to test the regulation’s limits.
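A quick back-of-the-envelope sketch of that fine ceiling, per Article 83(5); the turnover figure below is a round hypothetical, not either company’s reported revenue:

```typescript
// GDPR Article 83(5): the heaviest fines are capped at EUR 20 million or
// 4% of total worldwide annual turnover, whichever is greater.
function maxGdprFineEur(annualTurnoverEur: number): number {
  return Math.max(20_000_000, 0.04 * annualTurnoverEur);
}

// Round hypothetical turnover of EUR 100 billion: the worst-case fine is
// EUR 4 billion. Existential for most companies, absorbable for a giant.
console.log(maxGdprFineEur(100_000_000_000)); // 4000000000
```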
Regardless of what happens in reality, though, it’s clear that neither of these two companies is acting in the spirit of the GDPR. It’s also fair to argue that neither Google nor Facebook really gives a damn about user privacy. They might if users’ concerns over their privacy drove a mass exodus from their platforms and affected their bottom line, but that looks unlikely. They are just too ubiquitous at this point.
The presumption might be that you can trust Google and Facebook to protect your data and your privacy. But by all indications, that could prove to be naive. It’s up to each and every one of us to pay attention to our settings and try to manage our privacy as best we can. Because nobody else is going to do it for us.
And who knows, maybe at the end of the day most people will decide that it’s all just too much to worry about and click through; they’ll just opt for the path of least resistance and keep using these services.
That certainly seems to be what Google and Facebook are hoping for.