The ruling says the website owner illegally shared the user’s IP address with Google. AFAIK, this is an incorrect interpretation of events. The website merely tells the user’s browser that the content is intended to be displayed using a font that, if not installed on the user’s computer, can be downloaded from Google’s server. It is the user’s browser that initiates a request to Google’s server. A request by the website itself to Google sharing the user’s IP address never actually occurs.
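For reference, the mechanism being debated is typically a single line of markup; the URL below is the standard Google Fonts embed endpoint (the font family is chosen for illustration):

```html
<!-- Typical Google Fonts embed. The site's server never contacts Google;
     the visitor's browser resolves and fetches this URL itself, exposing
     the visitor's IP address to Google in the process. -->
<link rel="stylesheet"
      href="https://fonts.googleapis.com/css2?family=Roboto&display=swap">
```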
While you are somewhat correct in that the browser sends the request, it is not a 'can be downloaded' but rather an imperative: 'get that font from that server'.
In the end, the W3C standards define that browsers execute the commands they receive from the server, and in this case the server tells the browser to download the font. So the site owner configures his website in a way that instructs the browser to share the IP address.
This is the essence of CDNs, though. Every offsite CDN is subject to this same ruling, meaning any developer trying to use a third-party CDN for something as simple as loading jQuery is subject to this. For example, on load, https://evanandkatelyn.com/ grabs stuff from: twitch.tv (embedded player), youtube.com (embedded player), facebook.com (likely just a like button), and what I assume are several wordpress CDNs (c0.wp.com, i0.wp.com, s.w.org, ssl.p.jwpcdn.com).
If this ruling is upheld, either (a) browsers need to immediately stop interpreting these commands, instead providing user prompts for _each offsite load event_, or (b) a very large swath of websites are all open to the same legal issue. As a small example, the Aesop wine company site (https://www.aesopwines.com/), made with Squarespace, uses typekit, squarespace, and google CDN loads. They're subject to the same ruling, right? And so on, and so on...
> browsers need to immediately stop interpreting these commands, instead providing user prompts for _each offsite load event_
No, why should they? The ruling makes the (pretty realistic) assumption that users are in no position to decide about individual load requests. Therefore, those are the responsibility of the site author.
This way of interpreting the events seems most consistent with real-world usage. Meanwhile, pretending the user is responsible for vetting every individual network request seems like a legal fiction - and there is no reason why it should be applied.
By this argument, then, should third party requests always be blocked? If the user "is in no position to decide", that means that the only way to avoid potential liability would be to load everything from the same domain, right? No CDNs, no off-site scripts, no off-site embeds, ever. Seems a bit extreme to me.
To my knowledge, there are other avenues besides consent under which the GDPR allows data exchange with third parties - in particular if such an exchange is essential for fulfilling the service. The point here, though, was that the data exchange was not "essential" because you could simply self-host the fonts or proxy the request through your own servers.
But yes, it would seem to me that this interpretation of the law sort of communicates that third-party requests should be a measure of last resort. That would definitely cause a shift in current web dev practices, but I'm not sure it's a bad thing.
> avoid potential liability
I think "potential" liability is an odd criterion. Any law is a risk of potential liability. If the effort to find out if a law actually applies to you is already too much, then I guess anything less than anarcho-capitalism would be unacceptable.
Do you, as the website operator, have the right to copy and serve these fonts to your visitors? (Actual question; my guess is that you don't according to Google Fonts, but could be wrong.)
> proxy the request through your own servers
Isn't this worse? Assume that your visitor does not want Google contacted at all as part of their visit; isn't, then, the potential leak of an IP address simply a side effect? The website is still leaking timing of when a visitor accessed the site, potentially their usage patterns...
Personally, I think this is a bit of an absurd argument... I think, at most, consent should be enough for third party requests. I was mostly responding to GP's claim that the user can't reasonably consent to such use.
> Do you, as the website operator, have the right to copy and serve these fonts to your visitors?
Good question. I have no idea, but apparently the court thinks self-hosting is ok in this case.
> Isn't this worse? Assume that your visitor does not want Google contacted at all as part of their visit; isn't, then, the potential leak of an IP address simply a side effect? The website is still leaking timing of when a visitor accessed the site, potentially their usage patterns...
There is a specific set of data which is defined as "personally identifiable information". IP address is part of that set, but I don't think timing information or anonymous usage data are. So the question in this case is specifically "does the request leak information defined as PII?". You can prevent that effectively with proxying: Google would only see the IP address of your proxy but not the address of the user.
> Assume that your visitor does not want Google contacted at all as part of their visit
I don't think a user can enforce this under the GDPR. They only have a right to block you from sending their PII to Google, not to block you from talking to Google at all.
> Do you, as the website operator, have the right to copy and serve these fonts to your visitors?
All the fonts on Google Fonts are open source. When GDPR came into force in 2018 I downloaded all the fonts I needed, checked their licenses, and uploaded them on my servers along with necessary notices as required by the licenses.
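As a sketch, self-hosting then comes down to a standard `@font-face` rule pointing at your own domain (the file path and name below are illustrative, not a fixed convention):

```html
<style>
  /* Self-hosted variant: the font file is served from the site's own
     domain, so the visitor's browser never contacts Google's servers. */
  @font-face {
    font-family: "Roboto";
    src: url("/fonts/roboto-regular.woff2") format("woff2");
    font-display: swap; /* show fallback text while the font loads */
  }
  body { font-family: "Roboto", sans-serif; }
</style>
```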
The matter could also be sidestepped if the CDN were to offer a GDPR data processing agreement (DPA) and would make guarantees about the locations of servers. The free public CDNs understandably don't do this, and it seems Google Fonts is not covered by the Google Cloud DPA.
At least in Germany (possibly also other European countries) a design pattern only loading Facebook/Twitter/Youtube/... content with explicit user consent is nowadays pretty common.
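A minimal sketch of that click-to-load pattern, assuming a placeholder element and a hypothetical video ID:

```html
<!-- Hypothetical consent-gated embed: no request leaves the browser
     until the visitor explicitly clicks. -->
<div id="video-placeholder">
  <button onclick="loadVideo()">
    Load video (this will connect your browser to youtube.com)
  </button>
</div>
<script>
  function loadVideo() {
    var frame = document.createElement("iframe");
    frame.width = 560;
    frame.height = 315;
    frame.src = "https://www.youtube-nocookie.com/embed/VIDEO_ID"; // placeholder ID
    document.getElementById("video-placeholder").replaceWith(frame);
  }
</script>
```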
But shouldn't the site owner pay for a CDN and host the resources themselves? In which case the CDN wouldn't own the IP information. I think the problem here is that the website author is getting free bandwidth in exchange for their users' IP addresses, which in this example Google can then use for tracking and other purposes.
> But shouldn't the site owner pay for a CDN and host the resources themselves?
Not sure I understand this. Whether you pay for a CDN or not, you'll still be guilty of sending the user's browser to an external domain without consent (because it happens before the page is fully loaded). The only GDPR-compliant solution seems to be self-hosting everything.
That's true, but the mitigation is that it would have been OK if the user had consented to this "data processing".
The court isn't ruling this sort of technology en bloc but says in its ruling that it is a problem because the user didn't consent to his personal data (IP address) being given to a third party (Google in this case).
Personally I have mixed feelings about this ruling too, because this sort of technical solution is widespread and an army of GDPR vigilantes has the potential to cripple large portions of the web by filing similar suits. Or we won't be able to access websites without having to go through entire multi-page EULAs and consent forms for any and all kinds of similar third-party technology embedding.
Law is a blunt tool and will have unintended consequences, unfortunately :(
A lot of websites won't serve addresses from Germany.
I've seen companies do that with just the GDPR cookie warning: it wasn't worth rewriting code and annoying non-EU people with the warning, so they detect the IP address and redirect to a page saying they don't serve that region.
Let's be honest, what have we gained from the cookie warning?
That is a minority and mostly only US-centric sites that are otherwise chock full of advertising/tracking technology - exactly what was GDPR meant to deal with.
However, GDPR and this type of ruling have EU-wide impact because of the single market (e.g. a French website can and does serve German customers too). Businesses (especially ones from the EU) can't afford not to comply or not to serve customers within the EU.
There is an important point to this ruling that shouldn't be omitted:
> Der Einsatz von Schriftartendiensten wie Google Fonts kann nicht auf Art. 6 Abs. 1 S.1 lit. f DSGVO gestützt werden, da der Einsatz der Schriftarten auch möglich ist, ohne dass eine Verbindung von Besuchern zu Google Servern hergestellt werden muss.
To roughly translate: One can use Google Fonts without forcing users to make a request to Google's servers (by downloading the fonts and serving them locally), so this use can't be justified under the GDPR's legitimate-interest clause, Art. 6(1)(f) (which allows sharing/using user data when it is necessary for functionality).
Which would most likely include CDNs, but a point could be made for things like YouTube and Twitch, where that isn't really possible/feasible.
Edit: One addition to the "necessary" part: Necessary for what the USER wants to do when visiting your site. Might be arguing semantics but this is law after all, which is all about semantics
Then get Squarespace to stop pinging random third parties on page load. The website owner is paying for Squarespace, so why is it loading the Google CDN (and Google trackers)?
It's loading fonts. So Squarespace needs to host those fonts, fine. But more to the point, it could be argued that even the Squarespace CDN is "different" from the actual website, so we need CDN shims that forward local domain requests to the CDNs and return the results. All to hide an IP address for downloading fonts. Moreover, "host it yourself" is easy if you're technically skilled, but very, very difficult if you aren't.
> And hosting a font file entails dumping it next to your index.html file and adding some very basic CSS. Not exactly difficult.
If you are a 60-year-old woodworker living in Appalachia trying to set up an online store to sell hand-carved flutes, this task is essentially impossible.
So they will have outsourced their website to some external entity that does possess the required technical knowledge. This required technical knowledge should include the ability to host a simple file.
The 60-year-old woodworker living in Appalachia will be relieved to know that browsers are able to display text in their online store without having to add any font files at all. If the 60-year-old woodworker living in Appalachia decides they absolutely must have a custom font on their website then self-hosting that font file is not any more impossible than adding the HTML/CSS required to fetch it from Google.
Caching was never mentioned as being a requirement. I'm only giving the most basic solution for achieving compliance with regards to hosting a font file.
Scope creeping, on a Sunday no less... where are we headed
IANAL, but I think CDNs would not be affected by the ruling, since they serve an important function.
Google Fonts was deemed illegal here since it's not necessary and you can easily provide a font in a privacy-preserving way.
Not a lawyer as well but I'm not sure about this. Let's use the "jQuery served by a CDN" example here: You can easily argue that using jQuery is necessary for your site to function but there is no real benefit to the user by doing this with a CDN when you could just ship jQuery from your own server. AFAIK the benefit of CDNs is largely nullified nowadays by browsers using a different cache for each primary domain anyways, so you can't even really point out a potential benefit for the user (faster load times) here.
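In concrete terms, the difference is a one-line change; the self-hosted path below is an assumed layout, not a fixed convention:

```html
<!-- CDN variant: the visitor's browser contacts a third-party host,
     sending it the visitor's IP address. -->
<script src="https://code.jquery.com/jquery-3.6.1.min.js"></script>

<!-- Self-hosted variant: the same file served from the site's own
     domain, so no third party sees the visitor's IP address. -->
<script src="/js/jquery-3.6.1.min.js"></script>
```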
> the w3c standards define, that browsers execute the commands they receive from the server
I'm no expert in the matter, but this seems a little convoluted to me. The server does not issue instructions, per se; it returns a declarative text/binary response that describes the structure of the website. It is then up to the browser (which the user installed, chooses to use, and may configure, possibly even configure to leak their data, even if spec-adhering rendering of the webpage should not) to attempt to understand the document and retrieve any other resources that may help display the content correctly.
On the other hand, if one were to send CPU instructions back to the user, I guess it's also their choice to execute them...? Also, it's not possible to determine which resources are for display purposes (fonts) and which are for tracking purposes; the browser has to blindly retrieve the resource, so websites have a certain responsibility to issue privacy-respecting "instructions".
I'm trying to argue both sides here. I still believe that the user voluntarily chooses to use the browser, visit the webpage, and therefore parse the document and initiate any subsequent requests that the document proposes. On the other hand, this is beyond most people; they just want to view a frickin' website. So perhaps the lives of web developers should be made harder to make the lives of the average Joe, who is not an IT expert, a little easier? The architecture of the web is inherently not privacy-respecting: to save bandwidth (and for simplicity's sake), we only send fragments and let the browser choose what else it needs, which can be tracked.
It's like walking in a park. You choose to show your face to people, we've come to just accept the fact that by the laws of nature, we cannot prevent other people from seeing our face (unless you use a mask, but then you make them very uneasy), we leak data that others can remember and use to identify us later.
Try making this argument with compiled code instead of HTML:
"The company included the code to do $BAD_THING in the binary executable, but it was the user's choice to run it, and he could have easily modified the binary to ignore $BAD_THING, but didn't. Therefore, it was the user doing $BAD_THING, not the company."
A lot of people in this discussion are splitting hairs here, trying to blame the user or the browser. The technical details of what a browser does are less important than the end effect: Fundamentally, the web developer added "stuff" to the HTML, knowing that this "stuff" will cause most browsers in the field to access these fonts on another computer. The fact that the end user could technically block it, doesn't change the developer's intent.
Correct. The question for law is: what does a reasonable person expect?
This is a problem because people (in general) are not good at understanding or reasoning about what computers do... and the entire purpose of the web is to put a simplifying, abstract model in between what humans want to do and how computers work. Models are always wrong, sometimes useful. The web is very useful because it is wrong.
If the web were more like Gemini -- I'm not advocating for this -- every link would be an explicit choice, and the argument that a reasonable person would be aware that different things came from different entities would be solid. If JavaScript existed but a web page could not request any resource from a non-origin domain, the argument would be solid.
It's not reasonable for a random user to have to internalize a model that says that sometimes the font used by a page is local, sometimes it is supplied by the website, and sometimes it is a call to a third party. It's true, but it's not reasonable.
From my experience, the average user treats the browser and the internet as a black box anyway; they don't reason about what is happening. As long as they can get to what they want, they don't really care what happens in between. Cookie notices get in the way, and therefore annoy them. Most also just accept the fact that their data gets leaked everywhere and there's not much they can do about it. I genuinely don't believe that the average user can make sense of the TOS that they agree to when signing up for something...
This is definitely not a good thing, and should change, but I also believe that ad-driven companies will continue to find a way, we just continue to rack up operating complexity, which in turn, favours large companies. This ruling seems like a pretty weird and unhelpful way (in the grand scheme of things) of helping protect user privacy, but then again, that was not the goal of the lawsuit.
Yeah, I guess this stands. But HTML is not executable. It has to be parsed, like words in a book, not chemicals in a tube. Who is liable, the person who creates the poison, or the book (encyclopedia) which describes the process (and therefore the person who wrote it/distributes it)?
Again, I'm not saying what is right and wrong, but I think this issue is fundamentally much, much more complex than the court may have thought, and more importantly, may have repercussions on almost every website out there.
> Yeah, I guess this stands. But HTML is not executable. It has to be parsed, like words in a book, not chemicals in a tube. Who is liable, the person who creates the poison, or the book (encyclopedia) which describes the process (and therefore the person who wrote it/distributes it)?
I know as soon as you typed this, you probably thought "oh crap, what about Python?" so I won't go there.
I think the major underlying thing here that makes developers uncomfortable with this court ruling is that the whole industry of software development has a chronic and pervasive problem with the idea of consent. I'm not saying individual software engineers don't know what consent means, but we constantly put out software that does things without giving the user informed consent and control, and resist all efforts to force us to ask for this consent.
Imagine trying to use the Software Industry's idea of consent when dating: "Hey, Alice, do you want to go out on a date with me? I'll only accept the answers [Yes] or [Ask me again later]". Ridiculous! But software regularly does this! "Hey, Bob, I love you and I'm going to keep sending you text messages. Do you want [all my text messages] or [only essential text messages]?" Ridiculous, but look at the "consent" options when it comes to cookies.
Not to be crass, but when software wants to get users to do something, they need to treat it as if the software is trying to get laid: You need to ask for, and receive, informed consent at every step of the way, at every new and different request. This is an uncomfortable idea to developers who are used to just commanding the code to do things.
I didn't, actually, but it's a fun thought experiment :)
Python code is instructions that are executed in a very precise manner; they could, in theory, encrypt a hard drive. You execute the program the same way you execute binary instructions.
HTML is a description of a problem; there are different interpretations and different solutions, depending on screen size, etc. You don't always get the same result. Technically, JS could mine crypto, but I don't believe that's illegal (correct me if I'm wrong?), just very inconvenient, and there wasn't any JS involved here. You could make a browser that leaks data due to misinterpretation of the HTML. The problem also lies with the eager evaluation of HTML; it's difficult to put in a disclosure or ask for consent, such as the responsibility clause of most software licenses, without greatly annoying the user...
> makes developers uncomfortable with this court ruling
Guilty. At the end of the day, we need to get stuff done, without creating a private internet infrastructure for our customer. The small guy is at a disadvantage here, not big companies.
Not really. You are making this much harder than it has to be by bringing in irrelevant technical arguments. The court mostly cares about intent and effect, not technical minutiae.
Also by your logic machine code isn't executable either since it too has to be parsed (by the CPU which transforms it into microcode instructions).
In your paradigm: shell, Python, Ruby and Java programs are not executable either, since they require interpreters before they become machine code. Java calls this bytecode and wants to run on a whole JVM, which is rather like a browser.
To follow down the difference here: I think the browser is much more aware of what it is doing than compiled code. It knows whether a resource is a stylesheet or a font; it knows it is coming from a different origin, and more. Technically the browser is very capable of blocking all third-party resources (if you assume that party == domain).
With compiled code stopping it from executing one particular instruction or feature is much harder.
Basically I think the declarative nature is a significant difference.
Of course, I don't think the law sees it that way. Otherwise we would have implemented technical measures for limiting cookies, such as permissions (with legal backing), rather than these legal-only ones, which are abused more often than implemented as they are supposed to be.
I think you confuse user, the Human, and user, the Programming Idiom.
User, the Human, is not going to be asked whether or not the browser should open every one of the possibly hundreds of references in a web page!
Now, user, the Programming Idiom, might be configured, programmed, etc. to behave differently, but the reality is that's not how the modern web works. If the browser is not configured to behave the way it comes out of the box from Google, Microsoft or Apple, most pages will not work correctly.
So I think your configuration/technical argument is purely theoretical. The court cannot expect the average user to understand or do any of that.
> User, the Human, is not going to be asked whether or not the browser should open every one of the possibly hundreds of references in a web page
Exactly, because we strive for a balance of convenience and complexity. Most people wouldn't mind downloading a font from Google; they already use Google directly anyway. This isn't how law works, though. As an engineer, I'm just trying to find some sanity in what seems to me like an insane court decision with possibly very big repercussions for the rest of the internet.
But User, the Human has the means to command its User Agent, the Browser, to conveniently skip loading whatever they would not like to see, like fonts, executable scripts or ads.
Your post basically amounts to blaming victims of malware and spyware.
"Sure your honor, the victim died by carbon monoxide asphyxiation, but it was his choice to inhale the gas, even though it smells the same as normal air"
I'm not trying to put any blame here, we can twist metaphors to support either side of the argument.
I definitely think that websites have a huge responsibility in keeping the user safe, but this feels to me like an over-extension of GDPR that will make websites much more difficult for the layman to develop without a team of lawyers. It's a font; there was no malicious intent.
But I am trying to put blame: people who wrote the malware/spyware code are to blame. Similarly to people who write website code that leaks user personal information. The choice to embed third-party code was made by them.
It is nowhere near reasonable to ask a common user to protect themselves from such things: they might not have the technical expertise. They might not be using their own computer (library, etc). The browser doesn't provide enough tools for it and requires third-party solutions. Third-party solutions can either cost money (Little Snitch), additional hardware (Pi-Hole), are not available in all browsers (uBlock Origin due to its interface) or require technical knowledge (other ad-blockers that use lists).
This is called plausible deniability - the site owner knows what he is doing, but does not care. He could have downloaded the font or used an analytics service which does not collect IP addresses. Great ruling.
> The ruling says the website owner illegally shared the user’s IP address with Google. AFAIK, this is an incorrect interpretation of events.
I wouldn't say so. By making use of the Google Fonts service, the website owner set up a scenario where the browser would then share the user's IP with Google. That's the default behavior of most browser setups. It's as good as sharing with Google directly, no? I feel like the scenario is similar to setting up a trap. Technically the victim activates the mechanism, but surely the one who sets the trap carries the blame?
> Technically the victim activates the mechanism, but surely the one who sets the trap carries the blame?
Well said.
Law is not a programming language, the fact that the website didn't _technically_ share the IP, but did it through the browser, is not relevant.
I agree, but the definition of the law can also be interpreted many different ways, until it's clarified, I guess. This seems to me like a very grey area.
There was no trap, in my opinion; the document clearly specifies that an additional resource, here a font, will help the website look as intended by the designer. It's visible, its effects are well known (it's part of a well-understood specification), and it can be blocked. Websites have a responsibility, absolutely, but this just feels like going too far...
The going-too-far feeling might come from the Overton window being pushed in a direction. Regardless of what's right, healthy, good or bad, usual things feel normal, and unusual ones feel like going too far. What matters for this feeling is what someone has gotten used to, which is not an objective quality of the thing but an attribute of the viewer.
Not a lawyer, but to my knowledge, GDPR does not care if something technically "can be blocked" with some effort. It cares if there was clear, voluntary consent to share a particular bit of data - which wasn't the case here.
Then GDPR should blame the browser vendors for shipping with JS execution enabled by default and demand that JS execution be turned off by default in all browsers. To repaint the stories spun by the grandparent comments: If I hold up a dagger and announce the fact, why would you run into the dagger anyway without protection? Put on some armor, dude. The client browser had all the information it needed to not make the request (geolocation, external resource, purpose of external resource) and yet it did. I know this is just shifting blame, but it's also a good argument for returning HTTP 451 to EU clients and being done with it.
> If I hold up a dagger and announce the fact, why would you run into the dagger anyway without protection? Put on some armor, dude.
Let's say I'm dumb, and I run into the dagger that you're holding. The case is then investigated by the law enforcement. Who do you think they'll blame? Would I be deemed guilty, and would my crime be not having armor on?
I'm not either, and neither are most developers. My takeaway from this is that GDPR doesn't care; leaked data is leaked data. I'm just worried about the implications this will have for the non-malicious patterns the internet has evolved to use over time. Perhaps this is for the better, but I fail to see that future at the moment.
This is, for better or for worse, how the internet works. There may be better alternatives, but we're stuck with this for now. The truth is that an extraordinary number of websites use third-party resources: jQuery from CDNs, fonts from Google, etc. This ruling will never stand in higher courts imo, because it would break the internet through fear.
I'm curious to know whether DNS and your IP being in the header of packets travelling through various different countries that can be sniffed is also considered unwilful data sharing?
This ruling will 100% be upheld in the higher courts.
The website is arguing that it has a legitimate interest in downloading fonts from Google in the client browser, but as the court correctly states, the website can provide these fonts directly. There is no reason to infringe on the user's privacy, so there is no legitimate interest. And therefore the use of Google Fonts was without a legal basis.
BTW - The website could have used a different legal basis out of 6 available, like consent. See: https://gdpr-info.eu/art-6-gdpr/
> I'm curious to know whether DNS and your IP being in the header of packets travelling through various different countries that can be sniffed is also considered unwilful data sharing?
Unless there is another way to achieve the same purpose there is a legitimate interest in processing that data for the purposes expected by the client i.e. providing internet service.
> The website is arguing that they have a legitimate interest in downloading fonts from Google in client browser, but as the court correctly states the website can provide these fonts directly. There is no reason to infringe on the user privacy, so there is no legitimate interest. And therefore use of Google fonts was without a legal basis.
Would the same argument apply to using Stripe or PayPal to accept credit card payments? The site could deal directly with a lower-level payment processor, which would reduce the number of third-party entities that see the user's credit card.
I wouldn't say so, especially because in the shops I encountered, they explicitly state that "Payment will be handled by XY provider. You'll be redirected etc etc". That's not exactly using a resource from a third party in the background.
This clearly falls into Art. 6 GDPR paragraph 1, point b) :
Processing shall be lawful only if and to the extent that at least one of the following applies:
b) processing is necessary for the performance of a contract to which the data subject is party or in order to take steps at the request of the data subject prior to entering into a contract;
This legal basis is a lot more clear and a lot less stringent than point f) legitimate interest as it does not explicitly require you to establish "legitimacy" and balance it against vague "interests or fundamental rights and freedoms of the data subject".
f) processing is necessary for the purposes of the legitimate interests pursued by the controller or by a third party, except where such interests are overridden by the interests or fundamental rights and freedoms of the data subject which require protection of personal data, in particular where the data subject is a child.
I'm not a lawyer (web dev in my spare time), but how far does "provide more directly" go? A private ISP? There's no limit, only what the courts seem to consider "reasonable". Then again, that is how law is interpreted most of the time, no?
- the data subject has given consent to the processing of his or her personal data for one or more specific purposes;
- processing is necessary for the performance of a contract to which the data subject is party or in order to take steps at the request of the data subject prior to entering into a contract;
- processing is necessary for compliance with a legal obligation to which the controller is subject;
- processing is necessary for the purposes of the legitimate interests pursued by the controller or by a third party, except where such interests are overridden by the interests or fundamental rights and freedoms of the data subject which require protection of personal data, in particular where the data subject is a child.
You can do basically anything with things like IP addresses as long as you have valid consent from the client, i.e. they need to actually know, or at least be able to learn, what you are doing with their data and decide that it is ok. So, no guessing here, just be transparent, and assume no consent by default.
In the case of an ISP, they have to process your personal data because it is necessary for the performance of the contract of providing the internet service. Again, no guessing here.
The legitimate interest clause is a "catch-all" clause for anything the legislator did not think about, so it is very vague by design. You do not want to choose this as a legal basis for data processing if you want to avoid legal uncertainty. But if you do choose it, you should have strong arguments that you really need this legal basis.
If similar companies to yours are able to do exactly the same thing in a way that is less impactful on privacy then you can expect that courts will not grant you a legitimate interest.
You can also apply legal tests to determine whether you have a legitimate interest:
- The purpose test (identify the legitimate interest);
- The necessity test (consider if the processing is necessary); and
- The balancing test (consider the individual’s interests).
Also, based on my observation, if you are not doing anything really egregious and you are willing to cooperate with data protection agencies (DPAs), you do not have to worry about anything. If a DPA decides you are doing something wrong, they will tell you about it. And if you just adjust, e.g. start hosting fonts on your own servers, they will let it slide or give you a small slap on the wrist. The really high fines are reserved for malicious conduct or gross incompetence with actual harm already done to people.
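A minimal sketch of what hosting fonts on your own servers looks like in practice, assuming you have already downloaded the .woff2 file yourself (the family name and file path here are placeholders, not anything prescribed by the ruling):

```css
/* Self-hosted web font: the browser only ever contacts your origin,
   so no visitor IP address is disclosed to Google. */
@font-face {
  font-family: "Roboto";
  font-style: normal;
  font-weight: 400;
  /* Placeholder path -- serve the downloaded .woff2 from your own host. */
  src: url("/fonts/roboto-regular.woff2") format("woff2");
  font-display: swap;
}

body {
  font-family: "Roboto", sans-serif;
}
```

Even copying the font files over by hand is enough; a small @font-face block like this is all the embed requires.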
> This ruling will never stand in higher courts imo, because it would break the internet through fear.
It wouldn't break the internet. The internet was fine when the vast majority of sites hosted all their own content and didn't ask your browser to load crap from dozens of domains. It wasn't even that long ago. Honestly I think it was better.
But it has since evolved greatly in complexity, and just because things once worked a certain way doesn't mean it's easy to go back. Hey, I'm all for more privacy; I'd like to go back to how it was before while keeping the good parts from today. But this would make things harder for the small guy without some advancements in IT, private CDNs, and easier font management. IT is already a nightmare just to keep from breaking.
That we’ve been doing a certain thing in the past is no excuse to allow it to continue going forward. It is a good thing that we are challenging practices we have taken for granted and validating whether we want such practices to continue.
"Can be sniffed", and "Provider is making a third party sniff" are two different things. Legally and ethically too.
Right now you're right, the internet works this way. But that doesn't make it right, or fair, or anything, it just is. And it's also no reason it couldn't work in a different way.
> I'm curious to know whether DNS and your IP being in the header of packets travelling through various different countries that can be sniffed is also considered as unwilful data sharing?
The IP has to be there for the return TCP packet, so under GDPR this falls under "strictly necessary" information.
If someone sniffs you, they now have your PII. They can't do anything with it that is not "strictly necessary" without your consent; otherwise they're also in violation of GDPR.
The only people trying to "break the internet through fear" are the doomsayers.
Is it strictly necessary to have that many intermediate parties to handle TCP packets with the user's IP?
You can instead peer with the user's ISP, or install a machine in the user's network (something like an Amazon Echo or Google Home could work too) which establishes an encrypted tunnel to your main servers. Sure, it would be more expensive to do this, but so would hosting your own copy of a font instead of using a CDN like Google Fonts. "Strictly necessary" doesn't mean whatever is necessary for you to host the site cheaply.
It is considered strictly necessary under GDPR, yes, because TCP/IP (and UDP) is how the internet works.
Something being "strictly necessary" under GDPR also doesn't mean that each intermediate entity can do whatever they want with the IP address.
> which establishes an encrypted tunnel to your main servers
Grandparent was talking about "packets travelling through various different countries". This is just TCP/IP. Using a tunnel won't change this, intermediate routers will still see your IP. Your idea is no different from HTTPS.
If you don't want intermediate routers seeing your IP you have to lay 100% of the infrastructure between the customer's house and your website. Again, this is not how the internet works. And GDPR already covers potential privacy issues that might arise in this case.
> The difference is that now your IP is what all the intermediate servers see instead of user's private data (your user's IP address).
Nope. Your IP is also visible to each router in between when using such a tunnel, if the machine is in the user's network (your Amazon Echo or Google Home). You need alternative infrastructure to bypass the internet.
Installing a machine directly in the ISP building is no different from Carrier-grade NAT that is already widespread. It also leaks some data about you that can be deanonymised. It is also extremely expensive.
Sorry, I don't mean to play the devil's advocate, this has already gone way off-topic so take what I say with a pinch of salt.
But technically, the IP is not strictly necessary? I can imagine a feasible future where it could be replaced with an anonymised IP from a larger pool generated by your ISP, with TLS for the payload. This could be solved at the internet infrastructure layer, rather than having to be solved by website developers.
> I can imagine a feasible future where it could be replaced with an anonymised IP from a larger pool generated by your ISP, with TLS for the payload.
This is already a thing with NAT and Carrier-Grade NAT.
However if the IP + port + time trio, coupled with other information (such as browser, stack, timezone, behavior) can be used to de-anonymise the user, this also instantly becomes PII.
> This could be solved at the internet infrastructure layer, and not required by to be solved by website developers.
It could, but until we get there, website developers will have to deal with it.
Identifiability for IP addresses uses an even lower standard. The GDPR says that for something to be truly anonymous, there must not be any “reasonably likely” means for identification, even with the help of third parties, even when relying on additional information. There has of course been litigation about this, in the form of the Breyer v Bundesrepublik Deutschland case. It was based on the GDPR's predecessor law, but it used virtually identical phrasing so the conclusion still holds.
The European Court of Justice constructed a hypothetical scenario to show that identification can reasonably be likely. Let's say the website was attacked by a hacker. In a logfile, you find the attacker's IP address and want to prosecute them. So you report the incident to whatever authority is responsible for such incidents, which then gets a court order so that the attacker's ISP discloses information about the IP address. As long as the ISP knows to whom that IP was allocated at the time, there is now a reasonably likely chain of events that leads to identification of the person behind the IP address.
In this case about Google Fonts, the court says that it's sufficient if the website operator or Google have the “abstract means” for identification, not whether they actually did this for this plaintiff's specific IP address.
A solution would be if the EU forbids ISPs from keeping such logs, but given repeated attempts at mass data retention laws for national security purposes and pressure from the IP industry^W^W film and music industry for copyright infringement prosecution purposes, that doesn't seem likely.
To handle resources, like a jQuery library, I'd love to see URNs being used. A Uniform Resource Name is supposed to uniquely identify a resource solely by its name, and say nothing about where to find it - that is the job of its sibling, the URL. A website could state that it needs "urn:uuid:6e8bc430-9c3a-11d9-9669-0800200c9a66", and then the browser could decide where to look that up. In my local cache? The cache distributed with the browser? The ISP's repository of resources? The original first party? My VPN provider's fancy anonymized lookup service? Whatever the case, it feels like a robust way to handle shared resources - and, of course, to introduce a myriad new ways to break UX, but hey, it's progress!
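The closest mechanism the web platform has today is Subresource Integrity, which pins content by hash but still requires naming a location; a URN-style reference would instead leave resolution entirely to the browser. A hedged sketch (the integrity digest below is a placeholder, not a real hash, and browsers do not actually resolve urn: sources today):

```html
<!-- Today: a location plus a content hash (Subresource Integrity).
     The digest is a placeholder for illustration only. -->
<script src="https://cdn.example/jquery.min.js"
        integrity="sha384-PLACEHOLDER_DIGEST"
        crossorigin="anonymous"></script>

<!-- Hypothetical URN form: name the content, let the browser decide
     where to fetch it (local cache, ISP mirror, the first party...). -->
<script src="urn:uuid:6e8bc430-9c3a-11d9-9669-0800200c9a66"></script>
```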
By that logic, any and all tracking pixels, javascript, iframes, etc would be regulatory no man's land, because all of those are technically just "intents" the server signals.
Nevertheless, users are seldom in a position to decide whether or not those intents are followed (and site owners can get quite mad if a user instructs their browser to "decline" such an intent, e.g. through an ad blocker). All of that makes it reasonable to treat those intents as commands.
Users can set policy by disabling JavaScript execution by default. If GDPR sees an issue with the current defaults, it should mandate that defaults align with user expectations, e.g. by disabling such execution by default. For web content, after all, doing everything on the server side is not impossible.
Where does it say they see an issue? They just argue that web assets are the responsibility of the site developer, not the user - which is the exact opinion site developers have as well in pretty much every other context.
It's also the expectation of users: Most users aren't experts and don't know what javascript, web fonts or GET requests even are.
You can't have your cake and eat it too.
> Users are enabled to set policy by disabling JavaScript execution by default.
They aren't. All mainstream browsers have removed the option to disable JavaScript. You have to install 3rd-party plugins to get the option back. Those plugins frequently break sites.
>The website merely tells the user’s browser that the content is intended to be displayed using a font that, if not installed on the user’s computer, can be downloaded from Google’s server.
"Your honor, I merely told the gun to strike the firing pin. Without a round chambered in, the gun wouldn't have done anything."
> and this can then be used for advertising purposes.
Can it? Is this within the range of what Google is allowed to do in the EU right now?
Because, if that is the case and we also wanted to stop that, wouldn't it be a lot more reasonable to just... forbid Google from doing that, instead of slapping every confused wordpress hack in the EU with a fine?
Google cannot escape the US government agencies (CLOUD act) etc.
It doesn't matter what they promise.
They could sell their software stack to an independent European partner over whom they don't have any control and who doesn't transmit data back to the US.
If China can mandate Google to do something like that and have Google submit to it, effectively escaping US jurisdiction for that part of the world, why wouldn't it work in the EU, applied to a completely different set of goals?
US jurisdiction doesn't even protect its own citizens against their government requesting data from google about them, why would it protect those of Hong Kong from their ... oh wait i get it
> Can it? Is this within the range what Google is allowed to do in the EU right now?
Google's current GDPR consent screen is not compliant. It provides an easy "accept" option but no easy decline option, which is against the regulation.
Given they are already breaking the law and successfully getting away with it (otherwise they'd stop), why would they not break it here?
In fact, it doesn't even have to be malicious; the data can accidentally be fed into a dataset that's used for ad targeting - maybe it was set up that way a decade ago, nobody knows about it, and it isn't entirely obvious, considering the entire targeting machine is a black box with thousands of parameters, so it's impossible to definitively prove what data was used to target a particular ad.
It really depends. If the website links to a URL that is the same for everyone on every website which uses that font, then without a referrer header (which is up to the browser to send) there is not much tracking info.
But if the website uses a URL that is unique for that site, or even for each user, that is absolutely something I'd hold the website owner responsible for.
Of course. Good point. If an individualized URL would be used, it would be another story.
Though I don’t think that Google Fonts URLs contain individualized parameters by default that disclose either the user’s IP address or the site visited. The ruling also does not mention that this is what happened here. All the site owner did, from what I can see, is embed a Google Font.
Had the site owner put an automatic JavaScript redirect to Google on his page, he’d be just as liable, according to the logic of this ruling.
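For concreteness, the kind of "automatic JavaScript redirect" meant here would be something like the following sketch; it fires on page load with no user interaction, so the browser contacts the third-party host unprompted (destination URL is illustrative):

```html
<script>
  // Runs as soon as the page loads; the visitor's browser immediately
  // connects to the third-party server, disclosing their IP address.
  window.location.replace("https://www.google.com/");
</script>
```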
This isn't "just" initiating a request to a random third party server.
Chrome sends a unique ID when accessing (only!) Google servers, in the form of the X-Client-Data HTTP header, uniquely identifying the user and the site they are browsing (via the referrer). It's a goldmine.
No, the defendant made the website in such a way that a request to Google is among the requests required when you visit it.
It does not ask the person browsing for permission to make the request to Google. Probably there was also no mention anywhere that the website's code would connect to a 3rd-party server to pull fonts.
Your point was that Chrome sends additional tracking data specifically to Google. That is all Google’s doing.
But also let’s break down the many many things wrong with this case:
1. Your IP address is shared with many different infrastructure providers, none of whom you know about: the CDN that is serving content, the server hosting providers, the TLS certificate revocation log where your browser checks if the given domain’s certificate has been revoked, the DNS server you used. It is a given that your user agent will tell half the Internet that your IP is trying to access a given website. Unless everyone hosts their own infrastructure entirely (no CDN, no external APIs, no external DNS servers, no leased servers, let alone AWS or similar), we will never not leak IP addresses. Your only solution to this is to use something like Tor.
2. Asking each and every website to create an increasingly complicated consent form for every service they use is going to create a huge anti-pattern. The cookie consent forms already suck harder than a Dyson. Why would anyone want more of that?
The correct solution is for Google to be punished for doing evil shit and to also build all the privacy controls into the user agent. This would still lead to some consent forms but at least the UI would be uniform and easy to understand. The current situation sucks bad and this case will make it worse.
That was not my point but another user's. Still, I can continue the argument.
The people who made the website could have made it without linking to Google Fonts.
All the other things you list, like hosting providers, are technically necessary to deliver the website's content, or, like the request to the TLS revocation log, are implemented by the browser and not by the defendant.
Yes, complicated consent forms should get even more fucking annoying, so that companies lose traffic if they don't think twice about using some 3rd-party service.
If I have to click through 5 consent forms, I leave and the company is not getting my business. Another company that cares about people's data will have no need for such a consent form and will get more business.
Keep in mind that if you have your own cookies needed for your page to work, like session cookies that are technically required to log in or use the page, no consent form is needed at all. People should just stop visiting pages that require consent.
Who does make my browser request the resource? Clearly the website owner who can also decide to make the browser load a resource fully under their own control. The user could block it, certainly. But I don't want my browser to ask me for every single URL if it's okay to request a resource.
So does this mean assets hosted by CDNs are illegal now? Since it doesn't ask the user's permission to direct the browser to another site to download said assets? And what if they're already cached in the browser. Distinction?
Seems like laws don't understand how tech works...
You’ll soon have to fill out a form as if undergoing surgery just to visit a website. (And, of course, we’re all gonna click “Accept all”, just as we do with cookie warnings and Apple TOS.)
These judges do not know what they are doing. They only care about getting the case off their desk, clinging to the first semi-plausible argument that allows them to do so. If you look for vision, guidance, or responsibility for shaping the future in which we want to live, look elsewhere.
>Seems like laws don't understand how tech works...
That's pretty much the definition of a law. Laws are written for people, by people. Not for computers.
This is sadly an unintended consequence of a very broad interpretation of GDPR, where an IP address is deemed as personally identifiable information.
If someone wanted to take this to absurd levels, a similar argument to the one made by this court could be used to make the entire Internet illegal under GDPR: sending a packet exposes personal information of the sender to various third parties (routers of various ISPs) that the sender didn't consent to. And after all, the recipient (service provider, website, whatever) could have arranged for the data to be transferred on a floppy disk or by pigeon instead...
It is not only GDPR that suffers from this - e.g. in Austria it is illegal to drive with a dashcam because their court has ruled that a dashcam amounts to recording someone without their consent - and if you do so, it exposes you to a 20k€ fine!
Absolutely not surprising. I am originally from Germany and the whole of Bavaria is excruciatingly underdeveloped when it comes to IT.
Outside Munich, it gets even worse as you venture deep into beer county where people who can reformat Windows are admired as the next Linus Torvalds and competition consists of people with varying degrees of beginner-level knowledge competing against each other.
Merkel once said that the internet is a new territory for all of us... I am surprised the court even understood half of this.
Bavaria only? If it had been up to Germany’s bureaucrats, the whole internet would not exist.
(Also, no cars. For calling a Ford Model T an “automobile” if it still needs a human driver is misleading to consumers, according to, again, the Munich district court.)
Where is my personal responsibility in making sure that the food I buy isn't poisoned and that the producer doesn't use slave labor? Where is my personal responsibility in making sure that my friend will return me a loan I made to him? Of course it is sensible to make basic precautions, but it is also sensible to expect social institutions like the legal system to help.
Our society is immensely complex; it is quite naive to assume that everyone has enough time and power to watch out for things like that and fix them. It is also quite dystopian to think that people should by default treat every stranger and company as adversary.
> It is the user’s browser that initiates a request to Google’s server. A request by the website itself to Google sharing the user’s IP address never actually occurs.
Manipulating a system so that it gives up information that wasn't intended to be given away is called hacking.
When I specify that the font used on my page can be accessed at a particular URL, I’m neither asking you to download it (the font might already be installed on your system), nor am I requesting that, if you do download it, you pass along a “referer” header. I am not only not making your browser do this - I’m not even asking it to do so.
> When I specify that the font used on my page can be accessed at a particular URL
But this is not what you are specifying with Google Fonts (usually). You are saying: "To properly see this page, download this instruction file (CSS) from this other host." It doesn't matter if I already have the font installed; I still request the CSS file from Google.
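To make the two-step fetch concrete, here is what a typical Google Fonts embed looks like, with a comment sketching what the first request returns (the family and URL parameters are illustrative; the exact font-file URL varies per browser):

```html
<!-- Step 1: the page instructs the browser to fetch a stylesheet from
     fonts.googleapis.com. This request happens regardless of whether
     the font is already installed locally, disclosing the visitor's IP. -->
<link rel="stylesheet"
      href="https://fonts.googleapis.com/css2?family=Roboto&display=swap">

<!-- Step 2: the returned CSS contains @font-face rules whose src URLs
     point at fonts.gstatic.com, triggering a second request to a
     Google host for the actual font file. -->
```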
I doubt that the WWW was designed with the intention to allow that. A bunch of different people with different goals do the stuff they want, and the result is something that just happens without anyone's intention. And we have legal systems and regulations to clear up such situations.
It was explicitly designed to do that. HTML is explicitly designed to allow content from multiple places. Your browser is designed to read that HTML and fetch those resources to render the page. The internet is designed to cache that content. Your browser cache is designed to cache that content so future fetch requests are faster.
There are so many pieces explicitly designed to do this, for exactly this reason, that there is no question as to this being the intended behavior.
Font servers are decades old. CSS added support for it in 1998, replacing earlier less common methods. This is not new or unintended behavior.
Technology changes behaviour, and therefore society; "don't use it" is not an option in many cases. For instance, in the old days it was common to pay bills by filling out forms attached to a bill and then snail-mailing them, or walking into a bank or post office for a clerk to process them, either manually or automatically.
With internet banking, much of that old-style payment system has disappeared; many banks no longer accept that style of payment or even have an office you can visit, and where it is still possible, there is a hefty fee attached. Not using a browser is not really an option in current society.
Would that logic also work for loading images from other sites and services? I never sent the info to the server myself, but told the browser to load the image from that server. Is your argument that it's client-side code, so it doesn't count?
The technical implementation details don't matter.
What matters is that the IP will be shared with Google as a consequence of visiting the site, without the user having taken any additional action that made it happen.
This will also happen if I place a link on my site that does not clearly warn the user that it leads to a non-EU website. The user clicks it, and his IP address gets disclosed.
Really, if you participate in the World Wide Web, of course your computer’s address will be visible to others, and you can not always control it. Like driving on the Autobahn. People will be able to see you. It’s part of life.
GDPR differentiates between functionally necessary and non-functionally-necessary parts.
And again, technical nit-picking does not matter, but user intent does.
Similarly, the site you link to would also need to be GDPR-compliant (or not provide service in EU countries).
The problem with Google Fonts is that the site which loads them agrees in your stead, without your permission, to Google collecting your data and using it for non-essential purposes.
Meanwhile, when you navigate to a site, it must not collect anything besides purely functional data until you agree to it. (And yes, collecting an IP address can be purely functional, depending on what you do with it and whether you delete it in time, e.g. for security logs and DDoS protection, if certain conditions are met.)
Be aware that what counts as "necessary"/"purely functional" is not always fully clear.
What about embedding a Google Map? I see those even on websites of German courts.
Are those allowed because the embed is needed functionally?
Or are they not, because you could instead use a cached image of the map and switch to a “live” connection to Google’s map server only when the user actually tries to move or zoom the map (and after displaying a consent pop up)?
So if I put up a really giant mirror and burn your house down at 3 PM, is it the Sun's fault?
The ruling is actually quite logical. The (convoluted) outcome is that the IP is leaked and it should take any tech person about 5 minutes to realise this.
If I place a link that says “Search”, and instead of starting a search on my site, I send you to Google.com, your browser will connect to Google’s server and thus tell them your IP address. Such a hyperlink, which does not clearly warn users that clicking it will cause them to connect to someone else’s server, would also have to be illegal according to this ruling.
Which is the same thing with cookies: you just set some string in an HTTP header, and whether the browser actually honors that is up to the user (in what browser they install and how they configure it).
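In HTTP terms (cookie name, value, and attributes here are illustrative), the server only ever emits a header; storing it and sending it back on later requests is entirely up to the browser:

```
HTTP/1.1 200 OK
Content-Type: text/html
Set-Cookie: session=abc123; Secure; HttpOnly; SameSite=Lax
```

A browser configured to reject cookies simply never sends a Cookie: header back, and the server cannot force it to.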
Exactly. Your browser preferences are your cookie settings.
The whole business of littering the web with cookie consent forms is as far from a sensible technical solution to the problem as can be imagined. The people who invented the web and who designed browsers had at least the aspiration to build a system that works as a whole. Courts and lawmakers, on the other hand, have no such vision.
The ruling explicitly says the rights holder does not need to take precautions, because such an obligation would restrict the rights holder in the exercise of their rights worthy of protection.
If you ask me, this argument is what we in Sweden call "satan reading the bible". I think you're fully aware that the practical result, i.e. what happens in reality, is, as the court puts it:
> The transfer of the user's IP address in the above-mentioned manner and the associated encroachment on general personal rights
Furthermore, the court is correct in stating that:
> The use of font services such as Google Fonts cannot be based on Article 6 Paragraph 1 S.1 lit. f GDPR, since the use of fonts is also possible without the visitor having to connect to Google servers.
Let's not be naive, we all know the purpose of Google offering these fonts for free via their CDN. As the author of a website, I think it's completely sensible that you should be responsible for the decision of embedding these fonts from Google rather than just serving them yourself: you are, in fact, "leaking" the IP addresses of your visitors to Google without their consent.
This is in my mind just another one of those things that have been considered completely normal for a long time, but really shouldn't. A bit like how literally everyone used Google Analytics 15 years ago without really thinking about what that meant for the ethical processing of personal data.
You’re making a good argument here. And you might just be right. You’re saying that website owners should be legal-politically responsible for typical privacy risks incurred by users if they are using popular browsers. And perhaps that is the right way to go.
However, what strikes me is that the court hasn’t seen this problem at all. Your train of thought - that the request was issued from the user’s browser, but that the site owner was essentially in control because he tricked the user into sharing his information with Google without being asked - is just not discussed at all.
It might be very well that the end result is just. But then it would be a case of the blind chicken finding a grain.
GDPR specifies otherwise than your interpretation, logical as yours may be from a technical standpoint.
The site operator chose an optional way to embed fonts in a way that divulges PII to a non-GDPR destination. As there is no legal or technical requirement to embed Google Fonts, the site operator is therefore liable.
If use of Google Fonts was mandatory for the web to function, then the site operator would not have been found liable. It is not: they can be mirrored locally, or simply not used, and the web as viewed from the user’s perspective will continue to function just fine. (IANYL, etc.)
The website owner decides what is asked to be done. The browser is still owned by the end-user, who can choose whether to make the request. This is why ad-blocking is fundamentally required.
I can put “rm -r /user” in my HTML all I want. It’s the user’s browser that decides what gets executed. This is a fundamental principle of the architecture of the internet. You cannot make another computer do anything. You can only send messages, and the receiver decides how to act on them.
Except you cannot. If you are aware of a vulnerability that triggers the destruction of a user's data when their browser merely visits a website, abusing it would be illegal.
No, because hacking generally circumvents the intended system design. The architecture of the World Wide Web explicitly allows for user discretion in fetching and executing linked content. There's no slippery slope between using a system within design parameters and executing malicious instructions on other people's systems without their permission and/or knowledge.
Good point. There is a distinction to be made. For fraud, we have those distinctions. If I send you an email that looks like an invoice for a service you already ordered, but is really, on closer inspection, an order form, I am still responsible if I deliberately designed the email so that the average recipient would be fooled. This also means that if I send that email to grandmas, I’ll be held more accountable than if I send it to lawyers. These are all important discussions. The ruling we see here just doesn’t enter such discussions, because the court hasn’t even recognized the problem.
But I didn't identify myself as a policeman. Just a request from some random person on the internet. The person I asked to do it didn't comply, because he knows better what instructions to obey. So should his browser.
Let's say you know I'm the kind of person who easily gives people money. I just got my paycheck, and knowing this, you ask me to give you the money. I'll give you the money, because that's the kind of person I am. But then I'm left with no money for the month, and all of its consequences.
Who do you think would carry the blame in this situation?
A regular user has no idea how these mechanisms work and can't be reasonably expected to do the configuration themselves. Since it's a clearly privacy-hostile behavior to include this code in the first place, the operator of the website should be prosecuted.
The court did not even see this issue, i.e. the distinction between issuing a suggestion or directive vs. actually executing the request directly. The court, in fact, states that it was the website itself that did the sharing. The ruling suffers from unsound reasoning.
It could be, however, that the website owner never brought up these arguments. The court does not have to do its own investigation; this is called the “maxim of disposition” in German law. Whatever both parties agree is true has to be treated as true by the court.
So if the claim that the website itself issued the request to Google servers was uncontested, then this ruling is sound, based on the claims brought forth by the parties in this particular case.
If you are that caring about your privacy, you absolutely should use a browser that is configured in such a way that it doesn't leak your IP to anyone you didn't consent to.
I'm running systems that I modify to the extent that satisfies me, but that's not the concern here. Right now I care about everyone's default privacy level, not just my own. And in this ideal world, third-party sharing is not opt-out.
Yes, definitely. Ad absurdum, browsers could be mandated to have the user opt-in to every single instruction that is executed. It's technically possible, the user has control.
I think it's a slippery slope to imagine/enforce a transfer of agency between the website user and provider, where the latter will try to make the opt-in appear as simple as possible. An ideal opt-in is more than the click of a button, it's an understanding. A button accompanied by a wall of text isn't understanding.
Code in the frontend is absolutely not "asking". I'd bet that most users have no idea what's going on in their browsers and devices, and those instruments, in turn, shouldn't prey on this ignorance. I know that this is not how the world works, but the difference is that we could have control over this one.