HL Chronicle of Data Protection Privacy & Information Security News & Trends

Automated Decision-Making Under the GDPR – A Right for Individuals or A Prohibition for Controllers?

The complexity of the EU General Data Protection Regulation (“GDPR”) is often alleviated by the guidance of regulatory authorities, who contribute their practical interpretation of the black letter of the law and provide welcome certainty. However, the latest draft guidelines issued by the Article 29 Working Party (“WP”) on automated decision-making have thrown up a particular curve ball which bears further investigation. It relates to whether Article 22(1) of the GDPR should be read as a right available to data subjects or as a straightforward prohibition imposed on controllers.

Article 22(1) states that “The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her”. In its draft guidelines, the WP categorically states that, as a rule, Article 22 establishes a prohibition on fully automated individual decision-making, including profiling, that has a legal or similarly significant effect.

This means that any processing activity which is wholly automated and leads to decisions that affect individuals in a sufficiently significant way is prohibited unless, of course, such processing can be justified on one of the three bases set out as exceptions under Article 22(2), namely: performance of a contract, authorisation under law, or explicit consent. This is not an inconsequential legal point. Interpreting Article 22(1) as a prohibition potentially has wide-ranging ramifications.

Given this interpretation, what counts as a decision that produces legal effects or similarly significantly affects individuals becomes critical. In the guidelines, the WP considers that, to qualify, the decision must have the potential to significantly influence the circumstances, behaviour or choices of individuals. Working through examples, the WP then comments that targeted advertising may have a significant effect on individuals depending on the circumstances of the case and in light of attributes such as:

  • the intrusiveness of the profiling process,
  • the expectations and wishes of the individuals concerned,
  • the way the advert is delivered, or
  • the vulnerabilities of the individuals targeted.

What this interpretation means in practice is that if the data processing behind online advertising activities strays into the realm of making decisions that significantly affect individuals, that processing is, by default, prohibited. It will then fall to those involved in online advertising activities to obtain explicit consent (note that this is an even higher bar than the GDPR’s already demanding standard for ordinary consent) in order to lawfully use the data.

While the WP sticks to this interpretation throughout its guidelines, and even states clearly in a footnote that the controller’s legitimate interests cannot render profiling lawful if the processing falls within Article 22(1), the question is whether the wording of Article 22(1) actually amounts to a prohibition or whether the WP is misinterpreting the law.

By way of comparison, where the GDPR sets out another form of prohibition – Article 9(1) regarding processing special categories of data – the drafting specifically states that processing of special categories of personal data “shall be prohibited.” No such prohibition language appears in Article 22(1). Indeed, the language of Article 22(1) is most clearly drawn from Article 15(1) of the 1995 Data Protection Directive. Looking back at the original European Commission proposal for the GDPR from January 2012, the language of the Article on profiling (then Article 20(1)) again reflects the language of Article 15 of the Directive.

However, significantly, the original Commission proposal included language that is missing from the final GDPR text. The original proposal stated: “Subject to the other provisions of this Regulation, a person may be subjected to a measure of the kind referred to [in Article 20(1)] only if the processing….” is based on performance of a contract, authorised under law, or explicit consent. The same approach featured in the report from the LIBE Committee halfway through the legislative process, but this language was not included in the final version of the Article.

So if those drafting the final version of the GDPR had intended a prohibition, subject to limited exceptions, on fully automated individual decision-making, including profiling that has a legal or similarly significant effect, it would be reasonable to expect the language of Article 22(1) either to state that such processing is prohibited or to emphasise that it can only occur in certain situations. The language of Article 22(1) does neither.

Additionally, if Article 22(1) were a prohibition on processing that a controller may carry out, one would logically expect it to sit in the Chapter on Principles (similar to the way criminal conviction data is treated in Article 10) or in the Chapter on Controller and Processor obligations. Instead, the rules on automated decision-making are included in the Chapter on the Rights of data subjects. In other words, this would appear to be a right that individuals may exercise in order not to be subject to certain decisions, rather than a default prohibition preventing controllers from carrying out such automated decision-making.

As a result, the position that the data protection authorities have taken on this provision in their draft guidelines generates considerable uncertainty. What is clear, however, is that if the interpretation set out in the WP’s draft guidance is the one that prevails, it will have significant consequences for all types of businesses, consequences that were not necessarily foreseen at the time of the adoption of the GDPR.