“Just” Algorithms: Justification (Beyond Explanation) of Automated Decisions Under the General Data Protection Regulation
Article category: Research Article
Published online: 20 Nov 2021
Pages: 16–28
DOI: https://doi.org/10.2478/law-2021-0003
© 2021 Gianclaudio Malgieri, published by Sciendo
This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
The regulation of automated decision-making (ADM) in the General Data Protection Regulation (GDPR) is a topic of lively discussion. While the first commentators focused mostly on the existence of a right to an explanation in the body of the Regulation, the subsequent discussion has focused more on the limits of such a right and on the other safeguards and accountability tools that the GDPR requires for automated decisions.
This paper argues that if we want a sustainable environment of desirable ADM systems, we should aim not only at transparent, explainable, fair, lawful, and accountable algorithms, but we should also seek “just” algorithms, that is, automated systems that combine all the above-mentioned qualities (transparency, explainability, fairness, lawfulness, and accountability). This is possible through a practical “justification” statement and process through which the data controller demonstrates, in practical terms, that the ADM system complies with the data protection principles and is therefore legally and ethically sustainable.
This justificatory approach might not only encompass many of the other, more fragmentary approaches proposed thus far in the legal and computer science literature, but might also solve several existing problems in the artificial intelligence (AI) explanation debate, for example, the difficulty of “opening” black boxes, the transparency fallacy, and the legal difficulties in enforcing a right to receive individual explanations.
Section 2 mentions the general rules about profiling and ADM in the GDPR, while Section 2.1 refers to the debate about the interpretation of those rules. Then, Section 3 addresses the definition and limits of the concept of “explanation,” and Section 4 proposes some preliminary, tentative solutions to those limits, adopting a systemic accountability approach. Developing upon these first elements, Section 5 introduces the concept of Algorithm Justification, while Section 6 contextualises this concept in the legal field and Section 7 in the GDPR field, explaining on what basis a “justificatory” approach is not only useful but also necessary under the GDPR. Further developing this challenge, Section 8 addresses how an ADM justification should be conducted, considering in particular the data protection principles in Article 5. Finally, Section 9 proposes a practical “justification test” that could serve as a first basis for data controllers who want to justify ADM data processing under the GDPR rules.
The GDPR has tried to provide a solution to the risks of automated decision-making through different tools: a right to receive or access meaningful information about the logic, significance, and envisaged effects of automated decision-making processes (Articles 13(2)(f), 14(2)(g), and 15(1)(h)) and the right not to be subject to automated decision-making (Article 22), with several safeguards and restraints for the limited cases in which automated decision-making is permitted.
Article 22(1) states as follows: “the data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her.” This prohibition does not apply if the decision: (a) “is necessary for entering into, or performance of, a contract between the data subject and a data controller”; (b) “is authorised by Union or Member State law to which the controller is subject and which also lays down suitable measures to safeguard the data subject's rights and freedoms and legitimate interests”; or (c) “is based on the data subject's explicit consent” (Art. 22(2)).
In cases (a) and (c) from the above list, “the data controller shall implement suitable measures to safeguard the data subject's rights and freedoms and legitimate interests, at least the right to obtain human intervention on the part of the controller, to express his or her point of view and to contest the decision” (Art. 22(3)).
In sum, in case of a “decision based solely on automated processing, including profiling, which produces legal effects concerning [data subjects] or similarly significantly affects [them],” individuals have two different safeguards:
(1) the right to receive or access meaningful information about the logic involved, as well as the significance and envisaged consequences of the processing (Articles 13–15); and (2) the right not to be subject to the decision unless one of the exceptions in Article 22(2) applies and, eventually, the suitable safeguards required by Article 22(3) (human intervention, the right to express one's point of view, and the right to contest the decision).
The interpretation of the GDPR rules about automated decision-making has generated a lively debate in the legal literature. Several authors have interpreted this set of provisions as establishing a new right to an explanation of algorithms; (2) other scholars have adopted a more sceptical approach, analysing the limits and constraints of the GDPR provisions (3) and concluding that the data subject's rights are more limited than expected and that there is no right to explanation. (4) In addition, other scholars have preferred a contextual interpretation of Articles 13(2)(f), 14(2)(g), 15(1)(h), and 22 of the GDPR, suggesting that the scope of those provisions is not so limited and that they actually can provide individuals with more transparency and accountability. (5) This last view was also partially confirmed by the Article 29 Working Party, which has released guidelines on profiling and automated decision-making. (6)
However, scholars have proposed different ways to address the issue of suitable safeguards, as indicated in Article 22(3). To mention a few examples, some scholars proposed a model of counterfactual explanations, that is, a duty to clarify for individuals targeted by automated decisions, among other things, “what would need to change in order to receive a desired result in the future, based on the current decision-making model.” (7) Other scholars proposed a “legibility model” for automated decisions, in which both transparency and comprehensibility should (8) be ensured, so that individuals are able autonomously to read (readability) and to understand the importance and implications (comprehensibility) of algorithmic data processing. (9) Looking at the broader picture of the GDPR, some scholars proposed a more dynamic link between existing data protection rights (access, erasure, rectification, portability, etc.) in order to react to adverse effects of automated decisions, (10) or proposed to focus on the dualistic nature of the GDPR, based both on individual rights and on a multilevel design of algorithms (co-governance). (11) A corollary of this proposal is a system of multilayered explanations based on a data protection impact assessment of algorithms. (12)
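To make the counterfactual idea more concrete, the following Python sketch illustrates one simple way such an explanation could be generated. It is only an illustration of the concept, not the method proposed by Wachter, Mittelstadt, and Russell: the decision rule, feature names, step sizes, and bounds are hypothetical assumptions introduced here for the example.

from itertools import product

# Hypothetical stand-in for any opaque scoring model: True means "loan approved".
def decision(applicant):
    score = 0.5 * applicant["income"] / 1000 - 3 * applicant["missed_payments"]
    return score >= 9

# Brute-force search for the smallest combination of step changes that flips the decision.
def counterfactual(applicant, steps, bounds, max_steps=4):
    if decision(applicant):
        return {}  # already approved: nothing to explain away
    best, best_cost = None, None
    for ks in product(range(-max_steps, max_steps + 1), repeat=len(steps)):
        candidate = dict(applicant)
        for (feature, step), k in zip(steps.items(), ks):
            candidate[feature] = applicant[feature] + k * step
        # discard implausible candidates (e.g., a negative number of missed payments)
        if any(not (bounds[f][0] <= candidate[f] <= bounds[f][1]) for f in steps):
            continue
        if decision(candidate):
            cost = sum(abs(k) for k in ks)  # "distance" = number of unit steps changed
            if best_cost is None or cost < best_cost:
                best, best_cost = candidate, cost
    if best is None:
        return None
    return {f: best[f] for f in steps if best[f] != applicant[f]}

applicant = {"income": 18000, "missed_payments": 2}
steps = {"income": 1000, "missed_payments": 1}
bounds = {"income": (0, 200000), "missed_payments": (0, 12)}
print(counterfactual(applicant, steps, bounds))  # -> {'missed_payments': 0}

The point of such an output (“had you had no missed payments, the loan would have been approved”) is that it tells the person what would have needed to change, without disclosing the internal logic of the model.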
Most of these proposed solutions are based on transparency, explainability, and interpretability of artificial intelligence. (13) In general terms, to explain something means to make it clear to someone by describing it in more detail or revealing relevant facts.
In the field of Computer Science, explanation of AI has been referred to as making it possible for a human being (designer, user, affected person, etc.) to understand a result or the whole system. (16) Tim Miller, analysing the structure and expectations of explanations, identified four characteristics of explanations: (17) (a) explanations are contrastive, that is, people ask why a certain event happened instead of another one; (b) explanations are selected, that is, people rarely expect the complete chain of causes but select one or two of them; (c) referring to probabilities or statistical relationships is less effective than referring to causes; and (d) explanations are social, that is, they are part of a conversation or interaction aimed at transferring knowledge.
The GDPR (and, in particular, the provisions in Article 22 and recital 71) is often interpreted as referring to only “one” kind of explanation. Actually, there is no unique form of explanation in practice; (21) each form of explanation depends heavily on the context at issue. (22) More importantly, the capability to give a fair and satisfactory explanation also depends on the possibility to show how a specific decision was actually reached, which is often technically difficult (or impossible) with complex “black box” systems. (23) These last considerations may lead to an insurmountable dichotomy: either we content ourselves with simplified explanations that risk being superficial or even misleading, or we demand complete disclosures of the decision-making logic, which are often technically unfeasible and hardly comprehensible for data subjects.
In addition, explanations are not only sometimes problematic, but also not sufficient to make AI socially and legally “desirable.” In particular, several scholars have reflected upon the “transparency fallacy” of algorithmic explanation, (24) that is, the risk that even a meaningful explanation might not be effectively received or understood by data subjects (due to its technical nature or to the limited attention, interests, or even temporarily reduced cognitive capabilities of the data subject). (25)
To overcome the above-mentioned limits of AI explanation, a possible solution might be to look at the broader picture of the GDPR. Article 22(3) and recital 71, when mentioning the possible measures to make automated decisions more accountable, do not address only the right to an individual explanation, but also several other complementary tools (e.g., the right to contest the decision, the right to human involvement, and algorithmic auditing). In particular, there are several principles and concepts that might influence the interpretation of accountability duties also in the case of algorithmic decision-making: the fairness principle (Article 5(1)(a)), the lawfulness principle (Article 5(1)(a)), the accuracy principle (Article 5(1)(d)), the risk-based approach (Articles 24, 25, 35), and the data protection impact assessment model (Article 35).
Also looking at these last provisions, a broader and more systemic form of algorithmic accountability emerges: the data controller is asked not merely to explain individual decisions, but to justify the automated decision-making process as a whole.
This justification process will be addressed in the next section. However, at this moment we can already affirm that justification and explanation are not necessarily in conflict with each other. When explanations are not satisfactory or feasible, the data controller should implement some complementary accountability tools. (27) In a previous paper, the author and a co-author proposed to disclose meaningful information about a Data Protection Impact Assessment (DPIA) on the algorithmic decision-making system; the DPIA, as mentioned in Article 35 of the GDPR, is a process to assess and mitigate the impact of data processing operations on the fundamental rights and freedoms of data subjects. (28) This paper, in addition to that proposal, introduces a practical description of a possible justification test and of the related justification statement for automated decision-making under the GDPR.
Before describing the practicalities of a possible justification model and before exploring the advantages of this approach, it is useful to understand what justification means in general as well as in the legal field, particularly with regard to data protection. In general terms, a justification is an action to prove or show something (a person, an action, an opinion, etc.) to be right, just, or reasonable.
While an explanation, as mentioned above, aims to make humans understand why a decision was taken, a justification aims to convince them that that decision is “just” or “right” (following the different benchmarks of rightness in different fields). (33) In different terms, while explanations are descriptive, justifications are normative: they require both a “norm” (a benchmark of rightness) and a “proof” that the decision at issue conforms to that norm.
The proof can follow logical reasoning standards, while the “norm” depends on the specific context at issue. As shown above, the norm can be based on theological, philosophical (utilitarian, deontological, etc.), scientific (the scientific method) and, of course, legal grounds. Indeed, in legal terms, justification means proving that a certain action or act respects the current law and, more generally, the legal system in force, including its general principles.
Actually, as Loi and colleagues argue, (36) the two-dimensional justification that we mention above (norm and proof) should be of a hybrid nature. In particular, the norms can also come from different sources (e.g., utilitarian and legal): a decision-maker can justify a decision on her “primary goals” based on utilitarian norms (i.e., business objectives), but she is also asked to justify her decision on the “constraining goals” imposed by law and, thus, based on legal norms (or other ethical values), such as privacy, fairness, and so forth. (37) Justifying a decision on the primary goals aims to show that the decision is not morally arbitrary, while justifying it on the constraining goals aims to prove the legality of that decision.
Returning to the notion of justification in the legal field, legal theorists have traditionally described it as the giving of reasons that show a decision to be acceptable on the basis of the applicable legal sources and principles.
In sum, while an explanation tends to clarify only why a decision was taken (on which “primary goals,” and on which practical interests and needs it was taken), (43) a “legalistic” justification usually tends to focus on the mere written law, without a contextual consideration of the balance of interests.
Both these approaches appear incomplete for our purposes (the justification of algorithmic decisions). A desirable justification should show compliance not merely with the written “law,” but also with the principles and values underlying the legal system, through a contextual consideration of the balance of the interests and rights at stake.
In the GDPR we observe several references to the justification of data processing in general, and of automated decision-making in particular. In different parts of the GDPR, when there is a prohibition (e.g., the prohibition to repurpose the data processing, as stated in Article 5(1)(b); the prohibition to process sensitive data, as stated in Article 9(1); the prohibition to conduct automated decision-making, as stated in Article 22(1); the prohibition of transferring data outside the European Union, as mentioned in Article 44, etc.), there is always a list of exceptions, often accompanied by some safeguards to protect the fundamental rights and freedoms of the data subject. This combination of exceptions and safeguards is the basis of what we can consider a justificatory structure embedded in the GDPR: the prohibited processing is permitted only if the data controller can prove that an exception applies and that adequate safeguards are in place.
We might observe another strong example of justification in the GDPR: the case of high-risk data processing (Article 35). Under the Data Protection Impact Assessment (DPIA) model, data controllers must prove the necessity and proportionality of the data processing, and thus the necessity and proportionality of any automated decisions taken (Article 35(7)(d)). This may constitute a form of ex ante justification of the ADM processing.
In addition, the Article 29 Working Party Guidelines on profiling recommend that data controllers (in order to comply with Articles 13–15) explain the pertinence of the categories of data used and the relevance of the profiling mechanism. (46) Assessing whether the data used are pertinent and the profile is relevant for a decision, as well as assessing the necessity and proportionality of the data processing in an automated decision-making system, seems to constitute a call for justification. The purpose of such an assessment is not only transparency about the technology and its processes, but also a justification of the lawfulness, fairness, necessity, accuracy, and legitimacy of certain automated decisions. (47)
Interestingly, empirical research revealed that justification of algorithms (defined as showing the fairness of goals and rationales behind each step in the decision) is the most effective type of explanation in changing users’ attitudes toward the system. (48)
While some scholars have already addressed the need for a justification of automated decision-making (rather than a mere need for explanation), very few authors have tried to clarify what justifying an ADM system should mean in practice and on which elements of the GDPR such a justification could be based.
In recent years, scholars have called for fair algorithms, (49) or accountable algorithms, (50) or transparent algorithmic decisions, (51) or, again, for lawful, accurate, and integrous automated decisions. Justifying ADM means calling for algorithmic decision processes that prove to have all of these qualities at the same time.
Interestingly, the principles of data protection seem to lead to the desirable characteristics of automated decision-making, as mentioned above. We will now analyse them one by one, contextualising them to the case of algorithmic decision-making.
Article 5(1)(a) refers to lawfulness, transparency, and fairness. As regards lawfulness, the ADM processing should rely on a valid legal basis under Article 6(1) (and, where special categories of data are involved, on an exception under Article 9(2)), should fall under one of the exceptions in Article 22(2), and should be accompanied by the suitable safeguards required by Article 22(3) and, eventually, by Member State legislation.
As regards fairness, the automated decision-making should be nondiscriminatory and nonmanipulative, and it should not exploit the vulnerabilities of data subjects through excessive power imbalances; the controller should be able to verify these conditions, for example, through regular statistical auditing of the ADM results.
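As an illustration of what such regular statistical auditing could look like in practice, the following Python sketch computes group selection rates and a disparate impact ratio for a batch of ADM outcomes. It is an illustrative assumption only: the 0.8 threshold (borrowed from the US “four-fifths rule”) and the group labels are not standards set by the GDPR, and a real audit would require a far richer methodology.

from collections import defaultdict

# decisions: iterable of (protected_group, approved) pairs taken from the ADM logs.
def selection_rates(decisions):
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += ok
    return {g: approved[g] / totals[g] for g in totals}

# Ratio between the lowest and the highest group selection rate (1.0 = parity).
def disparate_impact_ratio(decisions):
    rates = selection_rates(decisions)
    return min(rates.values()) / max(rates.values()), rates

decisions = [("group_a", True)] * 62 + [("group_a", False)] * 38 \
          + [("group_b", True)] * 41 + [("group_b", False)] * 59
ratio, rates = disparate_impact_ratio(decisions)
print(rates)                   # {'group_a': 0.62, 'group_b': 0.41}
print(f"ratio = {ratio:.2f}")  # 0.66: below the illustrative 0.8 threshold

A result below the chosen threshold would not by itself prove discrimination, but it would indicate that the controller needs to investigate the outcome and document the follow-up as part of the justification.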
As regards transparency, the data controller should provide meaningful information about the logic, significance, and envisaged consequences of the ADM (Articles 13–15) and should be ready to provide general, group-based, or individual explanations that enable the data subject to challenge the decision.
Article 5(1)(b) refers, then, to purpose limitation: the purpose of the decision-making should be licit, clearly determined, and declared to the data subject, and the ADM should be based on data collected for that declared purpose; if the system was developed or trained on data collected for other purposes, it should be adjusted in order to avoid the resulting biases.
Article 5(1)(c) mentions the principle of data minimisation: the ADM should be based solely on data that are adequate and necessary for the declared purpose, and it should produce decisions that are strictly necessary for that purpose and context.
Article 5(1)(d) refers to data accuracy: the ADM should be based on accurate data and should produce accurate results, and the data controller should verify both conditions on a regular basis.
Article 5(1)(e) mentions the principle of storage limitation: the ADM should be based solely on data that are still necessary (e.g., not outdated) for the purpose and context of the decision, and on algorithms that are still necessary for the declared purposes.
Article 5(1)(f) mentions the principle of integrity and confidentiality: the ADM should be based on integrous data, and the processing should be resilient enough to protect the digital and physical safety of data subjects, with cybersecurity risks adequately assessed and mitigated.
The last principle in Article 5 is accountability (Article 5(2)): the controller is responsible for, and must be able to demonstrate compliance with, all of the principles mentioned above. From a substantive perspective, this means that all data protection safeguards and rights related to the ADM should be adequately implemented and made effective.
On the other hand, the methodological perspective of accountability indicates how such compliance should be demonstrated in practice: through documentation, impact assessments, regular auditing, and the capacity to prove compliance to data subjects and to supervisory authorities, for example, through the justification test proposed in the next section.
After having explained what the justification of ADM should be, in terms of content and approach, this section presents a possible practical example of a justification test (and the related justification statement for data subjects and Data Protection Authorities). In a previous paper, the author and a co-author proposed a “legibility test” for automated decisions under the GDPR. (81) Other scholars have proposed an Algorithmic Impact Statement (82) for accountable decision-making on personal data. This paper, with the aim of including those different experiences in a broader perspective, proposes an algorithmic justification test structured around the data protection principles in Article 5.
Such a justification test might act as an initial framework for conducting Algorithmic Auditing from a legal (and not merely technical) perspective. This test also might be the basis for conducting a DPIA on automated decision-making, in particular as regards the vague reference to the “assessment of the necessity and proportionality of the processing operations in relation to the purposes” (Article 35(7)(b)). The WP29 Guidelines on DPIA have proposed some criteria for an acceptable DPIA. These guidelines explain that the “assessment of the necessity and proportionality of the processing operations” also implies a good and comprehensive implementation of, inter alia, the data protection principles in Article 5 (namely the lawfulness, purpose limitation, data minimisation, and storage limitation principles). (83)
Following the structure of Article 5 and the framework discussed in the previous section, a possible ADM justification test could be structured around the following groups of questions.

ADM lawfulness: (a) Does the ADM data processing have a lawful basis under Article 6(1)? (b) Is the ADM based on, or does it produce, special categories of data? If yes, does the ADM data processing have a lawful basis under Article 9(2)? (c) Is the ADM based on one of the exceptions in Article 22(2)? (d) Is the ADM equipped with suitable safeguards as required in Article 22(3) and with the safeguards eventually required by Member State legislation?

ADM fairness: (a) Is the ADM nondiscriminatory? How can the controller ensure the nondiscriminatory nature of the ADM result? Does the controller employ anti-discriminatory auditing on a regular basis (e.g., on a statistical basis)? (b) Is the ADM nonmanipulative? How can the controller ensure the nonmanipulative nature of the ADM result? Does the controller employ anti-manipulative auditing on a regular basis (e.g., on a statistical basis)? (c) Does the ADM exploit individual vulnerabilities through excessive power imbalance? How can the controller ensure that this does not happen?

ADM transparency: (a) Has the controller provided meaningful information about the logic, significance, and the envisaged consequences of the ADM? (b) Is the controller ready to provide a clear general, group-based, or individual explanation to enable the data subject to challenge the decision?

ADM purpose limitation: (a) Is the purpose of the decision-making processing licit, clearly determined, and declared to the data subject? (b) Is the ADM processing based on data collected solely for that declared purpose? (c) How does the data controller ensure that re-purposing of data is avoided in that ADM system? (d) Was the ADM developed for other, different purposes? (e) Was the ADM eventually trained on data collected for other purposes? (f) If the answer to points (d) and/or (e) is yes, how does the controller ensure that the ADM processing has been adjusted in order to avoid biases?

ADM data minimisation: (a) Is the ADM based solely on data that are adequate and necessary for the declared purpose of the automated decision? (b) Does the ADM system produce decisions that are strictly necessary for the declared purpose and the context?

ADM accuracy: (a) Is the ADM based on accurate data? (b) Does the ADM produce accurate results? (c) How can the data controller ensure that points (a) and (b) are respected (e.g., through checks on a regular basis)?

ADM storage limitation: (a) Is the ADM based solely on data that are still necessary (e.g., not outdated data) for the purpose, the context, and the decision? (b) Is the ADM processing based on algorithms that are still necessary for the declared purposes?

ADM integrity and confidentiality: (a) Is the ADM based on integrous data? (b) Is the ADM processing resilient enough to protect the digital and physical safety of data subjects? (c) Are cybersecurity risks adequately assessed and mitigated?

ADM accountability: (a) Are all data protection safeguards and rights (related to ADM) adequately implemented? (b) Are these rights made effective? For example, is the right to challenge the decision enabled by a clear explanation to the data subject? Are there organisational steps to “put a human in the loop”? Are there organisational steps to comply with a challenge request?
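As a purely illustrative sketch of how a data controller might record the answers to such a test and turn them into a justification statement, the following Python fragment structures the test as internal documentation. The data model, field names, and output format are assumptions made for this example, not a format required by the GDPR; the sample questions are abbreviated from the test above.

from dataclasses import dataclass, field

@dataclass
class Answer:
    question: str
    answer: str       # free-text answer and evidence provided by the controller
    satisfied: bool   # the controller's own assessment of the item

@dataclass
class JustificationTest:
    system: str
    sections: dict = field(default_factory=dict)  # principle -> list of Answers

    def add(self, principle, question, answer, satisfied):
        self.sections.setdefault(principle, []).append(Answer(question, answer, satisfied))

    def statement(self):
        lines = [f"Justification statement for ADM system: {self.system}"]
        for principle, answers in self.sections.items():
            ok = all(a.satisfied for a in answers)
            lines.append(f"[{principle}] {'justified' if ok else 'open issues'}")
            for a in answers:
                lines.append(f"  - {a.question} -- {a.answer}")
        return "\n".join(lines)

test = JustificationTest("hypothetical credit-scoring model")
test.add("ADM lawfulness", "Lawful basis under Article 6(1)?",
         "Art. 6(1)(b): necessary for entering into the loan contract.", True)
test.add("ADM lawfulness", "Article 22(2) exception and Article 22(3) safeguards?",
         "Art. 22(2)(a); human review and a contestation channel are in place.", True)
test.add("ADM fairness", "Nondiscriminatory? Regular statistical auditing?",
         "Quarterly disparate-impact audit; last ratio 0.66, under review.", False)
print(test.statement())

A document of this kind could accompany (or feed into) the DPIA and could be disclosed, in whole or in summarised form, to data subjects and Data Protection Authorities.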
In recent years, legal scholars and computer scientists have widely discussed how to make automated decision-making under the GDPR transparent, explainable, and accountable. This article has argued that explanation alone is not sufficient: if we want a sustainable environment of desirable ADM systems, we should aim for “just” algorithms, that is, automated systems that combine transparency, explainability, fairness, lawfulness, and accountability.
This might be possible through a practical “justification” process and statement through which the data controller demonstrates, in practical terms, that the ADM system complies with the data protection principles and is therefore legally and ethically sustainable.
After an overview of the GDPR rules (Section 2) and of the definition and limits of AI explanations (Section 3), this article has proposed a wider systemic approach to algorithmic accountability (Section 4). In order to do that, the concept of justification is introduced and analysed both in general (Section 5), and in the legal and data protection fields (Sections 6 and 7). This article argues that the justificatory approach is already required by the GDPR rules. Accordingly, Sections 8 and 9 explain why and how the data protection principles could be a meaningful basis to justify ADM systems.
For sensitive data we refer to “special categories of personal data” according to Article 9(1). This prohibition does not apply in case of point (a) or (g) of Article 9(2) (i.e., sensitive data processed with the explicit consent of the data subject or processing necessary for reasons of substantial public interest), provided that “suitable measures to safeguard the data subject's rights and freedoms and legitimate interests” are in place.
Bryce Goodman and Seth Flaxman, 2016. “EU Regulations on Algorithmic Decision-Making and a ‘Right to Explanation’” arXiv:1606.08813 [cs, stat]
See, e.g., Lilian Edwards and Michael Veale, “Slave to the Algorithm? Why a ‘Right to an Explanation’ Is Probably Not the Remedy You Are Looking For,” 16 Duke Law & Technology Review 18 (2017).
Sandra Wachter, Brent Mittelstadt, and Luciano Floridi, “Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation,” 7 International Data Privacy Law 76 (2017).
Andrew D. Selbst and Julia Powles, “Meaningful Information and the Right to Explanation,” 7 International Data Privacy Law 233 (2017).
Article 29 Working Party, “Guidelines on Automated Individual Decision-Making and Profiling for the Purposes of Regulation 2016/679,” WP251rev.01, adopted on 3 October 2017, as last revised and adopted on 6 February 2018; Lilian Edwards and Michael Veale, “Slave to the Algorithm?,” (2017).
Sandra Wachter, Brent Mittelstadt, and Chris Russell, “Counterfactual Explanations Without Opening the Black Box: Automated Decisions and the GDPR,” 31 Harvard Journal of Law & Technology 841 (2018).
See Richard Mortier and others, ‘Human-Data Interaction’ in The Interaction Design Foundation (ed),
Gianclaudio Malgieri and Giovanni Comandé, “Why a Right to Legibility of Automated Decision-Making Exists in the General Data Protection Regulation,” 7 International Data Privacy Law 243 (2017).
Lilian Edwards and Michael Veale, “Slave to the Algorithm? Why a ‘Right to an Explanation’ Is Probably Not the Remedy You Are Looking For,” 16 Duke Law & Technology Review 18 (2017).
Kaminski, “The Right to Explanation, Explained,” (2019).
Margot Kaminski and Gianclaudio Malgieri, “Multi-Layered Explanation from Algorithmic Impact Assessments in the GDPR,”
See e.g., in general, Andrew D. Selbst and Solon Barocas, “The Intuitive Appeal of Explainable Machines,” 87 Fordham Law Review 1085 (2018).
Tim Miller, “Explanation in Artificial Intelligence: Insights from the Social Sciences,” 267 Artificial Intelligence 1 (2019).
Lexico, “Explain,”
Clément Henin and Daniel Le Métayer, ‘A Framework to Contest and Justify Algorithmic Decisions’ [2021]
Miller (n 14) 4.
Sandra Wachter, Brent Mittelstadt, and Chris Russell, “Counterfactual Explanations without Opening the Black Box: Automated Decisions and the GDPR,” 31 Harvard Journal of Law & Technology 841 (2018).
Aulis Aarnio,
Id.
Henin and Le Métayer, “A Multi-Layered Approach for Interactive Black-Box Explanations,” 38.
See Ronan Hamon and others, ‘Impossible Explanations? Beyond Explainable AI in the GDPR from a COVID-19 Use Case Scenario’,
Edwards and Veale, (n 3); Lilian Edwards and Michael Veale, “Enslaving the Algorithm: From a ‘Right to an Explanation’ to a ‘Right to Better Decisions’?” 16 IEEE Security & Privacy 46 (2018).
See e.g., Gianclaudio Malgieri and Jedrzej Niklas, “The Vulnerable Data Subject,” 37
Sandra Wachter and Brent Mittelstadt, “A Right to Reasonable Inferences: Re-Thinking Data Protection Law in the Age of Big Data and AI,” Columbia Business Law Review 2 (2019) available at
Edwards and Veale, “Enslaving the Algorithm,” (n 24).
Kaminski and Malgieri, (n 12).
Lexico, “Justification,”
Lexico, “Justification,”
See, in general, Larry Alexander and Michael Moore, “Deontological Ethics” in Edward N. Zalta (ed.),
Paul K Moser, “Justification in the Natural Sciences”
Or Biran and Courtenay V. Cotton, “Explanation and Justification in Machine Learning: A Survey,” /paper/Explanation-and-Justification-in-Machine-Learning-%3A-Biran-Cotton/02e2e79a77d8aabc1af1900ac80ceebac20abde4, accessed 26 November 2020.
Clément Henin and Daniel Le Métayer, ‘A Framework to Contest and Justify Algorithmic Decisions’ [2021]
Aarnio, (n 19) 8; Mireille Hildebrandt,
Michele Loi, Andrea Ferrario, and Eleonora Viganò, “Transparency as Design Publicity: Explaining and Justifying Inscrutable Algorithms” in
Id., at 8.
See, in general, Aarnio, (n 19); Arno R. Lodder,
Aarnio, (n 19) 6.
Id., at 8.
Lodder, (n 38).
J. C. Smith,
See also Kiel Brennan-Marquez, “ ‘Plausible Cause’: Explanatory Standards in the Age of Powerful Machines,” 70
On the link between legality and justification, see Hildebrandt, (n 35) 267.
Dariusz Kloza et al., “Data Protection Impact Assessment in the European Union: Developing a Template for a Report from the Assessment Process,” (LawArXiv 2020) DPiaLab Policy Brief 29 available at
See Article 29 Working Party, Opinion on Automated Individual Decision-Making, Annex, p. 30.
Kaminski and Malgieri (n 12) 8.
Biran and Cotton (n 33); Kaminski (n 26) 1548; Tom R. Tyler, “Procedural Justice, Legitimacy, and the Effective Rule of Law,”
Future of Privacy Forum, “Unfairness by Algorithm: Distilling the Harms of Automated Decision-Making,” (2017) available at
Joshua Kroll et al., “Accountable Algorithms,” 165 University of Pennsylvania Law Review 633 (2017).
Bruno Lepri et al., “Fair, Transparent, and Accountable Algorithmic Decision-Making Processes,”
Here
WP29, Guidelines on Automated Individual Decision-Making and Profiling for the Purposes of Regulation 2016/679, 23: “Article 22(1) sets out a general prohibition on solely automated individual decision-making with legal or similarly significant effects, as described above. This means that the controller should not undertake the processing described in Article 22(1) unless one of the following Article 22(2) exceptions applies (. . .).” See Michael Veale and Lilian Edwards, “Clarity, Surprises, and Further Questions in the Article 29 Working Party Draft Guidance on Automated Decision-Making and Profiling,”
See, e.g., Smith (n 42).
Damian Clifford and Jef Ausloos, “Data Protection and the Role of Fairness,”
See, e.g., Information Commissioner's Office, Big Data, Artificial Intelligence, Machine Learning and Data Protection, 2017, 22. See also Michael Butterworth, “The ICO and Artificial Intelligence: The Role of Fairness in the GDPR Framework,”
See, in particular, European Parliament resolution of 20 October 2020 with recommendations to the Commission on a framework of ethical aspects of artificial intelligence, robotics, and related technologies (2020/2012(INL)), Annex I, Article 5.
Malgieri and Comandé (n 9).
Kaminski and Malgieri (n 12).
Margot E. Kaminski and Gianclaudio Malgieri, “Algorithmic Impact Assessments under the GDPR: Producing Multi-Layered Explanations,” U of Colorado Law Legal Studies Research Paper No. 19-28, available at
Karthikeyan Natesan Ramamurthy et al., “Model Agnostic Multilevel Explanations,” available at
See the importance of purposes in the legibility test proposed in Malgieri and Comandé (n 9) 259–60.
Jonida Milaj, “Privacy, Surveillance, and the Proportionality Principle: The Need for a Method of Assessing Privacy Implications of Technologies Used for Surveillance,”
Pauline T. Kim, “Data-Driven Discrimination at Work,” 58 Wm. & Mary L. Rev. 857, 884 (2017).
See this definition in Solon Barocas and Andrew D. Selbst, “Big Data's Disparate Impact,” 104 California Law Review 671 (2016).
Sandra Wachter, “Affinity Profiling and Discrimination by Association in Online Behavioural Advertising,” (Social Science Research Network 2019) SSRN Scholarly Paper ID 3388639
Italics added.
Kroll et al., (n 50) 684. See the importance of accuracy (compared to interpretability) in Cynthia Rudin, “Stop Explaining Black Box Machine Learning Models for High Stakes Decisions and Use Interpretable Models Instead,”
Article 29 Working Party, Opinion on Profiling, 23.
See European Banking Authority, EBA Report on Big Data and Advanced Analytics, Annex I. See also, more specifically, Benjamin T. Hazen et al., “Data Quality for Data Science, Predictive Analytics, and Big Data in Supply Chain Management: An Introduction to the Problem and Suggestions for Research and Applications,”
Julia Dressel and Hany Farid, “The Accuracy, Fairness, and Limits of Predicting Recidivism,”
Theo Araujo et al., “In AI We Trust? Perceptions about Automated Decision-Making by Artificial Intelligence,”
European Parliament resolution of 20 October 2020 with recommendations to the Commission on a framework of ethical aspects of artificial intelligence, robotics, and related technologies (2020/2012(INL)), § 26: “Stresses that the protection of networks of interconnected AI and robotics is important and strong measures must be taken to prevent security breaches, data leaks, data poisoning, cyber-attacks and the misuse of personal data, (. . .); calls on the Commission and Member States to ensure that Union values and respect for fundamental rights are observed at all times when developing and deploying AI technology in order to ensure the security and resilience of the Union's digital infrastructure.”
European Parliament resolution of 20 October 2020, Annex I, Article 8(1)(a).
See, e.g., Bryce Goodman, “A Step Towards Accountable Algorithms?: Algorithmic Discrimination and the European Union General Data Protection” (2016); Dillon Reisman et al., “Algorithm Impact Assessment: A Practical Framework for Public Agency Accountability,” (AI Now Institute: 2018); Sonia K. Katyal, “Private Accountability in the Age of Artificial Intelligence,”
See, e.g., Antoni Roig, “Safeguards for the Right Not to Be Subject to a Decision Based Solely on Automated Processing (Article 22 GDPR),”
See Article 29 Working Party, Opinion on Profiling, 27, arguing, e.g., that “The data subject will only be able to challenge a decision or express their view if they fully understand how it has been made and on what basis.” See also Margot Kaminski and Gianclaudio Malgieri (n 12) 4.
See, e.g., the UK example of “procedural safeguards” about contestations in Gianclaudio Malgieri, “Automated Decision-Making in the EU Member States: The Right to Explanation and Other ‘Suitable Safeguards’ in the National Legislations,”
Information Commissioner's Officer, “Accountability and Governance,” (1 October 2020)
Raluca Oprişiu, “Reversal of ‘the Burden of Proof’ in Data Protection | Lexology,” available at
Malgieri and Comandé (n 9) 264.
Andrew D. Selbst, “Disparate Impact in Big Data Policing,”
Article 29 Working Party, Guidelines on DPIA, Annex II, 21.