Explainability of Artificial Intelligence Models: Technical Foundations and Legal Principles
Published Online: Apr 19, 2023
Page range: 1 - 38
DOI: https://doi.org/10.2478/vjls-2022-0006
© 2022 Jake Van Der Laan, published by Sciendo
This work is licensed under the Creative Commons Attribution 4.0 International License.
The now prevalent use of Artificial Intelligence (AI), and specifically of machine learning-driven models, to automate decision-making raises novel legal issues. One issue of particular importance arises when the rationale for an automated decision is not readily determinable or traceable because of the complexity of the model used: How can such a decision be legally assessed and substantiated? How can any potential legal liability for a “wrong” decision be properly determined? These questions are being explored by organizations and governments around the world.
A key input to any such analysis is the extent to which the model in question is “explainable”.
This paper seeks to provide (1) an introductory overview of the technical components of machine learning models, presented in a manner accessible to readers without a computer science or mathematics background, (2) a summary of the Canadian and Vietnamese responses to the explainability challenge to date, (3) an analysis of what an “explanation” is in the scientific and legal domains, and (4) a preliminary legal framework for analyzing the sufficiency of the explanation of a particular model and its prediction(s).