Open Access

Explainability of Artificial Intelligence Models: Technical Foundations and Legal Principles

About this article

The now-prevalent use of Artificial Intelligence (AI), and specifically of machine-learning-driven models, to automate decision-making raises novel legal issues. One issue of particular importance arises when the rationale for an automated decision is not readily determinable or traceable because of the complexity of the model used: How can such a decision be legally assessed and substantiated? How can any potential legal liability for a “wrong” decision be properly determined? These questions are being explored by organizations and governments around the world.

A key input to any such analysis is the extent to which the model in question is “explainable”.

This paper seeks to provide (1) an introductory overview of the technical components of machine learning models, presented in a manner accessible to readers without a computer science or mathematics background; (2) a summary of the Canadian and Vietnamese responses to the explainability challenge to date; (3) an analysis of what an “explanation” is in the scientific and legal domains; and (4) a preliminary legal framework for analyzing the sufficiency of the explanation of a particular model and its prediction(s).

eISSN:
2719-3004
Language:
English
Frequency:
Twice a year
Journal subjects:
Law, International Law, Foreign Law, Comparative Law, other, Commercial Law, Labor Law, Public Law