About this article

Artificial intelligence systems are currently deployed in many areas of human activity. Such systems are increasingly assigned tasks that involve making decisions about people or predicting their future behaviour. These decisions are commonly regarded as fairer and more objective than those made by humans, since AI systems are thought to be immune to influences such as emotions or subjective beliefs. In reality, using such a system guarantees neither objectivity nor fairness. This article describes the phenomenon of bias in AI systems and the role humans play in creating it. The analysis shows that AI systems, even when operating correctly from a technical standpoint, are not guaranteed to make decisions that are more objective than those of a human; nevertheless, such systems can still be used to reduce social inequalities.

eISSN:
2719-9452
Languages:
English, Polish
Frequency:
4 times per year
Journal subjects:
Law, International Law, Foreign Law, Comparative Law, other, European Law, Social Sciences, Political Science