The problem of value alignment in AI studies is becoming increasingly acute. This article addresses basic questions about the system of human values that should correspond to what we want digital minds to be capable of. It has been suggested that, as long as humans cannot agree on a universal system of values in the positive sense, we might at least be able to agree on what must be avoided. The article argues that while we may follow this suggestion, we still need to keep the positive approach in focus as well. A holistic solution to the value alignment problem is not in sight, and there may never be a final one. We currently face an era of continual adjustment of digital minds to biological ones, and the biggest challenge is to keep humans in control of this adjustment; the responsibility here lies with humans. Human minds may not be able to limit the capacities of digital minds. Philosophical analysis shows that the key concept in dealing with this issue is value plurality. It may well be that we have to redefine our understanding of rationality in order to deal successfully with the value alignment problem. The article discusses one option for elaborating the traditional understanding of rationality in the context of AI studies.
