Open Access

Balancing risk and social good: proactive law as a strategy in AI governance

Apr 02, 2025


The rapid evolution of artificial intelligence (AI) technologies has transformed numerous sectors, prompting regulators worldwide to establish normative frameworks that ensure the safe, ethical and beneficial deployment of these systems. Over the past two years, various regulatory approaches to AI governance have begun to take shape, sparking an ongoing debate. This debate reveals the limitations of traditional, reactive regulatory approaches, which fall short in addressing the unique characteristics of evolving technologies such as AI. In recent decades, however, a shift towards a more promotive direction has become noticeable, largely influenced by principles and methodologies drawn from more reflexive public policies, interdisciplinary research, sociotechnical advances and the pragmatism of the proactive legal approach endorsed by Nordic legal scholars. This article explores how the proactive law perspective may further enhance AI regulation. It argues that this approach should actively involve stakeholders in regulatory processes to promote collaboration among developers, regulators and the public, and to ensure that the development and use of AI and other (future) technologies align with societal needs and values.