
From Ethics to Law: Who Will Govern the Digital Age?
The expansion of artificial intelligence has raised crucial questions about how to combine technological innovation, the protection of fundamental rights and ethical development. While the international community has so far relied on declarations of ethical principles to govern this revolution, there is an urgent need to codify those principles in binding legal norms. The coexistence of ethics and regulation, therefore, is not only a desirable goal, but a necessary step to address the challenges posed by AI in an effective and sustainable way.
The first responses to the challenge of regulating AI have been provided through the drafting of ethical declarations and digital charters of rights. Among these, the Rome Declaration on the Ethics of AI and the Charter of Fundamental Digital Rights, to which academics, experts and policy makers have contributed, represent an attempt to steer the public debate towards a common vision of the ethical implications of technological development.
Although these declarations constitute an important step forward in acknowledging the need for AI governance, they have often proved limited in their practical effect. Their non-binding nature leaves a wide margin of discretion to market actors and poses the risk of fragmented and insufficient regulation.
In this context, a fundamental issue emerges: although ethics represents an indispensable framework, it cannot replace the law. Ethical declarations provide ideal guidelines and objectives, but it is only through clear and enforceable legal provisions that an effective balance between technological development and the protection of rights can be ensured.
The European Union has already embarked on this path with regulatory proposals such as the Artificial Intelligence Act (AI Act), which aims to create a common legal framework for the use and development of AI.
The AI Act, inspired by a risk-based approach, represents a paradigmatic example of how ethical principles can be translated into concrete rules. It is not a simple transposition exercise, but a transformation process in which ethical principles are expressed as specific legal obligations, calibrated according to the level of risk associated with different AI systems.
The regulation of AI cannot ignore the ongoing dialogue between ethics and law. The highly innovative and dynamic nature of AI requires flexible regulation, capable of adapting rapidly to technological change. In this context, ethical codes can play a complementary role to legal rules, offering a framework of principles capable of guiding the actions of the various actors beyond their legal obligations.
Co-regulation, understood as collaboration between public and private actors, represents an effective model for balancing the needs of innovation and protection. This approach allows for the integration of ethical principles into dynamic regulatory frameworks, strengthening the legitimacy and effectiveness of the rules.
A significant example of co-regulation is provided by the voluntary codes of conduct envisaged by the Digital Services Act (DSA). These codes, developed in collaboration with digital platforms, aim to mitigate systemic risks, such as disinformation, and to ensure greater accountability of operators with respect to harmful but not illegal content. Although compliance is voluntary, the DSA recognizes these codes as an essential tool for effective digital regulation, while also enabling the European regulator to exercise enforcement powers.
The European Union, with its approach based on risk regulation and co-regulation, has traced a viable path for responsible AI governance. This path, however, requires a constant commitment to ensuring that legal norms reflect shared ethical values, without stifling innovation but guiding it towards sustainable development that respects human rights.