Europe and the Challenge of AI Regulation in the Financial Sector
While the advancement of Artificial Intelligence (AI) promises enormous benefits for society and the economy, it also carries significant risks. Because AI systems are complex, opaque, and autonomous, regulators struggle to ensure compliance with existing laws and to determine liability when failures occur. The problem is particularly acute in financial markets, where the failure of an AI system can have disastrous consequences both for individuals and for market stability.
The European Union is trying to bridge the gap between the existing legal framework and the evolution of AI. Its current legislative tools include the AI Act, which introduces preventive obligations, and the proposed AI Liability Directive, which aims to create a complementary liability framework. Together, these instruments aim to ensure that AI systems are human-centered, ethical, explainable, sustainable, and respectful of fundamental rights.
The recent paper by Maria Lillà Montagnani, Marie-Claire Najjar and Antonio Davola, titled “The EU Regulatory approach(es) to AI liability, and its Application to the financial services market”, explores these very issues. The authors show how AI can bring significant benefits but also considerable risks, especially in complex sectors such as financial services, and they stress the importance of a legal framework that can balance technological innovation with the need to protect the rights of individuals and market stability.
A case in point on the risks of AI in financial markets is the dispute between Tyndaris SAM and VWM Limited. In 2017, Tyndaris signed an agreement with VWM to operate an account using K1, an AI supercomputer capable of predicting market sentiment and providing trading signals based on real-time news and social media data. The system, however, incurred significant losses, leading VWM to request a trading suspension and take legal action against Tyndaris. This case, analyzed in the paper, highlights how AI, if not properly supervised, can pose serious risks.
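To make the mechanism concrete, the following is a minimal, purely illustrative Python sketch of how a sentiment-driven system of this kind might turn news sentiment into trading signals, paired with the sort of human-set risk limit whose importance the dispute underscores. The sentiment scores, thresholds and stop-loss rule are hypothetical assumptions for illustration, not details of the actual K1 system.

```python
# Purely illustrative: thresholds, sentiment scores and the loss limit are
# hypothetical assumptions, not details of the actual K1 system.
from dataclasses import dataclass

@dataclass
class Signal:
    action: str   # "buy", "sell" or "hold"
    score: float  # aggregate sentiment in [-1, 1]

def sentiment_signal(scores: list[float], buy_at: float = 0.3,
                     sell_at: float = -0.3) -> Signal:
    """Aggregate per-headline sentiment scores into one trading signal."""
    avg = sum(scores) / len(scores) if scores else 0.0
    if avg >= buy_at:
        return Signal("buy", avg)
    if avg <= sell_at:
        return Signal("sell", avg)
    return Signal("hold", avg)

def risk_gate(signal: Signal, daily_pnl: float,
              max_daily_loss: float = -0.02) -> Signal:
    """Human-set circuit breaker: suspend trading once the loss limit is hit."""
    if daily_pnl <= max_daily_loss:
        return Signal("hold", signal.score)  # stop acting, escalate to a human
    return signal

if __name__ == "__main__":
    headline_scores = [0.8, -0.2, 0.5]    # sentiment of three news items
    raw = sentiment_signal(headline_scores)
    # The account is already down 5% today, past the 2% limit:
    final = risk_gate(raw, daily_pnl=-0.05)
    print(raw.action, "->", final.action)  # prints: buy -> hold
```

The point of the sketch is structural: without an explicit circuit breaker of this kind, an autonomous system keeps acting on its signals even as losses mount, which is precisely the supervision gap the Tyndaris case exposes.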
The financial sector is particularly exposed to AI-related risk because of its dependence on external data and the need to adapt quickly to new inputs. Characteristics of AI such as the capacity for self-learning and the opacity of decision-making processes make it difficult to assign responsibility for harm. For example, an AI-based credit-scoring system may unfairly discriminate against certain groups of people, or a high-frequency trading algorithm may amplify market volatility, with knock-on effects on the wider economy.
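To illustrate the credit-scoring concern, the sketch below computes the disparate impact ratio, a common fairness-audit heuristic known as the four-fifths rule, on invented approval data. The data, group labels and the 0.8 threshold are assumptions for illustration; the metric is one standard auditing tool, not a method prescribed by the AI Act or by the paper.

```python
# Purely illustrative: the approval data are invented and the 0.8 threshold
# is the conventional "four-fifths rule" heuristic, not a legal standard.

def approval_rate(decisions: list[bool]) -> float:
    """Share of applicants in a group whose loans were approved."""
    return sum(decisions) / len(decisions)

def disparate_impact(group_a: list[bool], group_b: list[bool]) -> float:
    """Ratio of the lower approval rate to the higher one, in (0, 1]."""
    rate_a, rate_b = approval_rate(group_a), approval_rate(group_b)
    return min(rate_a, rate_b) / max(rate_a, rate_b)

if __name__ == "__main__":
    # Toy decisions of an opaque credit-scoring model for two groups.
    group_a = [True, True, True, False, True]    # 80% approved
    group_b = [True, False, False, False, True]  # 40% approved
    ratio = disparate_impact(group_a, group_b)
    print(f"disparate impact ratio: {ratio:.2f}")  # 0.50
    if ratio < 0.8:
        print("potential discrimination: flag the model for human review")
```

An audit of this kind tackles the opacity problem from the outside: even when a model's internal logic cannot be inspected, its aggregate outcomes across groups can still be measured and flagged for human review.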
To address these issues, the EU introduced the AI Act, which classifies AI systems according to their level of risk and imposes specific obligations, concentrated on high-risk systems, to ensure their transparency, robustness and security. Violations can attract severe administrative fines designed to enforce compliance.
In parallel, the proposed AI Liability Directive provides mechanisms to ease the burden of proof for victims of harm caused by AI systems. National courts, for example, can order the disclosure of relevant evidence to support liability claims, and rebuttable presumptions make it easier to establish the causal link between the use of an AI system and the harm suffered.
These regulations will have a significant impact on financial markets, where the intensive use of AI-based technologies requires strict oversight to prevent systemic damage. AI regulation in the financial sector must therefore be coordinated with existing sectoral rules such as DORA, the Digital Operational Resilience Act, which aims to ensure that financial entities can withstand ICT-related disruptions.
Maria Lillà Montagnani, Professor of Business Law at Bocconi University in Milan, says, “The main challenge for regulators is to strike a balance between technological innovation and the protection of fundamental rights. AI regulation must be strict to ensure security and transparency, but also flexible so as not to stifle innovation.”
In conclusion, the EU's approach to AI regulation, as highlighted in the paper by Montagnani, Najjar and Davola, seeks to balance innovation with consumer protection, creating an environment of trust necessary for AI development. This strategy is crucial to ensuring that AI can benefit society without compromising fundamental rights and economic stability.