
Technology in times of crisis

by Chiara Graziani, Research Fellow, Department of Law
The impact on rights and freedoms, the institutional consequences and the implications for economic rights: these are the three issues the law must address when AI systems are used in emergency situations such as terrorist threats or pandemics

How is advanced technology being used to stem the adverse effects of emergencies, both political (such as international terrorism) and non-political (such as the pandemic)? Is the frequent replacement of human decision-makers by "intelligent" algorithms affecting how advanced democracies deal with emergencies and ensure the protection of rights in times of stress?

These questions, which underpin the Bocconi unit's stream of research within the PRIN excellence project "Decision-Making in the Age of Emergencies: New Paradigms in Recognition and Protection of Rights", arise from three main concerns raised by the use of artificial intelligence (AI) during crises. The first is the impact on rights and freedoms per se, i.e., the risk of their disproportionate limitation; the second is the institutional consequences of AI, which indirectly spill over onto human rights; and the third is the relationship between the resort to AI and the market, and hence the implications for economic rights.

The first aspect is the most obvious: we are all accustomed to thinking that automated and self-programming systems, not just during emergencies but in everyday life, may infringe on our rights to privacy, data protection and free speech, given the amount of data (so-called big data) that algorithms need to be fed. Nevertheless, when this happens in times of emergency (think of surveillance regimes based on so-called black boxes), even more sensitive issues arise. On the one hand, people are prone to accept stronger limitations for the sake of a higher good (e.g., national security or public health), but this can be detrimental to upholding the rule of law; on the other hand, in emergencies more than in any other context, these advanced tools are often used in secret, raising serious problems of transparency and accountability.

The institutional consequences concern the progressive and apparently inescapable erosion of public powers that the use of sophisticated technologies entails. Although there are some attempts to regulate these tools, including during emergencies, through legally binding sources, their practical implementation is largely left to so-called Big Tech, which is thereby significantly involved in public functions (e.g., programming algorithms for surveillance purposes, developing apps for contact tracing, and so on). Undoubtedly, this shift of power to private entities is likely to affect the guarantees for rights and freedoms, as such bodies operate according to a different logic from public ones.

The third aspect is that the more these advanced algorithmic systems become essential to dealing effectively with threats (think of algorithms that flag potentially "dangerous" content online), the higher the risk that only big companies will be able to afford them. Smaller enterprises could thus end up being pushed out of the market, with potentially adverse consequences for economic rights, at least if no counter-measures are taken.

The dynamics described here are not necessarily harmful, as they may simply be natural features of the digital age, inevitably affecting, among other things, the management of emergencies. However, they certainly need to be monitored and watched over, so that they do not "scar" the defining features of rule-of-law-based systems. In other words, technology runs faster than law, and this is a matter of fact; yet it should not lead public regulators to pull back entirely.