Opinions

The fallibility of Artificial Intelligence

by Oreste Pollicino, Full Professor in the Department of Legal Studies
ChatGPT and generative AI risk amplifying phenomena like hate speech and online defamation, opening up ethical questions as well as legal ones. There is also the issue of energy consumption

Having a dialogue with an artificial intelligence is the fad of the moment. Colossal investments, surprising results, and also growing concerns have placed technologies such as ChatGPT - generative artificial intelligence - on the crest of a wave of digital enthusiasm.
One of the most significant problems that arise in this regard is the possibility that online disinformation gets further amplified, because access to knowledge could be distorted by texts that are often convincing and persuasive, but not entirely truthful.
An essential reference point for legal reasoning is the body of fundamental rights protected by constitutional charters, analog documents that nonetheless remain very current.

In particular, reference is made to the right to be informed, if not truthfully, at least reliably; to the principle of habeas data in its evident digital projection; and to the right to fair remuneration for copyright holders.
These initial reflections lead us to argue that, beyond the legislation in the pipeline (in particular, the proposed Artificial Intelligence Act), the humanist principle that characterizes the whole structure of Italy's Constitutional Charter and the Charter of Fundamental Rights of the European Union already offers clear orientation. This is especially true with regard to the protection of the dignity of the person, to be safeguarded both as an individual and in his or her participation in intermediate social communities, which risk being subjected to an incessant process of fragmentation and disassembly driven by the explosion of artificial intelligence.
In addition to the legal questions which, like those relating to copyright protection mentioned above, are likely to fill the days of judges, lawyers and scholars, there are at least two other questions that need to be asked. The first is ethical in nature; the second relates to the energy problem.

As for the former, the not exceedingly sophisticated automation processes that extract an apparently original and convincing narrative text from the boundless repository of data on the internet, often delivered in a feel-good and very diplomatic tone, push many towards the erroneous belief that AI is "self-sufficient" and that the human component is therefore entirely absent. Nothing could be further from the truth. To arrive at that result, as Roberto Battiston and Massimo Sideri observed in a recent article in the Corriere della Sera, there is the work of thousands of "workers" of generative artificial intelligence, who could also be called the "new slaves": for little more than a dollar a day, often in Asia or Africa, they carry out research and classification work within one of the most disturbing and alienating virtual worlds, the dark web, precisely in order to tame the verbal and visual aggressiveness of the early versions of ChatGPT.

Many ethical questions can clearly be raised in this regard, not only in connection with the exploitation, and, one could add, the veritable alienation, to which these individuals are subjected, but also with possible scenarios in which those who control the verbal and linguistic framework where the answers to our questions take shape change their intentions. The answers could then take on a provocative, aggressive or even offensive tone, adding the problems of hate speech and online defamation to that of disinformation.

The second question, on the other hand, touches on energy, an extremely relevant concern in our current, well-known historical predicament. As CNR President Maria Chiara Carrozza recently noted, the use of generative artificial intelligence techniques is by no means free. It carries a very significant consumption load and can therefore, in some cases, cause energy waste because, as we said, AI requires for its functioning the processing of a boundless quantity of data together with massive computational power.
We should resist the temptation to be overly pessimistic, however. From the point of view of the theory of language, as Carrozza herself has noted, generative AI constitutes a true bridge between the humanities and computer science: it uses a computational linguistics technique that builds a humanistic narrative on the basis of the probabilistic calculation typical of the hard sciences.