Generative AI: A Lot of Hype but Also a Lot of Doubts
Since OpenAI introduced ChatGPT, public opinion and companies have been swept up in a wave of debates, sensationalist slogans, catastrophic proclamations, and facile generalizations that have made approaching an already complex phenomenon thoroughly confusing.
After an initial phase of understandable enthusiasm for the novelty and performance of generative systems, public opinion was quickly captured by a new 'catastrophist' rhetoric, which sees in AI, and especially in generative AI, an almost existential threat to humanity. The emblem of this position is the open letter published in early 2023 by the Future of Life Institute, which called for a moratorium on the development of AI systems more powerful than GPT-4. The months that followed brought no shortage of new bursts of enthusiasm, corresponding to the release of new ChatGPT features, the launch of similar products by competitors, and enormous funding rounds raised by AI startups.

The storm continues, however, because these moments of overwhelming optimism, driven by a vendor narrative that emphasizes the revolutionary economic and social effects of generative AI, have been interspersed with more worrying signals. Foremost among them in terms of media reach is the resignation of Geoffrey Hinton, one of the fathers of modern AI and a winner of the Turing Award, from his position at Google so that he could speak more freely about the social risks AI poses. There are also worrying signs in the labor market, such as the 118-day strike called by SAG-AFTRA, the Hollywood actors' union, which, among other issues, obtained commitments to protect its members against the risks of AI. To these elements we can add a further detail: knowledgeable insiders are naturally careful in their use of terms typical of the AI context, such as 'intelligence', 'learning', 'neural networks', or 'hallucinations', but those who do not work in AI may interpret them quite differently. At a moment when access to the capabilities of AI systems is expanding dramatically, this risks further exacerbating both ultra-positive and fearfully skeptical rhetoric.
So what considerations can we make, to date, about the impacts of AI on work to help us navigate such a multifaceted debate?
A key point concerns the careful reading required of the first numbers on the quantitative impacts of adopting generative tools in some typical kinds of work. For example, tools that support coding to varying degrees have been tested in different contexts. Although users widely report positive impacts, several studies have also shown that those who use these tools tend, even when highly confident in the output, to produce code with more security issues. Similarly, a study conducted by Harvard Business School in partnership with Boston Consulting Group tested the use of ChatGPT on about 700 business consultants. The results, positive at an aggregate level, nevertheless highlighted points of attention: impacts were not homogeneous across classes of consultants, with those already performing best benefiting less, and exposure to errors was greater on less well-defined tasks.
Drawing conclusions about the actual impact of new generative AI tools on the world of work, society, and business processes is therefore particularly complex. To resist the temptation of a relentlessly positive or negative dialectic, however, it is essential to interpret rationally the first evidence emerging from cases of actual adoption, without uncritically embracing one position or the other. Only then can we put ourselves in a position to formulate appropriate strategies of accompaniment, response, and mitigation of potential negative impacts.