In the beginning it was Cambridge Analytica…
In December 2012, a smiling Bono prophesied that "Big Data will save politics" on the cover of a famous technology magazine. The illusion was that direct access to information through social networks would make it harder for politicians to manipulate voters. In 2018 the same magazine ran a worried headline: "Why AI is a threat to democracy – and what we can do to stop it." What happened, and what could happen next?
In the beginning it was Cambridge Analytica, which, thanks to a Machine Learning model, could infer an individual's psychological profile from his or her activity on Facebook. With this model, Cambridge Analytica influenced elections in dozens of countries, showing different ads for the same political issue, personalized according to the psychological profile of each user. For example, ads supporting the right to bear arms in the US emphasized the need for security to neurotic users and respect for custom and tradition to conservative voters. Cambridge Analytica was founded by Robert Mercer, a billionaire and major donor to the Republican Party and to the far-right site Breitbart, on whose board sat Steve Bannon, for a time Trump's strategic adviser and a sponsor of Italian politicians.
Anyone who studies marketing knows that to persuade individuals to buy a product, you need to make them believe the product is popular. Similarly, to push voters to embrace a political idea, you must convince them that a large share of the population already holds that idea, by repeatedly bringing them into contact with others who express it. On social networks, this pressure is exerted by bots: applications programmed to perform certain tasks automatically, without human intervention. Their purpose is to amplify a given message so as to slowly but systematically steer public opinion in a specific direction. Research has repeatedly identified networks of bots, controlled by a single puppeteer, that are activated around politically relevant events. For example, one bot network was very active before the 2016 US election, then fell silent until three days before the 2017 Macron–Le Pen runoff, when it suddenly reawakened to amplify the #MacronLeaks disinformation campaign – about alleged scandals involving Macron – and then went dormant again for months.
The latest developments in the field of Artificial Intelligence have made it very difficult to identify, and therefore to counteract, such bots. Until a few years ago, for example, bots used the faces of real people, stolen from photographs on the web, as their profile pictures; a simple Google image search exposed the theft and, with it, the bot. Today, generative artificial intelligence can create non-existent yet completely realistic human faces, making it impossible to identify a bot from its profile photo. Moreover, bots used to be limited to sharing messages written by their creator, since producing original ones was difficult, so it was relatively easy to realize you had stumbled upon a bot. Unfortunately, current artificial intelligence models can now reason strategically and express their arguments convincingly: a recent study has shown that messages written by Artificial Intelligence are as persuasive as those written by humans. It is easy to imagine the consequences of this technology in the hands of those interested in destabilizing democracies.