
Artificial Intelligence and Social Media

by Elisa Bertolini, Assistant Professor at the Department of Legal Studies
Facebook is implementing an algorithm to prevent suicidal behavior and thus turn the social network into a force for good. But what if it were just a move to circumvent EU privacy laws?



The ambivalence of the Internet as a force for good but also for evil poses a challenge for social media companies, and more generally for the whole media industry, which must deal with the global problem of the transmission of dangerous content. The issue is particularly sensitive when it comes to the twofold relationship between Internet use and suicide. The web conveys content that can drive a person to suicide (cyberbullying, shaming, revenge porn, etc.), but it can also enable, through a targeted use of Artificial Intelligence (AI), the early identification and prevention of suicidal tendencies.

Firstly, there is the problem of how to remove dangerous content. A recent news story provides a case in point: the blocking of a pro-ana blog run by a 19-year-old from Porto Recanati, in the Marches region. Is this really an effective answer? The net hosts about 300,000 sites that instigate anorexia and bulimia (pro-ana and pro-mia, respectively), and it seems hard to envision shutting all of them down, especially given the difficulty of identifying suspicious content when it is conveyed through social media (which are not publishers and therefore do not control content) or instant messaging.

Furthermore, what is the legal basis for such actions? Despite three Italian legislative attempts (in 2008, 2010, and 2014), the instigation of anorexia and/or bulimia is not a crime per se; police authorities therefore have to resort to the charge of instigation to suicide when they ask a judge to censor a site. Moreover, it is not straightforward to treat such networks as dangerous forms of instigation. Since the bloggers themselves suffer from the very disorder they promote, psychologists and nutritionists do not believe that punishing those who run these sites would act as a disincentive for others. Punishment could even prove counterproductive with respect to the avowed objective of fighting anorexia and other eating disorders.

Secondly, there is the issue of the purported life-saving role of the social network. Facebook has decided to counter the diffusion of suicidal behavior not by monitoring and removing malicious content as it is posted, but by trying to identify suicidal intentions in users' posts through AI. Its algorithms can recognize alarming situations on the basis of specific keywords appearing in the user's posts and in the reactions of close contacts (for the latter, offers of help or expressions of concern).
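Facebook has not disclosed how its classifier works, but the mechanism described here (keyword signals in the user's own posts, combined with concerned reactions from contacts, escalated to human reviewers above some threshold) can be illustrated with a minimal sketch. Everything in the example below, from the keyword lists to the Post structure, scoring function, and threshold, is a hypothetical illustration and not Facebook's actual system.

```python
# Minimal sketch of keyword-based risk flagging. The phrase lists and
# threshold are illustrative assumptions; the real classifier is not public.
from dataclasses import dataclass

# Hypothetical signal phrases in the user's own posts.
USER_SIGNALS = {"want to die", "end it all", "no reason to live"}

# Hypothetical signal phrases in comments from close contacts
# (offers of help or expressions of concern).
CONTACT_SIGNALS = {"are you ok", "please talk to me", "i'm here for you"}

@dataclass
class Post:
    text: str
    comments: list  # comments from the user's contacts

def risk_score(post: Post) -> int:
    """Count keyword hits in the post and in contacts' comments."""
    text = post.text.lower()
    score = sum(phrase in text for phrase in USER_SIGNALS)
    for comment in post.comments:
        lowered = comment.lower()
        score += sum(phrase in lowered for phrase in CONTACT_SIGNALS)
    return score

def flag_for_review(post: Post, threshold: int = 2) -> bool:
    """Escalate to a human specialist team only above a threshold,
    mirroring the automatic-reporting step described in the article."""
    return risk_score(post) >= threshold

if __name__ == "__main__":
    post = Post(text="I feel like there is no reason to live anymore",
                comments=["Are you OK? Please talk to me."])
    print(flag_for_review(post))  # True: one user signal, two contact signals
```

The threshold captures the trade-off the article implies: flagging every single keyword would swamp the specialist team with false alarms, while requiring corroborating signals from contacts narrows the escalations to cases where others have already expressed concern.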

The next step involves automatic reporting to a team of specialists who, if they deem it necessary, will contact paramedics and specialist medical help. The system has already been tested successfully in the USA; it is unclear whether its implementation is feasible in the European Union, given the stricter regulation of privacy rights. Facebook's monitoring system offers no opt-out and profiles users on the basis of sensitive data, both impermissible under EU law. A possible compromise would be to ask European users for prior consent and allow them to opt out of the service, even if this choice were to remain unavailable to users in the rest of the world. The well-founded fear is that Facebook may want to exploit the social utility of this service as a Trojan horse to circumvent the restrictive European privacy rules that bite into its business model. Without diminishing the importance of this virtuous use of AI, we cannot help wondering why AI is not also deployed more extensively to strengthen controls on who gains access to the platform and to counteract hate speech, shaming, and other forms of cyberbullying. The question, for the moment, remains unanswered.