
In the Regulatory Race, the EU Is Beating the US

by Oreste Pollicino - Full Professor, Department of Legal Studies
The EU Code of Practice on Disinformation presented last year raises the level of security of digital platforms and strengthens the position of users through a co-regulation model. But in the homeland of self-regulation, the US, such a model could fail to take root

On 16 June 2022 in Brussels, I handed the Vice President of the European Commission Vera Jourova the new European Code of Practice on Disinformation, signed by many actors operating in a plurality of sectors: from civil society to large platforms, from fact-checkers to companies operating in the advertising industry. My role was that of honest broker, facilitator and coordinator of the drafting process.
It is, in any case, the first mechanism at a global level for the co-regulation of disinformation on the basis of a code of conduct involving all the main parties concerned.
More than a year later, and in light of next year's EU and US elections, there are two fundamental questions that must be answered.
Will the Code serve to mitigate online disinformation in Europe, especially in a very delicate electoral season? Secondly, can it be used as a model for fighting disinformation on the other side of the Atlantic, too?
As for the first question, there is some hope, given the paradigm shift that the Code calls for compared to the status quo. We move, as mentioned, from a context of mere self-regulation, in which the web giants write and apply the relevant rules, to a very different one of public-private co-regulation. Concretely, this means that if the signatories fail to fulfill their commitments, they will face sanctions from the EU institutions. What is the added value of this process in terms of the objectives the Code's content achieves? There are at least three.

First, it raises the level of security against disinformation techniques, procedures and strategies, first and foremost in the spaces that increasingly serve as digital agoras hosted by the web giants. Second, it strengthens the position of users through new tools that make it easier to identify false information and mitigate the risk of a polluted debate. Third, it ensures constant dialogue between platforms and independent fact-checkers, who are entitled to fair remuneration.
As for the second question - whether and how this European innovation could inject doses of heteronomous regulation into the self-regulation prevailing in the United States (not only for disinformation but also for artificial intelligence), against the will of industry operators - the answer must be less optimistic.
In the United States, the situation regarding the (non-)fight against online disinformation is rather paradoxical.
On the one hand, there is a certain terror - well-founded, I would say, given what happened in the Trump vs Clinton presidential election - of the disinformation professionals who polluted the debate and tried to influence the outcome of the vote. The fear concerns the risk of external interference in the 2024 elections (a bill on this is already in the pipeline in Congress).

On the other hand, there is another terror that hovers over any attempt at regulation and makes it almost taboo: that of undermining in any way the free-speech foundations of the Bill of Rights. As a result, even a simple communication from the White House to the large platforms about their commitment to prevent false content from being published online, on sensitive topics such as elections and public health, has been found (as recently happened in the Fifth Circuit Court of Appeals) to conflict with the First Amendment.
In conclusion, the US strikes me as very unripe ground for extending the co-regulation model at the basis of the new EU Code of Practice on Disinformation.