News

Interview with Dr Carsten Ullrich – 2021 Rolf Tarrach Prize recipient

  • Faculté de Droit, d'Économie et de Finance (FDEF)
    09 July 2021

Every year, the Association “Amis de l’Université” bestows a special prize, the Rolf Tarrach Prize, rewarding the best doctoral thesis of the University of Luxembourg.

This year, the Prize has been awarded to Dr Carsten Ullrich, a member of the Faculty of Law, Economics and Finance (FDEF), for his thesis “Unlawful Content Online: Towards A New Regulatory Framework For Online Platforms”.

On the occasion of the Award Ceremony held on 8 July, Dr Ullrich discusses the topics and challenging issues tackled by his thesis, focusing on the legal implications for online marketplaces and social media networks when confronted with illegal content.

Dr Ullrich, can you say a few words about your impressive international career path before joining the University of Luxembourg?

I have worked and studied in several places across Europe over my career, and really started to look into internet regulation after studying in London. The constants in my career have probably always been technology and technology regulation. After graduating, my first job and first “brush” with regulation of the internet, at the end of the 1990s, was as a market researcher for a company that created market intelligence reports for one of the first online business information providers. I then moved back to Berlin, where I joined the British embassy and took on lobbying as well as economic and regulatory analysis on IT policy and telecoms deregulation. I was already immersed in issues of broadband internet rollout and internet governance at the time. I then joined the Canadian embassy in Berlin, where this topic was really getting off the ground. Later on, in 2006, I accepted a position at Amazon here in Luxembourg, which was the real game changer in terms of my practical exposure to internet and e-commerce regulation.

I worked as a regulatory compliance manager and fraud detection manager for a number of years until I decided to take a step back and treat some of the issues I had seen at work on a more theoretical level. I was very lucky to have the University of Luxembourg literally on my doorstep, two hundred metres away from the Amazon office. I had learned about the Doctoral Training Unit on Enforcement in Multi-Level Regulatory Systems (DTU REMS), which had a position opening in the area of digital enforcement, in the team of Professor Cole. My research idea had been developing in my mind for some years, but as soon as I joined the University I went to work to make it concrete and start executing it.

I actually started out looking at the issue of counterfeit products on the internet, a very topical area, because there was a huge emphasis on what I call economic rights enforcement with regard to copyright and to goods sold online that are pirated, counterfeit or dangerous. My focus then widened to include hate speech, terrorist content, cyberbullying and defamation.

Hate speech has become a huge policy preoccupation in the EU, with a tremendous impact on users. Looking back, especially at the change in the political environment in the USA in 2016 with the election of Donald Trump, and also at Brexit, we see that policymakers consider that social media influence public debate. In fact, it is not social media as such that influence public debate, but their content management practices, which steer users to interact in a certain way. The practices of disseminating, sharing and amplifying certain extremist, potentially “borderline” views are what cause risks and harm.

One of the pressing issues at the moment, as seen in the Covid crisis, is how our online discourse changes the debate in a democratic society. Some of the dangers, risks and harms that have been identified directly concern how online platforms conduct their business. “Conducting business” here means how they incite users to share content and how they amplify certain content, mainly based on commercial criteria and aimed at generating additional advertising revenue.

It’s important to reach a global consensus, but first we should provide a European solution to the problem and then go to the global stage to promote that view. The EU is dealing with platforms that mostly originate in the US. In my thesis, I refer to harmful practices that affect users in the EU. That is why we have to start within the European context, by shaping standards for European users, in line with our fundamental rights and values, in order to get platforms to behave more responsibly.

Many important social networks have implemented content control measures meant to identify occurrences such as hate speech or negationist views (Holocaust denial and others). Are those measures not sufficient? How does this fit into your analysis?

That’s actually one of the central points of my thesis. I review many different areas, such as hate speech, defamation, counterfeiting and product safety. The platforms decide what is unlawful within their own systems, which I call “private enforcement”. Most of these private enforcement practices are set up by the platform itself, according to its own terms and conditions. The main question is whether they comply with the legal standards of transparency and consistency that democratically elected governments have set up. For instance, what is terrorist content? Facebook takes down millions of postings of terrorist content per quarter. Yet of the 85,000 referrals sent by official authorities in the EU over a span of three years, the platform chose not to action 15%. There are parallel systems of private and public enforcement. If you let that develop, you will end up in a scenario where these companies make the law, with no transparent practices.

When companies like Facebook enforce certain policies, we don’t know what standards they apply. There is evidence that Facebook changes its own content policies very frequently and decides on the go what is taken down, with commercial objectives playing an overriding role.

In short, these control systems do work, but they are operated mainly with a view to minimising the reputational impact on the platforms and are not so much motivated by compliance with the law. From my own business experience, there is an obvious prioritisation of takedowns according to the commercial importance of the content, or of the business or user behind it, for the platform. And that is, for me, unacceptable.

How can we change the course of things, in the business sector in particular? How can we really counter those threats?

The E-commerce Directive, for instance, was enacted in 2000, at a time when the internet was totally different and the responsibilities of platforms were very low. As a result, these companies enjoy an exemption from liability for almost any content that users post. Therefore, I am advocating a new system in which these platforms are placed on a level similar to many other important corporate actors. Instead of saying that they are exempt from liability for the content that others post on their systems and from the related threats, they must become more proactive. When implementing a certain technology feature, such as live streaming, or enabling a commercial platform where any vendor across the world can sell regulated products into Europe, there must be an awareness that great risks prevail. We have seen the consequences with regard to live streaming, for example, in the deadly terrorist attack in New Zealand. If users – largely anonymous – are allowed to stream live content in an almost unlimited way, the platform is effectively facilitating the dissemination of the most odious criminal acts.

My thesis proposes mandatory risk management for platforms, based on technical standards, borrowing from existing legislation in the area of money laundering, but also in the areas of data protection, health and safety and product regulation. Companies must adopt standards that are safe by design: not only removing illegal content as it appears on the platform, which is a reactive measure, but also creating platforms whose architecture deters illegal behaviour from the outset. A lot of research has already been done into creating safe platform architectures and business models. I am proposing to incorporate those measures into a standard and make the most relevant ones mandatory, along the lines of what is being done, for example, in EU product legislation and other areas.

Does this approach also involve cybersecurity issues?

Cybersecurity is another very important area in this context. If I can assess the safety of my systems from an IT security perspective, I can also use that approach to assess the platform’s risk of facing illegal activity, similarly to fraud detection. I know from my own work experience that in today’s e-commerce environment there is a huge overlap between security and fraud, for example. We just need to put it together as a customised system for preventing illegal content and activity.

In my proposal, the platform needs to identify issues that can pose a risk to users and to public interests, such as live streaming and anonymity, which can have very risky impacts. Anonymity on the internet, for example, is very important but can also be hazardous. A platform needs to be able to identify these high risks and define control measures: this would be part of a standard risk management approach in this area.

After having so successfully completed your thesis, do you plan to remain in academia or return to the private sector?

First of all, I am very grateful for the education I received and for the privilege of conducting my research at the University of Luxembourg. I am now planning to join a European platform as legal counsel, in order to help implement trust and safety systems and accompany innovation. I hope my experience and research will inspire students and researchers.