Generative AI content under EU DSA and UK Online Safety Act

The technology at the forefront of public debate is Generative Artificial Intelligence (GenAI): AI systems specifically intended to generate, with varying levels of autonomy, content such as complex text, images, audio and video. The current text of the EU AI Act, based on the political agreement reached on 9 December 2023 and leaked on 22 January 2024, also addresses transparency obligations for providers and users of GenAI under Art. 52 (1) AI Act.

The use of GenAI has the potential to bring enormous benefits to businesses and individuals: GenAI tools can help with idea generation, content planning, research, review and editing, and ultimately, with content creation. GenAI can be used as standalone software, or as part of other services – for instance through integration into a social media app, an email inbox or a search engine.

At the same time, from an ethical and societal as well as a legal point of view, the potential risks following in the wake of any new technology are as significant as its vast benefits. In connection with GenAI, one of these risks is the potential for the auto-generation of illegal or harmful content. Conversely, GenAI can also be part of the solution to addressing online harms, where it is embedded into content moderation processes to enable the automated and efficient removal of illegal and violative content at scale. So let us look at both sides of the equation: First, what risks does GenAI pose in the context of content creation? Second, what role can it play in the content moderation that platform operators are already intensively engaged in? In that context, we also look at the central requirements for content moderation as freshly stipulated in the EU Digital Services Act (DSA) and the UK Online Safety Act (OSA). And finally: Does the presence of GenAI on both sides of the equation create benefits, or friction? Can the broomsticks effectively control each other? What lies ahead when we consider GenAI as a tool for the creation and moderation of content alike?

Risks of Generative AI related to illegal and harmful content

Harmful and illegal content, from hate speech and disinformation campaigns (fake news) to counterfeit goods and fake social media profiles, has been a frequent phenomenon ever since the creation of the internet. GenAI could be abused by bad actors to amplify such material even further:

  • Generating vast amounts of harmful content: With the help of GenAI, bad actors can create, and then disseminate, abusive forms of content more easily, and with a more misleading quality, than ever before. Text-to-image AI can be abused to create deepfakes: pictures and videos that have been digitally manipulated, for instance by convincingly replacing one person’s likeness with that of another, or by generating imagery entirely from scratch. In the UK, there have been disturbing recent news reports of AI-generated child abuse material increasingly being shared amongst school children. All of this can amount to serious criminal activity that our current penal codes often struggle to deal with.
  • Creating and spreading misinformation faster: The creative power of GenAI, combined with the omnipresence and speed of the internet, means that a new generation of convincing misinformation can be created and disseminated widely and instantly, to potentially devastating effect. The high quality of bogus and misleading content that GenAI can produce creates a false perception of truth and fact, posing a significant risk to society’s trust in the media and giving bad actors a powerful tool to influence and steer public debate and opinion to their own ends.
  • Potential for bias: Generative AI models are trained on data collected from the real world, which means that they have the potential to mirror and amplify the biases that exist in society through self-affirming content loops and bubbles.

Content moderation as a key tool to mitigate the risks of GenAI

Where illegal and harmful content is shared by users of online services, it falls to the service providers to prevent its spread. Notice-and-takedown mechanisms and content moderation processes have long been a well-established feature of online platforms rich in third-party engagement, part of the ongoing effort to curb the spread of illegal and harmful content that is impossible to avoid entirely in the user-centric ecosystem of the internet.

Over the last year, GenAI has become central to the content moderation challenge across online services of all shapes and sizes – from social media to online marketplaces, app stores, gaming platforms, dating apps, fitness and other community platforms, booking sites, file sharing and cloud hosting services. The debate over how platforms should be required to tackle not only outright illegal content, but also contributions that are “merely” harmful and violate the operator’s terms of service, such as hate speech and fake news, plays an important part in the digital strategies of the EU and UK.

Digital Services Act – The EU content moderation playbook

The Digital Services Act is the EU’s new central regime regulating how online platforms moderate content, ensure transparency of their advertising, and use algorithmic processes for ranking and recommendations. The DSA introduces a set of staggered obligations for all intermediary services, depending on which category they fall into: from mere conduit to hosting, online platform services, B2C ecommerce platforms, and, at the top of the ladder, very large online platforms (VLOPs) with more than 45 million monthly active users in the EU. For these VLOPs (19 designated so far, with more in the pipeline), the DSA has applied since August 2023. For all other intermediaries, it becomes applicable on 17 February 2024.

Let us take a look at how the DSA applies to GenAI in various use cases, and which corresponding obligations it imposes:

1. How does the DSA capture GenAI?

Despite its horizontal applicability to a wide range of intermediary services, the DSA’s language is by no means a clear-cut fit when applied to GenAI use cases:

  • No application to stand-alone GenAI: According to Art. 3 (g) DSA, the intermediaries covered by the DSA are conduit, caching and hosting services. While users request information from generative AI models through prompts, which could be seen as storing information provided by a user, generative AI models do not follow the traditional model of storing and sharing this input. Instead, the models produce new content derived from user contributions. The DSA fails to adequately address the dissemination resulting from this much more complex process.
  • Limited application to closed groups on messenger services: Closed groups on messenger apps are excluded from a significant share of the DSA obligations, as the content shared in their messages is usually not distributed to the general public. At the same time, the line between social media platforms, messenger groups and channels is blurred, given that the latter also allow for easy spreading of content to a vast number of users – AI-generated and enhanced content included.
  • Application to platforms with integrated GenAI models: While the applicability of the DSA to stand-alone GenAI is questionable, AI solutions can of course be an integrated component of intermediary services which are regulated under the DSA, such as social media, gaming platforms, cloud hosting services, marketplaces, and search engines – where the use of GenAI is currently most visible. The resulting content disseminated to the public can in those cases originate directly from users, or have been modified by an integrated AI tool, either at the user’s own initiative or at that of the platform operator. What does that mean for the resulting liability regime? Does the implementation of AI functionalities that can modify user content result in an active role of the platform, putting its hosting privilege at risk? Again, the DSA fails to address such hybrid forms of use, and we will need to look to case-law and national Digital Services Coordinators for future guidance.
2. Resulting obligations

To the extent the DSA does apply to content generated with the help of AI, its most relevant content moderation obligations are the following:

  • Notice and action: Art. 16 et seq. DSA prescribe detailed processes for reporting potentially illegal content, including obligations regarding communications with the notifier and the content uploader, substantively reasoned decision-making and swift removal of any content confirmed to be illegal.
  • Appeals system: Providers must both offer an internal complaints handling system, Art. 20 DSA, and participate in out-of-court dispute resolution, Art. 21 DSA.
  • Special cases: While notices from certified trusted flaggers are to be handled with priority, Art. 22 DSA, the DSA also requires action against abusive users who either repeatedly upload illegal content or file false infringement reports, Art. 23 DSA.

AI as a content moderation tool and the resulting UK OSA obligations

Use of AI in the context of content creation and moderation is part both of the challenge and of the solution: Online services increasingly rely on AI systems for moderating the vast amounts of third-party content they host. Automated content filters and review algorithms can be employed to identify inappropriate material highly efficiently. GenAI, in particular, can be deployed to scan a service and identify illegal or harmful content (referred to by the OSA as “proactive technology”). The OSA’s definition of said proactive technology (in section 231(10)) makes clear that this is intended to cover “technology which utilises artificial intelligence or machine learning”. The OSA also imposes obligations on service providers to include “clear and accessible provisions” in their terms giving information about any use of proactive technology to comply with the OSA’s duties, as well as requirements to allow users to complain about the way proactive technology has been used by the service in content moderation.
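
To make the idea of “proactive technology” a little more concrete, the following is a minimal sketch of the kind of automated triage loop a moderation pipeline might run. Everything in it is illustrative: the classifier, the thresholds and the names used are hypothetical stand-ins invented for this example, not taken from the OSA, Ofcom guidance or any real service.

```python
# Illustrative sketch of AI-assisted "proactive" content moderation.
# score_content() is a hypothetical stand-in for any trained classifier.

from dataclasses import dataclass


@dataclass
class Decision:
    item_id: str
    action: str   # "remove", "human_review" or "allow"
    score: float  # model's estimated probability the item is violative


def score_content(text: str) -> float:
    """Hypothetical stand-in for an ML classifier.

    A real service would call a trained model here; this toy version
    just counts a couple of keywords so the example runs end to end."""
    flagged = {"scam", "counterfeit"}
    hits = sum(word in text.lower() for word in flagged)
    return min(1.0, 0.45 * hits)


def triage(item_id: str, text: str,
           remove_at: float = 0.9, review_at: float = 0.4) -> Decision:
    """Three-way triage: act automatically only at high confidence,
    route uncertain items to human moderators, allow the rest."""
    score = score_content(text)
    if score >= remove_at:
        action = "remove"        # high confidence: automated removal
    elif score >= review_at:
        action = "human_review"  # uncertain: keep a human in the loop
    else:
        action = "allow"
    return Decision(item_id, action, score)


if __name__ == "__main__":
    for item_id, text in [("a1", "Great scam counterfeit watches here"),
                          ("a2", "Cheap counterfeit goods"),
                          ("a3", "Photos from my holiday")]:
        print(triage(item_id, text))
```

The two-threshold design reflects a point that recurs in both regimes: automated action at scale is paired with human review for borderline cases and with user complaint mechanisms for decisions already taken.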

In its draft codes of practice on the OSA’s illegal content duties, published in November 2023, Ofcom specified that, in order to comply with the code (and thereby demonstrate compliance with the OSA’s core duties to tackle illegal content), certain services would need to take automated content moderation measures, such as the use of “hash matching” technology to proactively detect and remove child sexual abuse material.
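
The core of hash matching can be sketched in a few lines: compute a digest of each upload and compare it against a database of digests of material previously confirmed as illegal. The sketch below is deliberately simplified and uses exact cryptographic hashes with invented placeholder values; production systems such as Microsoft’s PhotoDNA use perceptual hashes so that resized or re-encoded copies still match.

```python
# Simplified sketch of hash matching against a blocklist of known
# illegal material. Real deployments use perceptual hashing (e.g.
# PhotoDNA) rather than exact SHA-256, so near-duplicates also match.

import hashlib

# Hypothetical blocklist: digests of files previously confirmed illegal.
KNOWN_BAD_HASHES = {
    hashlib.sha256(b"known-bad-file-bytes").hexdigest(),
}


def sha256_of(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()


def matches_blocklist(upload: bytes) -> bool:
    """True if the upload's digest appears in the blocklist.

    An exact hash only catches byte-identical copies, which is why
    production systems rely on perceptual hashes instead."""
    return sha256_of(upload) in KNOWN_BAD_HASHES


if __name__ == "__main__":
    print(matches_blocklist(b"known-bad-file-bytes"))   # True: exact copy
    print(matches_blocklist(b"known-bad-file-bytes "))  # False: one byte differs
```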

Therefore, as with the DSA, AI will continue to exacerbate the problem of illegal and harmful content the OSA seeks to address, but will also be part of the solution deployed by online services to meet their new regulatory obligations.

Outlook

GenAI is changing – challenging, facilitating, accelerating, standardising – how we communicate, create, and work in many aspects of our lives. Where user-generated content is concerned, GenAI is part of both the problem and the solution. Going forward, it will be imperative that the positive impact of GenAI on content moderation solutions is perceived as stronger than its contribution to the creation and dissemination of illegal and harmful content. User trust is an essential condition of our online ecosystem, and no online business model can operate sustainably without it.

In that context, content moderation is essential for creating a safe, predictable, and trustworthy online environment. Amidst all the excitement about the EU AI Act, the specific regulation of illegal content and online harms under the DSA and the OSA seeks to factor in automation and the use of AI both as part of content creation and of content moderation – but deficiencies are already apparent in both these brand-new regimes. One thing is certain: Our understanding of the current generation of GenAI systems, and of their potential, is still very much incomplete and growing. Robust yet flexible, and thus future-proof, regulation is crucial at this juncture, where technological progress is constant and fast-moving.

Authored by Anthonia Ghalamkarizadeh, Telha Arshad, Jasper Siems
