AI deployment: German DPAs issue guidance on data protection compliance

Background

The question of how AI applications can be developed and used in a manner that complies with applicable data protection law requirements, in particular under the GDPR, has been on the European DPAs’ agenda for some time now, resulting in various guidance documents, publications and enforcement actions across the EU. This development accelerated further with the emergence of Large Language Models (“LLMs”) in recent years, which have become a focus of the DPAs’ activity with regard to AI and data protection.

In line with this trend, the German DPAs have published a number of guidance documents and position papers, beginning in 2019 with the so-called “Hambach Declaration on Artificial Intelligence”, published by the joint body of the German DPAs (Datenschutzkonferenz, “DSK”), and the Guidance on technical and organisational measures for the development and operation of AI systems. More recently, several DPAs of the German federal states have individually published their own guidance, both on AI in general and on specific AI-related topics (e.g. the Flyer and Checklist on GDPR-compliant Artificial Intelligence of the Bavarian DPA, the checklist for the use of LLM-based chatbots of the Hamburg DPA, or the paper on legal bases in Data Protection for the use of Artificial Intelligence of the DPA of Baden-Wuerttemberg). However, there were no uniform guidelines shared by all German DPAs, leaving companies operating in several German federal states exposed to local discrepancies and legal uncertainty.

New DSK Guidance

Following this patchwork of regulatory activity, the German DPAs have now published a joint guidance on AI and data protection (version 1.0, dated 6 May 2024). The guidance puts particular focus on how companies can use LLMs in a data protection-compliant manner and is therefore primarily addressed to deployers of such AI applications. The German DPAs stress, however, that the guidance also contains valuable insights with regard to other types of AI applications, as well as for AI developers, who will need to keep in mind which data protection controls their customers will require.

Summary of Key Findings

In essence, the guidance focuses on three stages of AI utilization: the selection/planning stage, the implementation stage, and the production stage in which AI applications are actively used in a company. The following provides a summary of the key requirements that the German DPAs defined for each of these phases:

Stage 1 (Planning Phase): Selecting AI Applications

  • Define use cases and purposes: Controllers should expressly define the use cases and purposes of an AI application. Certain use cases may be ruled out from the outset (e.g. the use of AI applications for social scoring, which is prohibited under the AI Act).
  • Minimize use of/refrain from using personal data, where possible: Organizations should assess to what extent personal data is processed within an AI application and whether the AI application can be used without processing any personal data. This assessment must be carried out for every part of the lifecycle of the data and the AI application.
  • Take into account AI training: When selecting AI applications, controllers should consider whether such applications were trained in compliance with data protection laws, and are responsible for ensuring that errors in the AI training do not affect the data processing under their legal responsibility.
  • Ensure a legal basis for the processing: Controllers must ensure that the processing related to the use of AI applications is covered by a legal basis. For this assessment, the DSK refers, as an example of further guidance, to the paper on legal bases in Data Protection for the use of Artificial Intelligence of the DPA of Baden-Wuerttemberg.
  • Avoid fully automated decision-making: Controllers should ensure that decisions made with the help of AI applications are not based solely on automated processing (as prohibited under Art. 22 (1) GDPR, which the DSK interprets strictly). Instead, effective human control and oversight must be ensured.
  • Prefer closed AI systems over open AI systems: Closed AI systems should be chosen over open AI systems, if available, as they can only be accessed by a limited number of users and allow for stricter control over input and output data by users.
  • Ensure transparency: Controllers are obligated to provide data subjects with sufficient information under Art. 12-14 GDPR, which may require requesting applicable documentation from AI developers and processors. In particular, data subjects must be informed about any automated decision-making and profiling taking place, as well as the logic involved, the method of processing, and its effects for the data subjects in question.
  • Check transparency and opt-out options for AI training: Controllers must assess whether input and output data are used for AI training and whether an opt-out choice exists for such data use. Where the processing of input and output data for AI training is not excluded, a separate legal basis for that use is required (from a data protection perspective, the authorities consider AI applications that do not use input and output data for training purposes to be preferable).
  • Check transparency and opt-out options for input history: Controllers should assess whether users are transparently informed about the storage of text input (prompt) history and should ensure corresponding opt-out options.
  • Ensure compliance with data subject rights: Controllers must ensure that data subjects can exercise all rights granted to them under the GDPR, including the right to data rectification (Art. 16 GDPR) and data deletion (Art. 17 GDPR). For LLM tools, compliance with the right to data rectification could be ensured by correcting the data and/or fine-tuning the AI model. The authorities further highlight that filtering technologies, while strictly speaking not generally providing for a deletion of data, can make a valuable contribution to avoiding certain unwanted output and thereby serve the rights and freedoms of data subjects (see the illustrative sketch after this list).
  • Involve DPO and works council: Controllers should involve data protection officers and any employee representatives, such as works councils.
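
The guidance does not prescribe any particular filtering technique. Purely as an illustration, the following is a minimal sketch of such an output filter, assuming a simple regex-based redaction step applied to LLM output before it reaches the user; the block list, patterns and function name are hypothetical, and a production system would rely on a dedicated PII detection service fed by a managed rights-request workflow.

```python
import re

# Hypothetical block list of data subjects who have exercised their GDPR
# rights; in practice this would come from a managed rights-request workflow.
BLOCKED_NAMES = {"Erika Mustermann"}

# Simple patterns for common personal data in free text (illustrative only).
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE_RE = re.compile(r"\+?\d[\d /()-]{7,}\d")

def filter_output(text: str) -> str:
    """Redact known personal data from LLM output before returning it."""
    text = EMAIL_RE.sub("[email redacted]", text)
    text = PHONE_RE.sub("[phone redacted]", text)
    for name in BLOCKED_NAMES:
        text = text.replace(name, "[name redacted]")
    return text

raw = "Contact Erika Mustermann at erika@example.com or +49 170 1234567."
print(filter_output(raw))
# Contact [name redacted] at [email redacted] or [phone redacted].
```
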
Stage 2 (Implementation Phase): Implementing AI Applications

  • Determine and bindingly stipulate responsibilities: Controller and processor roles must be assessed in line with GDPR requirements. Where a company uses the AI application of a third party, the third-party provider usually acts as a processor for the deploying company. Multiple organisations may qualify as joint controllers if an AI application is trained on the basis of different datasets or developed jointly by different entities. In such cases, the responsibilities must be stipulated in accordance with Art. 26 GDPR.
  • Specify and implement internal AI policies: Companies should issue clear instructions and implement AI policies specifying the permitted use of AI applications. In some cases, works council agreements or other agreements may be beneficial or even required by law.
  • Perform a DPIA: Before using an AI application, controllers should carry out a risk analysis, and perform a DPIA in case of a high risk for data subjects under Art. 35 GDPR (which the authorities generally expect to be the case).
  • Protect employees: Employers should provide employees with corporate devices and company accounts prior to the professional use of AI applications.
  • Provide employee training: Employees should be made aware, through training, guidelines and discussions, of whether and how they may and should use AI applications.
  • Implement TOMs in line with privacy by design and by default: Controllers must implement technical and organisational measures to ensure compliance with the principles of privacy by design and by default, e.g. by activating the opt-out from data use for AI training by default, or by disabling the publication of output data (see the illustrative sketch after this list).
  • Ensure data security: Controllers must ensure the confidentiality, integrity, availability and resilience of all data processing systems. The DSK specifically refers to further information provided by the German Federal Office for Information Security (BSI).
  • Monitor ongoing developments: Controllers need to monitor technical and legal developments in the AI field, including whether they need to comply with additional requirements under the AI Act (for an impact analysis of the requirements under the AI Act, see our recent publications (Part 1) and (Part 2)).
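
The DSK does not tie privacy by design and by default to any specific implementation. As a purely illustrative sketch, privacy-friendly defaults could be expressed in a deployment configuration along the following lines; the class and field names are hypothetical and not taken from any real provider’s API.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AiDeploymentSettings:
    """Illustrative privacy-by-default configuration for an LLM deployment.

    Every data-sharing option defaults to the most protective value, in the
    spirit of Art. 25 (2) GDPR; enabling one requires an explicit decision.
    """
    allow_training_on_inputs: bool = False  # opt-out from AI training by default
    store_prompt_history: bool = False      # no input history unless enabled
    publish_outputs: bool = False           # output publication disabled by default
    retention_days: int = 0                 # no retention unless explicitly set

# Deployers must consciously deviate from the protective defaults:
settings = AiDeploymentSettings()
assert settings.allow_training_on_inputs is False
```
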
Stage 3 (Production Phase): Using AI Applications

  • Ensure lawful use of data input and output: Where input data includes personal data, the data subjects affected must be informed about this use of their personal data, and any use of input and output data that includes personal data must be covered by a legal basis. Controllers should also consider that personal data may be inferred indirectly from input and output data, which would likewise trigger the applicability of the GDPR.
  • Limit use of sensitive data: Controllers must observe the restrictions on the processing of special categories of personal data under Art. 9 GDPR, and assess whether an exception permitting the processing of such data under Art. 9 (2) GDPR applies.
  • Check accuracy of output: Any results generated by AI applications must be critically checked for accuracy, as many LLM providers themselves clarify that generated outputs may be incorrect. This applies in particular where controllers want to transmit the generated output or otherwise process it for other purposes.
  • Check output and procedures for discrimination: Generated output should also be reviewed as to whether it may lead to unlawful processing, especially through discriminatory effects. For instance, data processing may be unlawful if it is intended to violate the German General Equal Treatment Act (AGG).

Takeaways

Although the guidance of the German DPAs does not contain surprising new findings, it provides useful insight into the expectations and current positions of the German DPAs with respect to the use of AI tools by companies and other organisations. The position of the German DPAs is essentially in line with the views expressed by other European DPAs, as reflected, for example, in the French CNIL’s “AI how-to sheets”. Companies should therefore consider this recent guidance, as one factor within their general AI strategy and governance program, when deploying AI systems within their organisations. However, several of the points addressed in the guidance remain subject to an ongoing, lively legal discussion and may allow for diverging positions. As the German DPAs themselves indicate, the guidance is intended to be updated in the future to account for new developments in the field of AI.

For further information, reach out to our global team, who will be happy to help you navigate regulatory challenges when developing and deploying AI systems. For the steps that companies need to take to build an appropriate AI governance program, in particular in view of the future AI Act, see our latest publication here.

 

Authored by Henrik Hanssen, Anna Vogel, and Martin Pflüger.
