I am a counsel in our Technology & Communications Sector Group. I provide pragmatic and solution-driven advice to our clients on all issues around data and information technology law, with a strong focus on and experience with AI and machine learning projects.
On 6 May 2024, the German Data Protection Conference (“DSK”), the collective body of all 17 data protection authorities (“DPAs”) in Germany, published its long-awaited guidance on GenAI and data protection (hereafter “DSK GenAI guidance”, available in German here). These guidelines are the first comprehensive recommendations by German DPAs specifically for generative AI (“GenAI”), following earlier guidelines issued by the Bavarian and Baden-Württemberg state data protection authorities for their respective regions.
Unfortunately, the DSK's GenAI guidance provides limited support to organisations using GenAI in the German market. This assessment is supported by the following considerations:
Scope of the DSK GenAI guidance
The new DSK GenAI guidance focuses on the GDPR-compliant selection, implementation and use of GenAI tools by organisations, thereby excluding provider-related issues concerning the development or training of GenAI. The aim, which is to be welcomed, is to assist organisations in navigating complex AI topics and to facilitate the adoption of AI.
Context of the DSK GenAI guidance
In a broader context, privacy regulators are positioning themselves to maintain their leading role in the ongoing debate on AI regulation in Germany and the EU. Pending the applicability of the AI Act, privacy regulators across the EU are already assuming the role of de facto AI regulators. Several EU DPAs have even asserted their status as national regulators responsible for enforcing the AI Act, which in Germany, for example, has been actively supported by the lobbying efforts of some German DPAs. Moreover, on 7 May 2024, the DSK also published a position paper outlining the national competences required under the AI Act. In this paper, the DSK argues that the German DPAs should be designated as the market surveillance authorities for AI systems in Germany, based on their tasks and expertise (see German source). The publication of AI guidelines appears to be an effective means of demonstrating this expertise and supporting the DPAs’ claims.
Nevertheless, it is worth noting that the DSK's entry into the GenAI discourse is relatively late compared to other influential DPAs such as the French CNIL, the UK ICO (in relation to the UK GDPR), the Spanish AEPD and the Austrian DSB, particularly given the DSK's significant international influence. These DPAs have already issued AI-related guidance, some of which has been available for several months.
Overall impression of the DSK GenAI guidance
The content of the DSK GenAI guidance amounts to a high-level summary that is largely self-explanatory for anyone with a basic understanding of the GDPR (e.g. with regard to the need for a legal basis, the requirements for automated decision-making or transparency).
It is therefore disappointing that the DSK GenAI guidance provides little practical guidance, particularly on complex issues and potential inconsistencies arising from the application of the GDPR to AI systems (e.g. whether AI models themselves should be classified as personal data, or the exercise of data subjects' rights; details below).
Little practical guidance for organisations in the German market
While the DSK GenAI guidance touches on important and complicated issues that require recommendations to overcome genuine barriers, it deals with these issues only in a cursory manner. Regrettably, it falls short of providing practical guidance on how to effectively align the use of GenAI with the relevant requirements of the GDPR.
This includes the following:
The question of whether an AI model itself must be classified as personal data, even if no personal data is processed in the specific use case, is controversial. The DSK suggests such a categorisation in sec. 1.3, which would render the GDPR applicable to any use of the AI model. Apart from the fact that this assessment places a significant burden on the controller, the guidance does not specify how exactly it should be carried out, let alone how such a categorisation can be avoided in order to reliably exclude the applicability of the GDPR in use cases that do not rely on personal data.
The DSK asserts a preference for closed AI platforms over open ones, citing concerns about increased risk exposure associated with the latter's openness (sec. 1.7). However, it is difficult to understand the underlying rationale, given that potential risks are inherently dependent on specific implementations and the internal governance mechanisms employed, such as policies. The nature of the platform, whether open or closed, does not appear to be a determining factor in this context.
With regard to the rights of data subjects, the DSK places particular emphasis on the rights of rectification and deletion. The DSK GenAI guidance suggests that rectification can be achieved through follow-up training and fine-tuning, while deletion can be achieved through data filtering (sec. 1.11). However, it is clear that the practical implementation of these measures by the provider is unlikely to be feasible given the cost and effort involved. Consequently, this approach is of limited assistance to controllers. In contrast, the Austrian DSB has taken a more reasonable approach, emphasising transparency as a means of informing data subjects of the potential for LLMs to generate inaccurate personal data (known as ‘hallucination’), which may impact data subjects’ rights.
The highly relevant question of joint control is touched on only at a high level, and the guidance fails to shed light on the relevant questions. In practice, an important issue remains unresolved: whether a customer's consent to the reuse of data in the training of an AI system qualifies the customer as a joint controller (sec. 2.1). This issue is of considerable importance, but the DSK GenAI guidance does not provide definitive clarification.
According to the DSK GenAI guidance, a data protection impact assessment (“DPIA”) will "often" be required (sec. 2.3). However, organisations need precise clarity on the circumstances in which a DPIA is required, as not all AI systems pose high risks under the GDPR. Unfortunately, this remains unclear. The DSK GenAI guidance only refers to the blacklists issued by German state DPAs, which outline the processing activities that require a DPIA. Regrettably, these blacklists provide no further guidance on when AI systems are considered high risk, leaving organisations uncertain as to how to accurately assess the need for a DPIA. While it may be prudent to conduct a DPIA on a voluntary basis for all AI systems, the question arises as to which deviations from the rigorous DPIA process outlined in the GDPR are then permissible. Clarity on this point would give controllers greater flexibility in designing a risk assessment framework tailored to AI systems that are not high-risk.
The application of AI in healthcare, in particular for the diagnosis and treatment of patients, has significant practical relevance. According to the DSK GenAI guidance, such use can be justified under the healthcare services and treatment contract exemption (Art. 9(2)(h) GDPR), provided that (i) AI use aligns with the current state of the art and (ii) the AI system is recognised as a medical device (sec. 3.2). Point (i) is particularly advantageous, as the use of AI systems for diagnostic or therapeutic purposes has not previously been recognised by other German data protection authorities (e.g. the data protection authority of Baden-Württemberg) as falling within the conventional scope of the state of the art in medicine. Point (ii), however, introduces a requirement that imposes an additional burden and does not fit seamlessly into the established structure of the GDPR. Unfortunately, the underlying rationale for this requirement remains unexplained in the DSK GenAI guidance.
Finally, the DSK GenAI guidance reiterates that it is paramount to avoid bias and discrimination, as such actions would run counter to the legitimate interest exemption in Article 6(1)(f) of the GDPR (sec. 3.4). However, the rationale behind the DSK's preference for the legitimate interest exemption in addressing AI discrimination, as opposed to other principles such as fairness (which is the ICO's position), remains unclear. As a result, the practical application of this approach becomes challenging as the conceptual framework remains opaque and difficult to replicate in the broader context of different GenAI systems.
Conclusion and outlook
It is desirable that future iterations of the DSK GenAI guidance provide more comprehensive and practical insights, rather than serving as a mere summary of high-level and generally uncontroversial topics with limited coverage of key issues. In the meantime, organisations operating in the German market are forced to navigate the complex AI landscape on their own. This state of affairs is unfortunate given the undeniable importance of AI governance, particularly in the context of the GDPR.
Nevertheless, organisations should be reminded of the risk-based approach advocated by the GDPR. This approach allows for a degree of discretion in the implementation of AI systems, provided that it is exercised prudently.