On December 17, 2024, the EDPB published a statement addressing key data protection issues in the development and use of AI models. The statement is aimed at practitioners and companies that use AI technologies and provides important clarifications and practical guidance for GDPR compliance.
Anonymity of AI models: Stricter requirements
A key point of the statement is the clarification of when an AI model is considered anonymous. The EDPB emphasizes that models that have been trained with personal data cannot automatically be considered anonymous. For such a classification, the probability that personal data can be extracted from the model – whether intentionally or unintentionally – must be negligible. Supervisory authorities should examine this on a case-by-case basis. The EDPB recommends measures such as minimizing the data collected, limiting identifiability and using state-of-the-art protection technologies to ensure anonymity.
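The recommended measures – minimizing collected data and limiting identifiability – can be illustrated with a small sketch. The field names and the salted-hash approach below are my own illustrative assumptions, not prescriptions from the statement; note in particular that salted hashing is pseudonymization, which by itself does not make a model anonymous in the EDPB's sense.

```python
import hashlib

# Hypothetical field names; an illustrative sketch only, not a recipe
# for achieving anonymity (salted hashing is pseudonymisation).
DIRECT_IDENTIFIERS = {"name", "email", "phone"}
QUASI_IDENTIFIERS = {"customer_id"}

def minimise_record(record: dict, salt: str) -> dict:
    """Drop direct identifiers and pseudonymise quasi-identifiers
    before a record enters a training corpus."""
    out = {}
    for key, value in record.items():
        if key in DIRECT_IDENTIFIERS:
            continue  # not needed for training: drop entirely
        elif key in QUASI_IDENTIFIERS:
            digest = hashlib.sha256((salt + str(value)).encode()).hexdigest()
            out[key] = digest[:16]  # stable pseudonym; unlinkable without the salt
        else:
            out[key] = value
    return out

record = {"name": "Jane Doe", "email": "jane@example.com",
          "customer_id": 4711, "purchase_total": 129.90}
cleaned = minimise_record(record, salt="per-project-secret")
print(cleaned)
```

Whether the residual re-identification risk is in fact "negligible" remains a case-by-case assessment, as the statement stresses.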
Legitimate interest as a legal basis: Clarification of the three-step test
The opinion provides detailed guidance on relying on "legitimate interest" as a legal basis for processing personal data in the development and use of AI. It walks through the three-step test: first, the interest pursued must be legitimate – that is, lawful, clearly and precisely articulated, and real rather than speculative; second, the processing must be necessary to achieve that interest; and third, the controller's interests must be carefully balanced against the rights and freedoms of the data subjects.
The necessity step is closely tied to the principle of data minimization: only the data that is actually required for the purpose may be processed.
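Data minimization can be made concrete as a pipeline step. The purpose and field names below are assumptions for illustration: the pipeline declares the fields it actually needs, and everything else is discarded before training.

```python
# Assumed purpose: demand forecasting. Only these fields are declared
# necessary; all other attributes are discarded up front.
REQUIRED_FIELDS = {"age_band", "region", "purchase_total"}

def minimise(records: list[dict]) -> list[dict]:
    """Keep only the fields strictly necessary for the stated purpose."""
    return [{k: v for k, v in r.items() if k in REQUIRED_FIELDS}
            for r in records]

raw = [{"name": "A. N. Other", "age_band": "30-39", "region": "EU",
        "purchase_total": 42.0, "device_id": "abc-123"}]
print(minimise(raw))  # name and device_id never reach the model
```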
Greater focus on the expectations of data subjects
Another key point of the statement is how the expectations of data subjects are taken into account. They must not only be informed transparently, but also be able to understand the processing and its potential consequences – a particular challenge with complex AI technologies.
Relevant factors include the origin of the data, the relationship between the data subject and the controller, and whether the data is publicly accessible. The EDPB emphasizes that the processing must not run counter to the reasonable expectations of the data subjects.
Consequences of unlawful data processing in the development phase
For data that was unlawfully processed during the development phase of an AI model, the opinion distinguishes three scenarios:
- If the personal data is still contained in the model, subsequent processing based on it may also be unlawful.
- If the model is passed on to third parties, they must verify whether the original processing was lawful.
- If the model has been genuinely anonymized, the GDPR ceases to apply to it. However, any subsequent processing that again involves personal data must be assessed on its own merits.
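The three scenarios above can be mirrored in a toy checklist. The function and its outcome labels are my own framing for illustration only – this is not legal advice and no substitute for a case-by-case legal assessment.

```python
def assess_model(personal_data_in_model: bool,
                 original_processing_lawful: bool,
                 model_anonymised: bool) -> str:
    """Toy checklist mirroring the three EDPB scenarios for data that was
    unlawfully processed during development. Illustrative only."""
    if model_anonymised:
        # GDPR no longer applies to the model itself, but any new
        # processing of personal data must be assessed separately.
        return "out of GDPR scope; assess later personal-data processing separately"
    if personal_data_in_model and not original_processing_lawful:
        return "subsequent processing may be unlawful; legal review required"
    # A recipient of the model must check the original processing.
    return "verify lawfulness of the original processing before further use"

print(assess_model(personal_data_in_model=True,
                   original_processing_lawful=False,
                   model_anonymised=False))
```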
Risk mitigation measures
The EDPB statement gives concrete examples of protective measures that can mitigate negative effects on data subjects: limiting processing to what is strictly necessary, providing transparent information about how the data is used, and applying state-of-the-art security measures. The EDPB stresses that such measures must be tailored to the specific context of the AI model.
Conclusion
The opinion provides valuable insights and clarifies how the GDPR applies to AI. Particularly noteworthy are the strict requirements for the anonymity of models, the detailed treatment of legitimate interest as a legal basis, and the stronger emphasis on the perspective of data subjects.