AI Meets GDPR: Building Bridges Between Innovation and Compliance

In December 2024, the European Data Protection Board (EDPB) published Opinion 28/2024, offering critical insights into the bridge between personal data and artificial intelligence (AI) models. This was in response to a request by the Data Protection Commission under the EU General Data Protection Regulation (GDPR) consistency mechanism, aiming for harmonised regulatory practices across the EU.

The Opinion addresses three key areas for AI model development and deployment:

  1. Anonymity in AI Models: Determining anonymity requires a nuanced, case-by-case approach. Two main conditions are emphasised:
    • Personal data from training datasets must not be extractable from the model.
    • Outputs from the model must not identify or relate to the data subjects whose personal data was used during training.
  Relying on the 2014 guidance on anonymisation techniques, the EDPB underscores the importance of evaluating identification risks and ensuring that individuals cannot be identified by “reasonable means”.
  2. Legitimate Interest as a Legal Basis: The Opinion reiterates GDPR principles such as accountability, transparency, and data minimisation. When relying on “legitimate interest”, the three-step test (lawfulness, necessity, and balancing of rights) must be satisfied. Specifically:
    • The interest pursued must be lawful, clearly articulated, and real and present rather than speculative.
    • Necessity entails minimising the volume and categories of data processed.
    • The balancing test ensures data subjects’ rights are not disproportionately impacted.
  3. Impact of Unlawful Data Processing: When personal data is unlawfully processed during AI development, the consequences extend to the model’s deployment. The scenarios highlighted include:
    • The same controller managing both development and deployment.
    • A different controller at the deployment stage, requiring robust assessments.
    • Anonymisation of the model, which precludes GDPR applicability provided personal data is no longer processed.

How do I protect my business?

As with most regulatory matters, protection begins with examining the processes and procedures that were in place at the time of an alleged breach.

Anonymity Assessment Protocols: Establish strict evaluation processes to verify whether AI models meet anonymity requirements. Implement tools and processes to ensure personal data cannot be extracted from training datasets and outputs are free from identifiable information.

Legitimate Interest Assessment Framework: Develop a framework based on the GDPR’s three-step test. Ensure processing is lawful, necessary, and rights-protective by minimising data categories and volumes.

Unlawful Data Processing Checks: Implement rigorous case-by-case analysis to prevent unlawfully processed personal data from being used in AI model development (e.g. by pseudonymising or masking the data before it enters a training dataset).

Documentation and Training: Maintain records and provide staff training on GDPR compliance, emphasising risk evaluation.

For further information contact: rdevonport@reddycharlton.ie