Meta Faces Legal Blitz in EU Over AI Training Plans Without User Consent
Legal Showdown in the European Union
Meta is facing a major legal showdown in the European Union after privacy advocacy group noyb issued a formal cease and desist letter, warning of imminent litigation over the company’s controversial plan to use Facebook and Instagram user data for AI training starting May 27.
The group, led by privacy activist Max Schrems, argues that Meta's reliance on “legitimate interest” as a legal basis sidesteps fundamental GDPR requirements, opening the door to collective redress actions that could cost the tech giant billions.

The company’s plan, first reported last month, involves collecting public posts, comments, and interactions with Meta AI chat features from EU-based users to train generative AI models. While Meta has promised not to use private messages or data from users under 18, its opt-out mechanism and reliance on implicit consent have drawn criticism from privacy experts and digital rights advocates.
Implications of the Cease and Desist Letter
The letter marks the first formal step toward what could become one of the largest pan-European data protection cases under the EU’s new Collective Redress Directive. This legislation empowers qualified entities like noyb to seek injunctions and damages across EU jurisdictions on behalf of consumers.

Schrems’ organization is demanding that Meta immediately halt the ingestion of EU user data into its AI training pipelines, warning that failure to comply by May 21 will trigger legal action.
Challenges to Meta's Interpretation of GDPR
At the heart of the dispute is Meta’s decision to bypass opt-in consent, instead invoking Article 6(1)(f) GDPR, the “legitimate interest” provision, to justify harvesting public posts, comments, and AI interactions from users across the EU. Meta has excluded data from minors and private messaging content, but intends to include all other public content generated by adult users.
Notifications offering an opt-out are being sent to users, but privacy groups argue that this flips the GDPR’s consent model on its head. noyb’s letter lays out 11 legal arguments challenging Meta’s interpretation of the GDPR.
Concerns Raised by Privacy Advocates
noyb argues that once user data is ingested into open-source AI models like Meta’s LLaMA, it becomes impossible to honor GDPR rights such as erasure, rectification, and access. This, the group contends, makes any reliance on legitimate interest legally untenable.
Compliance with EU Regulations
Complicating matters further, noyb raises concerns about compliance with the EU’s Digital Markets Act (DMA), which bars gatekeepers like Meta from cross-using personal data across services without consent. Meta’s intent to train a general-purpose AI system using data pooled from Facebook and Instagram, according to noyb, may violate these provisions as well.

Implications and Next Steps
Because AI models cannot easily be rolled back once trained, and especially not once released as open source, the legal and financial exposure is enormous. Schrems estimates that non-material damages alone could total €200 billion if every one of Meta’s 400 million EU users claimed just €500.
National consumer groups, such as Germany’s VZ NRW, have already begun pursuing legal action independently, signaling a growing EU-wide backlash. Data Protection Authorities (DPAs), however, appear largely passive, with some merely informing users of their right to opt out rather than enforcing stricter measures against Meta’s practices.
EU users should immediately review Meta’s opt-out notification and submit objections if they do not wish their data to be used for AI training. For those without active accounts but with prior data on the platforms, options are more limited, but noyb has published guidance on how such users can assert their rights.