For those of you who’d like to digest this in podcast form, I’ve experimented with a NotebookLM version (usual caveats apply about accuracy for an AI-driven version of the blog).
As we see out the final days of 2024, I’ve pulled together a comprehensive overview of global developments related to legitimate interests (LI). 2024 has seen this lawful basis for processing personal data attract an unprecedented amount of debate, particularly in the context of AI. Next year, we can expect a high level of scrutiny from data protection authorities of how organisations demonstrate that their reliance on it is valid.
Organisations will need to review their policy and process for legitimate interest impact assessments (LIAs). The three-part test for LI (purpose-necessity-balancing) will remain critical to demonstrating it can be a valid lawful basis under data protection law; this includes how benefits, risks and mitigations are plausibly explained and evidenced.
How are your balancing skills shaping up for 2025?
A 2021 report from the think tank the Centre for Information Policy Leadership (CIPL), ‘How the “Legitimate Interests” Ground for Processing Enables Responsible Data Use and Innovation’, also addressed the growing importance of the ground. This included case studies on how organisations rely on the legitimate interests legal basis for both (i) routine data processing activities, and (ii) more complex, unique, or new data processing activities that are key for innovation.
EU approach to LI
In the EU we’ve had a CJEU judgment on whether a commercial interest can be a LI, new guidance from the European Data Protection Board, guidance from EU DPAs on AI models and LI, a 310M euro fine for LinkedIn for unlawful reliance on LI in the context of behavioural advertising, and a pause by the social media platforms X and Meta on using social media data for AI training. We’ve also had a range of decisions from EU DPAs in GDPR cases involving LI and AI, including in the context of banking and news media.
CJEU opines on whether a commercial interest can be a legitimate interest
In October 2024 the CJEU delivered its judgment in Koninklijke Nederlandse Lawn Tennisbond v Autoriteit Persoonsgegevens [C-621/22] on legitimate interests under Article 6(1)(f) of the GDPR.
The judgment made clear that the Dutch DPA (Autoriteit Persoonsgegevens) was wrong to hold that legitimate interests are only interests that are enshrined in and determined by law. According to the Dutch DPA, the interests in question had to be regarded as worthy of protection by the EU legislature or by the national legislature, and assessed according to a ‘positive criterion’, thereby ruling out commercial interests.
The CJEU reaffirmed previous case law that a wide range of interests are, in principle, capable of being regarded as legitimate. The CJEU then found that “a commercial interest of the controller … could constitute a legitimate interest, within the meaning of point (f) of the first subparagraph of Article 6(1) of the GDPR, provided that it is not contrary to the law.”
The case also reaffirmed the now well-established three-part test for legitimate interests: “first, the pursuit of a legitimate interest by the data controller or by a third party; second, the need to process personal data for the purposes of the legitimate interests pursued; and, third, that the interests or fundamental freedoms and rights of the person concerned by the data protection do not take precedence over the legitimate interest of the controller or of a third party”.
The case has now been remitted to the referring court in the Netherlands and the substantive elements of the full test for LI will need to be considered there. The CJEU noted that the sharing of personal data between the Lawn Tennis Federation and providers of games of chance and casino games would need to be considered against the reasonable expectations of the data subjects and any risks of harm that could be caused.
The outcome in this case was not unexpected, as the Dutch DPA had been out on a limb compared to other EU DPAs, and in 2020 the European Commission wrote to the Dutch DPA setting out its disagreement with the guidelines containing the position on LI. However, the case had added to uncertainty about how to approach legitimate interests, and the CJEU’s position provides welcome clarification.
EDPB puts out LI guidelines for consultation
This year has also seen the EDPB put out guidelines for consultation on LI (the consultation closed on 20 November). While the guidelines did not contain any major surprises and closely followed the case law of the CJEU and the three-part test, there are a number of important aspects to note.
There has been some criticism of the stance taken by EU DPAs on lawful basis: an implied preference for consent over LI, which some DPAs may see as a weaker basis that is easily exploited by controllers. The EDPB opens with the following statement:
Article 6(1)(f) GDPR should neither be treated as a “last resort” for rare or unexpected situations where other legal bases are deemed not to apply nor should it be automatically chosen or its use unduly extended on the basis of a perception that Article 6(1)(f) GDPR is less constraining than other legal bases.
I’ve pulled out the following key points from the guidance:
A legitimate interest must be real and present, not speculative: it must be present and effective at the date of the data processing and must not be hypothetical at that date.
The guidance discourages controllers from relying on wider community or public interests, making clear the EDPB’s view that: “the interests of the wider community are mainly subject to the justifications provided for in Article 6(1)(e) or (c), if controllers are tasked or required by law to preserve or pursue such interests”.
The guidelines reiterate the test of “strict necessity” introduced by the CJEU in several cases, including Meta v Bundeskartellamt (Case C-252/21), thus creating a higher bar to be met (even though the term does not appear on the face of the GDPR and is used only by reference to an example in recital 47).
While not new, the guidelines cement the pivotal importance of considering the reasonable expectations of the data subject as part of the balancing test. Reasonable expectations do not necessarily depend on the information provided to data subjects, and the wider context of the relationship between the data subject and the controller must be considered.
Under the balancing test, mitigating measures cannot consist of measures already required to ensure compliance with the GDPR, e.g. security. Introducing additional safeguards above and beyond those required under the GDPR may be seen as a mitigating measure, e.g. offering a right to erasure even where it would not legally apply. This is likely to create a challenge for some controllers when developing their LIA.
The EDPB has consistently pushed back against the use of LI in the context of behavioural advertising on social media platforms (see the 2020 guidelines on targeting of social media users). In Meta v Bundeskartellamt (Case C-252/21) the CJEU found that Meta could not rely on Article 6(1)(b) of the GDPR (‘necessary for the performance of a contract’) and also found that the interests and fundamental rights of users override the interest of Meta in personalised advertising. The guidelines reflect this approach.
The EDPB guidelines were largely silent on the issues of AI, presumably due to the separate Opinion on GDPR and AI Models, which was published on 19 December. The Opinion makes clear that LI can be a valid basis if the criteria for its application are met. The approach set out indicates the importance of a rigorous LIA and that the bar is set high for reliance on LI. The Opinion reflected the key approach from the previously published LI guidelines and made these additional points about LI and AI:
Purpose test under LI - even where the purposes may not be fully clear at the development stage of an AI model, there will be a need to provide a level of detail about the type of AI model, expected functionalities and context of deployment.
The following may constitute a legitimate interest in the context of AI models:
(i) developing the service of a conversational agent to assist users;
(ii) developing an AI system to detect fraudulent content or behaviour; and
(iii) improving threat detection in an information system.
DPAs should pay particular attention to the amount of personal data processed and whether it is proportionate to pursue the legitimate interest at stake, also in light of the data minimisation principle. The Opinion also recognises the role that safeguards can play in the necessity test.
There will be different considerations depending on whether there are first- or third-party uses of personal data, and this will shape the necessity considerations.
The Opinion provides extensive analysis of possible risks to the rights and freedoms of data subjects and the impacts and harms, which will have to be considered in the circumstances of each LI balancing test. There is less detail in the Opinion about how to consider and evidence benefits, and the role of GDPR Recital 4 and the full range of Charter rights, including the right to run a business.
The Opinion provides a useful overview of mitigation measures that can be relevant to the balancing test, both for data collection and for the outputs of an AI model.
EU DPAs - precedent on AI and LI emerges from guidance, regulatory decisions and enforcement
The French DPA, the CNIL, recognised in its draft ‘How-To Sheet’ on AI that LI may be the most appropriate legal basis to develop an AI system: “it is often difficult to obtain the consent of individuals at a large scale or when personal data are collected indirectly.” This guidance also usefully sets out the CNIL’s thinking on how controllers should demonstrate their reliance on LI, including examples of benefits and impacts, and how measures should be considered to mitigate risks. There is also a further sheet on Open Source Models.
The Italian DPA, the Garante, has sent an important message about how it sees the risks of relying on LI as a lawful basis when licensing news content for generative AI model development and output enhancement. In November the Garante issued a GDPR warning under Art. 58(2)(a) to a leading news media company, Gedi, related to its news content licensing deal with OpenAI. The warning made clear that the planned processing was likely to infringe the GDPR and that enforcement action could follow, setting out concerns related to reliance on LI and associated transparency issues.
In Belgium, the Data Protection Authority released a decision related to the use of banking transaction data to train an AI model that would provide personalised discount recommendations. The DPA found that LI was a valid lawful basis for the processing activity: the bank had a legitimate interest in using personal data to build such a model, and the processing was necessary to offer personalised discounts. The balancing part of the test was also satisfied, as the DPA found that the processing would be within the reasonable expectations of customers, identifiers were removed, no data sharing took place, no special category data was involved, and there was no clear impact on people. Customers were also able to exercise their right to object, and the complainant had the ability to refuse consent to receive the personalised information. The decision has not been without its critics (see here from Ian Brown), but I believe it does demonstrate a balanced and effective application of the GDPR to AI model development, allowing both innovation and the exercise of data subject rights.
In Ireland the Data Protection Commission fined LinkedIn 310 million euro for a GDPR breach related to the use of personal data for behavioural analysis and targeted advertising of users who had created LinkedIn profiles. The central finding was that LinkedIn did not have a valid GDPR lawful basis and failed to satisfy legitimate interests in relation to first and third party uses of personal data. Speaking at the IAPP Conference, Commissioner Dale Sunderland explained that LinkedIn’s LI assessment had failed on the balancing part of the test, given the reasonable expectations of users and the level of intrusion from the profiling (we still await publication of the full decision).
UK approach to legitimate interests
As time rolls on post-Brexit, we are also starting to see some differences in approach to LI between the EU and the UK. While these differences are emerging, there is a strong message from both sides on the importance of a rigorous assessment of LI and meeting all three elements of the three-part test.
Comparing the UK Information Commissioner’s Office (ICO) LI guidance with the EDPB approach reveals some significant areas of difference:
On public interest as a LI, the ICO takes a more expansive view than the EDPB: “The legitimate interests of the public in general may also play a part when deciding whether the legitimate interests in the processing override the individual’s interests and rights. If the processing has a wider public interest for society at large, then this may add weight to your interests when balancing these against those of the individual”.
The EDPB (and CJEU) approach to necessity is also different from the ICO guidance. The EU approach stresses a test of strict necessity, drawn from the GDPR recitals. The ICO guidance uses this test: "This doesn’t mean that it has to be absolutely essential, but it must be a targeted and proportionate way of achieving your purpose." Previously, under the old 1998 DPA (and the 95/46 EU Directive), the UK Supreme Court in South Lanarkshire Council (Appellant) v The Scottish Information Commissioner (Respondent) [2013] UKSC 55 had found ‘necessary’ meant ‘reasonably’ rather than ‘absolutely or strictly’ necessary, in a Freedom of Information case where the personal data exemption was under consideration.
The ICO issued a consultation on lawful basis and web scraping for generative AI model development earlier in 2024. The ICO responded to the consultation evidence in December and made clear that the only viable lawful basis for using web scraped personal data in this context would be LI. The response reiterated the importance of the following:
Transparency: Web scraping for generative AI training is a high-risk, invisible processing activity. Where insufficient transparency measures contribute to people being unable to exercise their rights, generative AI developers are likely to struggle to pass the balancing test.
Necessity test: Controllers must evidence why other available methods of data collection are not suitable for generative AI model development, such as direct data collection from people or licensing (the ICO’s focus on licensing is interesting given the warning issued by the Garante - a contrast usefully highlighted on LinkedIn by Prof. Christakis).
Purpose test:
Developers should evidence likely societal benefits rather than assume them.
Organisations can rely on interests that are generic, trivial or controversial. However, if they do, they are less likely to pass the balancing test or override someone’s right to object.
Balancing test:
Generative AI developers will need to demonstrate how the requirements set in licences or terms of use are effective in practice. This appears to indicate that the ICO expects a degree of proactive oversight of how the requirements are followed in practice.
Where generative AI model developers are using personal data, they should assess the financial impact on people in the balancing test. For example, a fashion model could lose their income if a generative AI model uses their personal data to create a digital version of them (such as an avatar) to replace them in a fashion show.
The ICO also engaged with Meta about its reliance on LI as a lawful basis for the use of social media data in training AI models. In June, Meta paused its plans in response to a request from the ICO. It then made changes to its approach, including making it simpler for users to object to the processing and providing them with a longer window to do so. While the ICO made clear it was not approving the processing, it appeared to signal that, given the changes, the issue would be monitored and no enforcement action would be taken. Meta have since commenced the processing, in contrast to the EU where it remains paused. Meta also provided some further information in their own statement, including that accounts from users under the age of 18 would not be used.
We also have the Data Use and Access (DUA) Bill currently before the UK Parliament, which will allow the use of “recognised legitimate interests” (see Clause 70) for areas such as processing necessary for national security or emergencies. This would mean that no balancing test would be required.
The DUA Bill also takes the examples of processing that may be necessary to meet an LI from GDPR recital 47 and places them on the face of the UK GDPR, including direct marketing. This would still require the balancing test on a case-by-case basis, but trade bodies such as the Data and Marketing Association hope it will give industry greater confidence to use LI where consent is not required by the UK’s ePrivacy law, PECR - for example, the use of LI for postal marketing or for activities such as data cleansing or matching on databases.
In the context of using biometric technologies in the workplace, the ICO’s enforcement notice to leisure centre operator Serco also demonstrated the risk of commencing processing with an ineffective LIA in place.
The IAF and DPN publish new guidance
We’ve also seen a new publication from the Information Accountability Foundation (IAF). The IAF is an independent, non-profit think tank dedicated to promoting data accountability by design and advancing responsible AI governance.
Their report on LI for AI World (which I helped author) was the culmination of a year-long process that included several stakeholder workshops with DPAs, business, industry bodies and civil society. The project sought to address a gap:
“with respect to processes that business might use to demonstrate this lawful basis, what regulators might expect, and in turn, regulators’ perceptions of business capability to meet these requirements. This gap has led to an environment where trust and confidence in legitimate interests from both sides is low.”
The model contained within the report provides a comprehensive framework for businesses to use to demonstrate their reliance on LI, in particular the balancing test. It adds significant value alongside existing regulator guidance, as it provides a multi-dimensional balancing assessment that draws together stakeholders, interests, rights, freedoms and risks. Alongside LI, the report recognises that balancing is also required under other legislation beyond the GDPR, such as US state laws, e.g. Colorado’s.
The UK-based Data Protection Network also refreshed its guidance on LI, adding new content to cover AI and referencing some of the learnings from the IAF project. The DPN guidance contains examples of where LI may apply and worked case studies. I also helped support the update of this guidance.
Asia - the need for reforms to introduce LI
Meanwhile, many countries, particularly in Asia, lack a legitimate interests condition in their data protection laws, and the case for law reform in these jurisdictions may build as the challenge of relying on consent in the context of AI becomes clearer. For example, Japan’s Act on the Protection of Personal Information (APPI) does not contain a legitimate interests condition as an alternative to consent.
In Singapore, the general Legitimate Interests Exception was introduced as part of a number of amendments to the PDPA in November 2020 that came into effect on 1 February 2021. The Personal Data Protection Commission has also released a standalone Assessment Checklist for Legitimate Interests.
Brazil
Lastly, in 2024 Brazil's data protection authority, the Autoridade Nacional de Proteção de Dados (ANPD), issued guidance on the use of legitimate interests and investigated Meta’s AI training approach, taking a similar line to the UK ICO by pushing for further safeguards before the processing went ahead, though in this case via a series of formal decisions.
In conclusion
This blog illustrates a frenetic range of developments on legitimate interests in 2024 - probably the most significant developments since the GDPR came into force. DPAs are leaving open the possibility of using LI in the context of AI, but the need for rigorous assessments under the three-part test is clear, and DPAs are likely to issue further enforcement decisions next year, showing where the boundaries lie.
Final plug
If you are interested in working with PrivacyX Consulting on your LI policy and process in 2025, including how to consider differing regulatory risks between the UK and EU, please drop me a line for an initial discussion.