OpenAI Faces GDPR Scrutiny: Italian DPA Investigates ChatGPT Privacy Concerns

TL;DR:

  • Italian Data Protection Authority (DPA) investigates OpenAI for potential GDPR violations by its AI chatbot, ChatGPT.
  • OpenAI has been given 30 days to defend against the allegations, with potential fines of up to €20 million or 4% of global annual turnover, whichever is higher.
  • Concerns include the lack of a suitable legal basis for collecting and processing personal data, the tool’s tendency to ‘hallucinate’ inaccurate information about individuals, and child safety.
  • OpenAI faces challenges in justifying data processing methods and relies on consent or legitimate interests as legal bases.
  • Legitimate interests may not hold up as a legal basis, given the potential for harm to individuals and how far the data use departs from what they would have expected.
  • Multiple EU countries are examining ChatGPT’s GDPR compliance, including Poland.

Main AI News:

OpenAI, the pioneering AI company, finds itself under intense scrutiny as the Italian Data Protection Authority (DPA) raises concerns about potential violations of European Union privacy law by its AI chatbot, ChatGPT. While the specific details of the Italian authority’s findings remain undisclosed, OpenAI has been notified and given 30 days to respond to the allegations.

In the event of confirmed breaches of the pan-European General Data Protection Regulation (GDPR), fines of up to €20 million or 4% of global annual turnover, whichever is higher, may be imposed. More significantly, data protection authorities (DPAs) have the power to order changes to data processing practices to remedy violations. This could compel OpenAI to modify its operations or even withdraw its services from EU Member States where privacy authorities demand alterations that it opposes.
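For readers unfamiliar with how the GDPR penalty ceiling works, the two figures are alternatives and the higher one applies. A minimal sketch of that calculation in Python, assuming the standard Article 83(5) cap (the greater of €20 million or 4% of total worldwide annual turnover); the turnover figure used here is purely illustrative, not OpenAI’s actual revenue:

```python
# Sketch of the GDPR Article 83(5) fine ceiling:
# the greater of a fixed EUR 20 million or 4% of global annual turnover.

FIXED_CAP_EUR = 20_000_000
TURNOVER_RATE = 0.04  # 4% of total worldwide annual turnover


def max_fine_eur(global_annual_turnover_eur: float) -> float:
    """Return the maximum possible fine under the higher-tier GDPR cap."""
    return max(FIXED_CAP_EUR, TURNOVER_RATE * global_annual_turnover_eur)


if __name__ == "__main__":
    hypothetical_turnover = 2_000_000_000  # EUR 2 billion, illustrative only
    print(f"Maximum fine: EUR {max_fine_eur(hypothetical_turnover):,.0f}")
    # -> Maximum fine: EUR 80,000,000 (4% exceeds the EUR 20M floor here)
```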

The central issue revolves around the legality of OpenAI’s data processing methods for training its AI models, particularly ChatGPT. Last year, the Italian authority temporarily suspended ChatGPT’s local data processing due to concerns about GDPR compliance. The Garante, Italy’s DPA, highlighted the absence of a suitable legal basis for collecting and processing personal data for training the algorithms behind ChatGPT. Moreover, the AI tool’s tendency to ‘hallucinate,’ generating inaccurate information about individuals, and concerns regarding child safety have compounded the situation.

In total, the Italian authority suspects that ChatGPT may be breaching several articles of the GDPR, including Articles 5, 6, 8, 13, and 25. While OpenAI made adjustments to address some of the issues raised by the DPA, it is now facing preliminary conclusions suggesting that the tool is indeed violating EU law.

A pivotal challenge OpenAI faces in the European Union is establishing a valid legal basis for processing the personal data of people in the EU. The GDPR provides six possible legal bases, with consent and legitimate interests being the most relevant in this context. However, OpenAI’s position is complicated by the vast amount of personal data it has gathered from the public Internet without obtaining explicit consent. Even relying on legitimate interests requires giving data subjects the ability to object and have processing of their information stopped, a complex and potentially costly undertaking.

Furthermore, the Garante must determine whether legitimate interests can serve as a valid legal basis for OpenAI’s data processing. This requires balancing the interests of the data controller against the rights and freedoms of individuals, considering whether individuals would reasonably have expected such use of their data and whether it risks causing them unjustified harm. If those expectations are not met and risks of harm exist, legitimate interests may not be deemed a valid legal basis.

Notably, the EU’s top court has previously ruled against legitimate interests as a basis for extensive tracking and profiling, raising questions about its applicability in this context. OpenAI’s endeavors to justify processing vast amounts of data for commercial generative AI purposes face significant hurdles, considering the potential risks to individuals, from disinformation to identity theft.

While the Garante has not publicly released the details of its preliminary findings, it is not the only authority investigating ChatGPT’s GDPR compliance. Poland is also conducting a separate probe based on a complaint about the tool producing inaccurate information and about OpenAI’s response to the issue.

OpenAI has responded to the increasing regulatory risk across the EU by establishing a physical presence in Ireland and seeking “main establishment” status there. That status would make Ireland’s Data Protection Commission its lead supervisory authority for GDPR oversight, potentially simplifying OpenAI’s regulatory landscape.

However, OpenAI’s ongoing challenges highlight the complexity of navigating the EU’s data protection landscape, with multiple DPAs independently examining the company’s practices. While efforts to coordinate oversight through the European Data Protection Board may lead to harmonized outcomes, no guarantees exist regarding the conclusions of ongoing ChatGPT investigations in various EU countries.

Conclusion:

The Italian DPA’s investigation into OpenAI’s ChatGPT highlights the growing regulatory challenges AI companies face in navigating the EU’s stringent data protection laws. The potential for substantial fines and the need to establish a valid legal basis for data processing could reshape the market by encouraging greater transparency and adherence to privacy regulations in the AI industry. Companies operating in the EU will need to prioritize GDPR compliance to avoid regulatory scrutiny and financial penalties.
