European Companies Raise Concerns Over EU’s AI Act Impact on Technological Sovereignty

TL;DR:

  • Over 150 executives from leading European companies criticize the EU’s AI Act in an open letter.
  • Concerns raised include potential negative effects on Europe’s competitiveness and technological sovereignty.
  • Companies argue that the regulations are too extreme and could hinder AI innovation.
  • Specific concerns focus on compliance costs and liability risks for providers of generative AI systems.
  • Companies call for a more flexible and risk-based approach to AI regulation.
  • They also propose the establishment of a regulatory body of AI industry experts.
  • European Parliament approved a draft version of the AI Act, with further negotiations ongoing.
  • OpenAI previously lobbied to modify the AI Act but currently has no plans to leave the European market.

Main AI News:

In a joint effort, leading European companies have criticized the European Union’s newly approved draft rules on artificial intelligence (AI), arguing that the AI Act could jeopardize Europe’s technological sovereignty and hinder competition. Over 150 executives from prominent companies, including Renault, Heineken, Airbus, and Siemens, signed an open letter addressed to the European Parliament, Commission, and member states, emphasizing the potential negative consequences of the AI Act. Their concerns center on the Act’s perceived ineffectiveness and its impact on Europe’s global competitiveness.

On June 14th, the European Parliament approved a draft version of the AI Act after two years of meticulous development, expanding it to cover breakthroughs in AI such as large language models (LLMs) and foundation models like OpenAI’s GPT-4. Despite this milestone, several rounds of negotiation still lie ahead before the law can be implemented, with completion expected later this year.

The signatories of the open letter argue that the current state of the AI Act may impede Europe’s opportunity to regain its position at the forefront of technological advancement. They contend that the approved rules are excessively stringent and risk undermining the region’s technological ambitions, rather than fostering an environment conducive to AI innovation. A particular concern raised by the companies pertains to the Act’s strict regulations specifically targeting generative AI systems, a subset of AI models often falling under the designation of “foundation models.” Under the AI Act, providers of foundation AI models, regardless of their intended application, will be required to register their products with the EU, undergo risk assessments, and fulfill transparency requirements. This includes publicly disclosing any copyrighted data used to train their models.

The open letter suggests that the compliance costs and liability risks imposed on companies developing foundation AI systems are disproportionate and could drive AI providers to withdraw entirely from the European market. The letter emphasizes that Europe cannot afford to remain on the sidelines and urges EU lawmakers to reconsider the rigid compliance obligations placed on generative AI models, proposing instead a focus on “broad principles in a risk-based approach.”

Jeannette zu Fürstenberg, founding partner of La Famiglia VC and one of the signatories, states, “We have come to the conclusion that the EU AI Act, in its current form, has catastrophic implications for European competitiveness. There is a strong spirit of innovation that is being unlocked in Europe right now, with key European talent leaving US companies to develop technology in Europe. Regulation that unfairly burdens young, innovative companies puts this spirit of innovation in jeopardy.”

In addition, the companies call for the establishment of a regulatory body comprising AI industry experts to monitor the application of the AI Act as technology continues to advance.

Responding to the letter, Dragoș Tudorache, a Member of the European Parliament who played a significant role in the development of the AI Act, expressed disappointment that the lobbying of a few companies was capturing attention while other serious companies went unheard. Tudorache maintains that the draft EU legislation provides an industry-led process for defining standards, incorporates industry input in governance, and adopts a light regulatory regime that prioritizes transparency above all else.

OpenAI, the organization behind ChatGPT and DALL-E, previously lobbied the EU to modify an earlier draft of the AI Act in 2022. It specifically requested the removal of a proposed amendment that would have subjected all providers of general-purpose AI systems, a broad category encompassing LLMs and foundation models, to the Act’s most stringent restrictions. Ultimately, this amendment was not included in the approved legislation.

OpenAI’s CEO, Sam Altman, who signed an open letter cautioning against the potential risks of future AI systems, had previously warned that the company might withdraw from the European market if it could not comply with EU regulations. However, Altman later clarified that OpenAI currently has no plans to exit the market.

Conclusion:

The concerns expressed by European companies regarding the EU’s AI Act highlight the potential risks to Europe’s competitiveness and technological sovereignty. The regulations, as they stand, are viewed as too strict and may impede innovation in the AI sector. Compliance costs and liability risks for AI providers, particularly those developing generative AI systems, could lead to market withdrawals and hinder Europe’s progress in AI development. It is crucial for EU lawmakers to carefully consider the feedback from industry leaders and strike a balance between regulation and fostering a favorable environment for AI innovation.

Source