TL;DR:
- Stanford study shows that major AI models, including Google’s PaLM 2 and OpenAI’s GPT-4, do not comply with the EU AI Act.
- EU has been working on comprehensive regulations for AI technologies, culminating in the AI Act.
- The study evaluates compliance with 12 of the Act’s requirements for foundation model providers.
- Significant disparities in compliance levels among providers, with some scoring below 25%.
- Lack of transparency in areas such as disclosing copyrighted training data and energy usage.
- Providers of openly released models disclose their resources more comprehensively than providers of closed releases.
- None of the studied foundation models fully comply with the current regulations of the AI Act.
- Executives from 150 prominent companies express concerns over the potential impact on Europe’s competitiveness.
- Compliance challenges may prompt companies to consider leaving the EU and affect AI development.
- Enhanced collaboration between policymakers and model providers is necessary to address gaps and challenges.
Main AI News:
A recent study conducted by Stanford University has shed light on the non-compliance of prominent AI models, including Google’s PaLM 2 and OpenAI’s GPT-4, with the European Union’s AI Act. The findings have raised concerns about the ability of leading tech companies to meet the requirements set forth by the upcoming legislation.
Over the past two years, the EU has been diligently working towards establishing comprehensive regulations to govern AI technologies, resulting in the development of the AI Act. The Act recently underwent a vote in the European Parliament, gaining overwhelming support with 499 votes in favor, 28 against, and 93 abstentions.
The legislation aims to impose explicit obligations on foundation model providers, such as OpenAI and Google, with the goal of regulating AI usage and mitigating the risks associated with this new technology. However, as AI systems continue to evolve rapidly, lawmakers are struggling to keep pace with the advancements, heightening the need for regulation.
Stanford University’s Center for Research on Foundation Models (CRFM) conducted a study focusing on the European Parliament’s version of the AI Act. The researchers selected 12 requirements out of the 22 directed at foundation model providers, which could be evaluated using publicly available information. These requirements were categorized into data resources, compute resources, the model itself, and deployment practices.
To assess compliance, the researchers devised a 5-point rubric for each of the 12 requirements. They evaluated 10 major model providers, including OpenAI, Google, Meta, and Stability AI, assigning scores ranging from 0 to 4 based on adherence to the outlined requirements. The study uncovered significant disparities in compliance levels, with some providers scoring below 25 percent. Moreover, it revealed a lack of transparency among model providers, with several areas of non-compliance identified.
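To make the arithmetic behind those percentages concrete, here is a minimal sketch of how a 12-requirement, 0-to-4-point rubric translates into an overall compliance percentage. The 12 requirements, the 0-4 scale, and the four categories (data, compute, model, deployment) come from the article; the provider name and individual scores below are hypothetical placeholders, not figures from the study.

```python
from dataclasses import dataclass

MAX_POINTS_PER_REQUIREMENT = 4
NUM_REQUIREMENTS = 12  # 12 of the Act's 22 requirements were assessed


@dataclass
class ProviderAssessment:
    name: str
    scores: dict[str, int]  # requirement id -> 0..4 points awarded

    def total_points(self) -> int:
        return sum(self.scores.values())

    def compliance_pct(self) -> float:
        # Share of the maximum achievable 48 points (12 requirements x 4 points).
        return 100 * self.total_points() / (NUM_REQUIREMENTS * MAX_POINTS_PER_REQUIREMENT)


# Hypothetical example: a provider scoring 2/4 on half the requirements
# and 0/4 on the rest lands at exactly 25% overall compliance.
example = ProviderAssessment(
    name="example-provider",
    scores={f"req_{i}": (2 if i < 6 else 0) for i in range(NUM_REQUIREMENTS)},
)
print(f"{example.name}: {example.total_points()}/48 points "
      f"({example.compliance_pct():.0f}%)")
```

Under this reading, a provider described as scoring "below 25 percent" earned fewer than 12 of the 48 available points across the assessed requirements.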
Notably, model providers have failed to disclose the copyright status of their training data, a crucial factor in shaping new copyright laws tailored to AI-generated content. Additionally, most providers do not disclose the energy consumed and emissions produced during model training, and they lack transparent methodologies for mitigating potential risks, both of which are key aspects of the AI Act.
The study also highlighted discrepancies between open and closed AI model providers. Open releases, such as Meta’s LLaMA, demonstrated more comprehensive disclosure of resources compared to restricted or closed releases like OpenAI’s GPT-4.
The analysis of the studied foundation models revealed that none of them fully comply with the current regulations outlined in the AI Act draft. Although the study acknowledges room for improvement, it emphasizes that the high-level obligations established in the AI Act present challenges for many companies.
In response to the proposed regulations, executives from 150 prominent companies, including Siemens, Renault, and Heineken, expressed concerns in an open letter addressed to the European Commission, the parliament, and member states. They believe that the draft legislation will jeopardize Europe’s competitiveness and technological sovereignty without effectively addressing the challenges faced by the industry.
These executives warn that the stringent rules will place heavy burdens on foundation models, potentially driving companies to consider leaving the EU and causing investors to withdraw their support for AI development in Europe. Such outcomes could result in the EU falling behind the United States in the global AI development race.
Conclusion:
The non-compliance of major AI models with the EU AI Act, as revealed in the Stanford study, signals significant challenges for the market. Leading tech companies will need to align themselves with the requirements of the upcoming legislation to avoid potential repercussions. The lack of transparency and the disparities in compliance levels raise concerns about the effectiveness of regulations in mitigating risks associated with AI. Enhanced collaboration between policymakers and model providers is crucial to bridge gaps and ensure the appropriate implementation of the AI Act, fostering a competitive and technologically sovereign market environment.