Big Tech companies are already attempting to weaken Europe’s regulations on AI

TL;DR:

  • European lawmakers are finalizing regulations for artificial intelligence use; if passed, the rules would make the EU the first major jurisdiction outside of China to regulate AI.
  • The EU Artificial Intelligence Act is expected to ban the use of AI for controversial purposes such as social scoring and facial recognition in public spaces, and to require companies to disclose copyrighted material used for AI training.
  • A key point of contention is whether General Purpose AI should be classified as high-risk, which would subject it to the strictest regulations and penalties for misuse.
  • Big tech companies and a conservative bloc of politicians argue against the classification, while progressive politicians and tech experts advocate for it.
  • Over 50 institutions and AI experts have published an open letter calling for General Purpose AI not to be exempt from regulation.
  • Big tech companies like Google and Microsoft are pushing back against the regulations, arguing that General Purpose AI systems are not inherently dangerous and only become so when applied to “high-risk” use cases by smaller companies.
  • These companies contend that regulatory responsibility should fall on the user who deploys General Purpose AI in a high-risk use case, not on the developer of the system itself.
  • The EU AI Act was first drafted in 2021, before powerful General Purpose AI systems existed, and the EU may face ongoing debates over how to regulate them.
  • Critics argue that the EU has structured the AI Act in an outdated fashion, as General Purpose AI systems don’t have an inherent use case, and it may be years before their full capabilities and limitations are known.

Main AI News:

As the global AI industry continues to grow and evolve, European lawmakers are preparing to take center stage by putting the finishing touches on a comprehensive set of regulations for artificial intelligence. If passed, these regulations will make the EU the first major jurisdiction outside of China to adopt targeted AI regulation.

The forthcoming EU Artificial Intelligence Act has been the subject of intense debate and lobbying, with stakeholders on both sides of the issue working tirelessly to shape its scope and impact. However, according to the Financial Times, lawmakers are now close to agreeing on a draft version of the legislation, which will then move on to negotiations between the EU’s member states and the executive branch.

The EU AI Act is expected to ban the use of AI for controversial purposes such as social scoring and facial recognition in public spaces, and will require companies to disclose any copyrighted material used to train their AI systems. These regulations have the potential to set a global standard for AI deployment and usage, as companies may find it easier to comply with EU rules across the board rather than tailoring their products to different regions.

Amba Kak, the executive director of the AI Now Institute, a policy research group at NYU, notes that the EU AI Act will play a major role in shaping the future of AI regulation. “The EU AI Act is definitely going to set the regulatory tone around: what does an omnibus regulation of AI look like?” says Kak. With the AI industry poised for continued growth, the EU’s rules are likely to shape the field well beyond Europe.

The EU Artificial Intelligence Act is facing a key point of contention regarding the classification of General Purpose AI, such as the kind used in ChatGPT, as high-risk and subject to the strictest regulations and penalties for misuse. On the one hand, big tech companies and a conservative bloc of politicians argue that labeling General Purpose AI as high-risk would stifle innovation. On the other hand, progressive politicians and tech experts argue that exempting these powerful AI systems from regulations would be equivalent to excluding social media giants like Facebook and TikTok from social media regulations.

Those advocating for the regulation of General Purpose AI assert that only the developers of these systems have a complete understanding of their biases and potential harms and, therefore, should be held accountable for ensuring AI safety. They argue that if the responsibility for AI safety is shifted to smaller companies, the big tech companies at the heart of the AI industry will be let off the hook.

Recently, over 50 institutions and AI experts published an open letter calling for General Purpose AI not to be exempt from EU regulation. “Considering [General Purpose AI] as not high-risk would exempt the companies at the heart of the AI industry, who make important decisions about how these models are shaped, how they’ll work, and who they’ll work for, during the development and calibration process,” says Meredith Whittaker, the president of the Signal Foundation and a signatory of the letter. “This exemption from scrutiny would occur even though these General Purpose AIs are core to their business models.” As the debate continues, the classification of General Purpose AI in the EU AI Act will likely have far-reaching implications for the future of AI regulation and development.

Big Tech companies like Google and Microsoft, which have invested heavily in AI, are pushing back against the proposed EU Artificial Intelligence Act, according to a report by the Corporate Europe Observatory. The report states that these companies argue that General Purpose AI systems, such as ChatGPT, are not inherently dangerous and only become so when smaller companies apply them to “high-risk” use cases.

Google has submitted a document to EU commissioners stating that “General-purpose AI systems are purpose neutral” and that categorizing them as high-risk could harm consumers and stifle innovation in Europe. Microsoft has made similar arguments through industry groups of which it is a member, stating that the AI Act does not need a specific section on General Purpose AI and that it is not possible for providers of GPAI software to anticipate every AI solution built on top of it. Through The Software Alliance, an industry lobby group it founded, Microsoft has also lobbied against regulations “unduly burdening innovation.”

Both companies hold that regulatory responsibility should rest with the user who deploys General Purpose AI in a high-risk use case, rather than with the developer of the system itself. As the EU forges ahead with its AI rules, these arguments from industry leaders are sure to play a major role in shaping the future of AI regulation in Europe and beyond.

The EU Artificial Intelligence Act, first drafted in 2021, was created at a time when AI was limited to narrow use cases. However, in the past two years, Big Tech companies have successfully developed and launched powerful General Purpose AI systems, such as OpenAI’s GPT-4 and Google’s LaMDA, that can perform a wide range of tasks, both harmless and high-risk.

Under the current business model, these big tech companies license their General Purpose AI systems to other businesses, which then adapt them for specific tasks and make them accessible through an app or interface. Critics argue that the EU has placed itself in a difficult position by structuring the AI Act based on outdated risk categories for different uses of AI.
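To make that chain of responsibility concrete, here is a minimal sketch in Python (all names, such as GeneralPurposeModel and ResumeScreener, are hypothetical and not drawn from any real library or API): an upstream provider supplies a purpose-neutral model, and a downstream company wraps it for a specific task of the kind the draft Act treats as high-risk.

```python
# Hypothetical sketch of the GPAI supply chain the AI Act must grapple with.
# All names here are illustrative; none come from a real library or API.

class GeneralPurposeModel:
    """Upstream: a purpose-neutral system licensed out by a Big Tech provider.
    It has no inherent use case; it simply completes prompts."""

    def generate(self, prompt: str) -> str:
        # Stand-in for a call to a licensed large language model.
        return f"[model output for: {prompt!r}]"


class ResumeScreener:
    """Downstream: a smaller company adapts the same model for hiring,
    the kind of use case the draft Act treats as high-risk."""

    def __init__(self, model: GeneralPurposeModel):
        self.model = model

    def score(self, resume_text: str) -> str:
        # The risk arises here, in the application layer, not in the model
        # itself: that is Google's and Microsoft's argument. Critics counter
        # that biases baked in upstream surface downstream, where the
        # deployer cannot see or fix them.
        prompt = f"Rate this candidate for the role:\n{resume_text}"
        return self.model.generate(prompt)


# A downstream deployer composes the two layers:
screener = ResumeScreener(GeneralPurposeModel())
print(screener.score("Ten years of logistics experience..."))
```

The dispute in Brussels is essentially over which of these two layers the Act’s strictest obligations should attach to.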

As Helen Toner, a member of OpenAI’s board and director of strategy at Georgetown’s Center for Security and Emerging Technology, explains: “The problem is that large language models – General Purpose models – don’t have an inherent use case, which is a big shift in how AI works. Once these models are trained, they’re not trained to do one specific thing, and even the people who create them don’t actually know what they can and can’t do.”

Toner adds that it may be years before the full capabilities and limitations of General Purpose AI systems are known, which presents a challenge for legislation structured around categorizing AI systems based on their use case. As the AI industry continues to evolve, the EU will likely face ongoing debates and discussions around the regulation of these powerful AI systems.

Conclusion:

The European Union is preparing to implement comprehensive regulations for artificial intelligence, rules that could set a global standard for AI deployment and usage. The EU Artificial Intelligence Act has been the subject of intense debate and lobbying, with big tech companies and a conservative bloc of politicians pushing back against classifying General Purpose AI systems as high-risk and subjecting them to strict obligations and penalties.

The classification of General Purpose AI is the key point of contention and will likely have far-reaching implications for the future of AI regulation and development. Critics contend that the Act’s risk categories, tied to specific uses of AI, are already outdated, and as the industry continues to evolve, the EU will likely face ongoing debates over how to regulate these powerful systems.

Source