Biden administration to require tech giants to inform government about significant AI projects

TL;DR:

  • The Biden administration is set to enforce regulations requiring tech giants like OpenAI, Google, and Amazon to inform the US government when they train large AI models with substantial computing power.
  • Companies must share details of safety testing on their AI projects, offering the government insights into sensitive endeavors.
  • These measures stem from a White House executive order aimed at enhancing transparency and oversight in AI development.
  • Specific computational thresholds have been set, with potential repercussions for companies that surpass them.
  • Cloud computing providers like Amazon, Microsoft, and Google must also report foreign companies using their resources for AI training.
  • Experts argue that these reporting requirements are necessary amid rapid AI advancements, emphasizing the need for comprehensive AI regulation.
  • The National Institute of Standards and Technology (NIST) is working on AI safety standards and guidelines for companies.

Main AI News:

The Biden administration is poised to enact a new regulation requiring tech giants like OpenAI, Google, and Amazon to notify the US government about their ventures into advanced AI projects involving large language models. This move comes in the wake of OpenAI’s ChatGPT, which made waves in the tech world last year, taking even Silicon Valley and Washington, DC, by surprise.

Under the proposed regulation, tech companies will be compelled to disclose when they train a substantial AI model employing significant computing power, thereby granting the US government access to crucial insights into their confidential projects. This initiative also requires these companies to furnish information regarding safety testing conducted on their emerging AI innovations.

OpenAI, in particular, has been rather discreet about its work on a potential successor to its current flagship offering, GPT-4. The US government may soon become the first to receive alerts regarding the commencement of work or safety testing for GPT-5. OpenAI, however, has not yet provided any official response to these developments.

“We’re utilizing the Defense Production Act, a presidential authority, to institute a survey that mandates companies to share details every time they embark on training a new large language model, along with providing the safety data for our review,” remarked Gina Raimondo, the US Secretary of Commerce, during a recent event held at Stanford University’s Hoover Institution. The specific timeline for implementing this requirement and what the government will do with the received information remain undisclosed, with further details expected in the coming week.

These stringent measures have been set in motion as part of a comprehensive White House executive order issued in October. That order tasked the Commerce Department with formulating a framework that requires companies to inform US officials about the development of potent new AI models. The information to be disclosed encompasses the scale of computing power employed, data ownership, and safety testing.

The October order stipulates that reporting to the Commerce Department will be required when an AI model is trained using more than 10²⁶ floating-point operations (flops), roughly 100 septillion operations, with a threshold 1,000 times lower for large language models trained on DNA sequencing data. While neither OpenAI nor Google has disclosed the computing power used to train GPT-4 or Gemini, 10²⁶ flops is thought to be slightly more than what was used to train GPT-4.
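For a rough sense of what crossing that threshold means, total training compute is often approximated with the rule of thumb of about 6 flops per parameter per training token. The sketch below is illustrative only: the function names, parameter count, and token count are assumptions chosen for demonstration, not figures disclosed by OpenAI, Google, or the Commerce Department.

```python
# Illustrative sketch: estimating whether a hypothetical training run would
# cross the executive order's reporting thresholds. All numbers below are
# assumptions for demonstration, not disclosed figures.

GENERAL_THRESHOLD_FLOPS = 1e26       # ~100 septillion operations (general models)
BIO_SEQUENCE_THRESHOLD_FLOPS = 1e23  # 1,000x lower for DNA-sequencing models


def estimate_training_flops(num_parameters: float, num_tokens: float) -> float:
    """Common rule-of-thumb estimate: ~6 flops per parameter per token."""
    return 6 * num_parameters * num_tokens


def must_report(flops: float, dna_sequencing_data: bool = False) -> bool:
    """Compare an estimated training run against the relevant threshold."""
    threshold = BIO_SEQUENCE_THRESHOLD_FLOPS if dna_sequencing_data else GENERAL_THRESHOLD_FLOPS
    return flops >= threshold


if __name__ == "__main__":
    # Hypothetical frontier-scale run: 1 trillion parameters, 20 trillion tokens.
    flops = estimate_training_flops(1e12, 20e12)
    print(f"Estimated training compute: {flops:.2e} flops")
    print("Reporting required:", must_report(flops))
```

Under these assumed numbers the estimate comes out to about 1.2 × 10²⁶ flops, just over the general reporting threshold; a model trained primarily on DNA sequencing data would trip the lower threshold far earlier.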

Additionally, the Commerce Department is poised to implement another requirement outlined in the October executive order, compelling cloud computing providers like Amazon, Microsoft, and Google to inform the government when foreign companies use their resources to train large language models, provided the training run surpasses the same 100 septillion flop threshold.

This announcement coincided with Google’s unveiling of data showcasing the prowess of its latest AI model, Gemini, which outperformed OpenAI’s GPT-4 in certain industry benchmarks. Should Google’s upcoming project leverage a substantial share of its cloud computing resources, the Commerce Department may receive early notifications regarding the successor to Gemini.

The field of AI saw rapid progress last year, prompting some experts and industry leaders to call for a temporary halt to the development of AI models more powerful than GPT-4. The challenge for the federal government is that a model’s potential danger may not hinge solely on whether it crossed a computational threshold during training.

In light of recent AI advancements and concerns about its capabilities, these reporting requirements are considered proportionate by experts. “Companies are spending many billions on AI training, and their CEOs are warning that AI could be superintelligent in the next couple of years,” noted Dan Hendrycks, director of the nonprofit Center for AI Safety. “It seems reasonable for the government to be aware of what AI companies are up to.”

Echoing this sentiment, Anthony Aguirre, executive director of the Future of Life Institute, stressed the need for more comprehensive AI regulation and oversight. He emphasized, “Reporting those AI training runs and related safety measures is an important step. But much more is needed. There is strong bipartisan agreement on the need for AI regulation, and hopefully, Congress can act on this soon.”

Secretary Raimondo further disclosed that the National Institute of Standards and Technology (NIST) is actively working to establish standards for testing the safety of AI models as part of the forthcoming US government AI Safety Institute. These standards aim to assess the risks associated with AI models by subjecting them to rigorous adversarial testing, commonly known as “red teaming.” Additionally, Raimondo highlighted the development of guidelines to help companies identify potential risks within their AI models, including measures to prevent human rights abuses. While the executive order sets a deadline of July 26 for NIST to establish these standards, some experts working with the agency have raised concerns about their feasibility due to limited funding and expertise.

Conclusion:

These new regulations signify a significant shift in the AI landscape, with the US government taking proactive steps to monitor and regulate AI projects by major tech players. This increased transparency and oversight could lead to more responsible AI development, fostering trust in the market and ensuring compliance with emerging regulations. Companies will need to adapt to these requirements and invest in safety testing and compliance to navigate this evolving landscape successfully.

Source