TL;DR:
- NIST launches the U.S. AI Safety Institute Consortium.
- Seeking collaborators with expertise in trustworthy AI development.
- Key objectives include companion resources for generative AI, content differentiation guidance, and AI evaluation benchmarks.
- Consortium fosters collaboration among government, industry, and communities.
- Aims to establish measurement science for safe and trustworthy AI.
- Activities are expected to commence no earlier than December 4, with a workshop for interested organizations on November 17.
Main AI News:
In response to the Biden administration’s recent executive order on artificial intelligence (EO 14110), the Department of Commerce’s National Institute of Standards and Technology (NIST) has unveiled the U.S. AI Safety Institute Consortium. The consortium is poised to play a pivotal role in NIST’s efforts to fulfill its newly assigned responsibilities under the order. NIST is actively seeking collaborators, with a particular focus on organizations that have expertise in developing and deploying trustworthy AI, as well as those that create models or products supporting such AI systems.
Announced during the U.K. AI Safety Summit 2023, the U.S. AI Safety Institute Consortium is a core element of NIST’s ambitious AI safety initiatives. The consortium aims to foster close collaboration among government agencies, private companies, and impacted communities to ensure the safety and trustworthiness of AI systems.
Under the executive order, NIST has been tasked with several critical objectives: developing a companion resource to its AI Risk Management Framework tailored specifically for generative AI, providing guidance on distinguishing human-generated content from AI-generated content, and establishing benchmarks for AI evaluation and auditing.
Laurie E. Locascio, NIST Director and Under Secretary of Commerce for Standards and Technology, emphasized the significance of the U.S. AI Safety Institute Consortium, stating that it will facilitate essential collaboration to help ensure that AI systems are safe and trustworthy.
Furthermore, NIST has outlined the consortium’s mission on a frequently asked questions page, highlighting its role in shaping a new measurement science that will enable the identification of proven, scalable, and interoperable techniques and metrics. These measures will, in turn, promote the development and responsible use of safe and trustworthy AI.
NIST has indicated that the consortium’s activities will commence once enough organizations have submitted letters of interest meeting the stipulated requirements, with a start date no earlier than December 4. In addition, NIST will host a workshop for interested organizations on November 17, providing a platform for collaboration and engagement in this initiative.
Conclusion:
The launch of NIST’s U.S. AI Safety Institute Consortium marks a significant step toward ensuring the safety and trustworthiness of AI systems. This collaborative effort among government agencies, private companies, and communities will play a crucial role in shaping the future of AI, setting standards, and fostering responsible development. It also underscores the growing importance of AI safety in the market, with potential implications for the adoption and regulation of AI technologies. Businesses operating in the AI sector should monitor and engage with this initiative closely to stay aligned with evolving industry standards and practices.