- NIST launches NIST GenAI to assess generative AI technologies, focusing on text and image generation.
- The program aims to establish benchmarks for content authenticity detection and promote the development of software to combat AI-generated misinformation.
- NIST GenAI’s pilot study involves distinguishing between human-created and AI-generated media, starting with text summaries.
- Registration for the pilot opens on May 1, with results expected by February 2025.
- This initiative responds to President Biden’s executive order on AI transparency and is NIST’s first major AI-related effort since Paul Christiano was appointed to head the agency’s AI Safety Institute.
Main AI News:
In a bid to assess the burgeoning field of generative AI, the National Institute of Standards and Technology (NIST) has unveiled its latest initiative, NIST GenAI. This groundbreaking program is designed to scrutinize the capabilities of generative AI technologies, encompassing both text and image generation.
NIST GenAI is set to roll out a series of benchmarks aimed at gauging the authenticity of content, including the detection of deepfakes, along with fostering the development of software tailored to pinpoint the origins of fabricated or deceptive AI-generated information. “The NIST GenAI program will present a series of challenge problems to evaluate and quantify the capabilities and constraints of generative AI technologies,” states NIST in its press release. “These assessments will serve to identify approaches to uphold information integrity and steer the prudent and ethical utilization of digital content.”
NIST GenAI’s first project is a pilot study aimed at building systems that can distinguish human-authored from AI-generated media, starting with text. While numerous services claim to detect deepfakes, empirical studies and internal testing have shown their reliability to be dubious at best, particularly for text-based content. NIST GenAI is inviting teams from academia, industry, and research laboratories to submit either “generators” – AI systems that produce content – or “discriminators,” systems tasked with identifying AI-generated content.
Generators participating in the study must produce summaries of 250 words or fewer given a specific topic and a corpus of documents, while discriminators must determine whether a given summary is likely AI-generated. To keep the evaluation impartial, NIST GenAI will supply the data used to test the generators; systems trained on publicly available data that do not comply with applicable laws and regulations will not be accepted, NIST asserts.
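To make the two roles concrete, here is a minimal sketch of the generator/discriminator split in Python. It is illustrative only: the Task container, the generate_summary and is_ai_generated functions, and the sentence-length heuristic are assumptions made for exposition, not NIST’s actual submission interface or evaluation method.

```python
from dataclasses import dataclass

@dataclass
class Task:
    topic: str
    documents: list[str]  # the corpus NIST supplies for a topic (hypothetical shape)

def generate_summary(task: Task) -> str:
    """Generator role: produce a summary of 250 words or fewer.

    A real entry would call a language model here; this placeholder
    simply truncates the corpus to stay within the word budget.
    """
    words = " ".join(task.documents).split()[:250]  # enforce the 250-word ceiling
    return " ".join(words)

def is_ai_generated(summary: str) -> bool:
    """Discriminator role: label a summary as AI-generated or not.

    A real entry would use a trained classifier; this stand-in flags
    suspiciously uniform sentence lengths, a toy heuristic only.
    """
    sentences = [s for s in summary.split(".") if s.strip()]
    if len(sentences) < 2:
        return False
    lengths = [len(s.split()) for s in sentences]
    mean = sum(lengths) / len(lengths)
    variance = sum((n - mean) ** 2 for n in lengths) / len(lengths)
    return variance < 4.0  # very even sentence lengths -> guess "AI"

if __name__ == "__main__":
    task = Task(topic="NIST GenAI pilot",
                documents=["NIST is evaluating generative AI.",
                           "The pilot starts with text summaries."])
    summary = generate_summary(task)
    assert len(summary.split()) <= 250
    print("AI-generated?", is_ai_generated(summary))
```

The point of the sketch is only the shape of the two tasks: a generator maps a topic and corpus to a bounded summary, and a discriminator maps a summary to a binary label.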
Registration for the pilot opens on May 1, with the initial round of submissions closing on August 2. The study’s final results are expected to be published in February 2025.
The launch of NIST GenAI and its focus on combating deepfakes come amid a surge in AI-generated misinformation and disinformation. Clarity, a firm specializing in deepfake detection, reports a 900% increase in deepfakes created and posted this year compared with the same period last year. The trend has raised alarm: 85% of Americans say they are concerned about the spread of misleading deepfakes online, according to a recent YouGov poll.
The launch of NIST GenAI is part of NIST’s response to President Joe Biden’s executive order on AI, which mandates greater transparency from AI companies about how their models work and introduces a raft of new standards, including standards for labeling AI-generated content. It also marks NIST’s first major AI-related initiative since the appointment of Paul Christiano, a former OpenAI researcher, to head the agency’s AI Safety Institute.
Christiano’s appointment has been met with controversy owing to his “doomerist” outlook; he once posited a “50% chance AI development could culminate in [humanity’s destruction].” Critics, purportedly including scientists within NIST, harbor concerns that Christiano may steer the AI Safety Institute toward fixating on “fantasy scenarios” rather than addressing tangible and immediate risks posed by AI. NIST asserts that NIST GenAI will serve as a valuable resource informing the endeavors of the AI Safety Institute.
Conclusion:
NIST’s launch of NIST GenAI underscores a concerted effort to address the growing challenge of AI-generated misinformation. By establishing benchmarks and fostering collaboration among academia, industry, and research labs, NIST aims to mitigate the risks posed by deepfakes and strengthen the integrity of digital content. The initiative signals increased scrutiny and regulation of AI technologies, which will likely spur innovation in AI safety and content-authentication solutions. Companies operating in this space should prioritize transparency and compliance to navigate the evolving regulatory landscape effectively.