Pentagon Seeks to Control the Technical Baseline for AI Technology, Says R&D Official

TL;DR:

  • The Pentagon will host a conference on “Trusted AI and Autonomy” to address the question of whether the Defense Department can rely on AI for future missions.
  • The conference aims to bridge the gap between the Department and the private sector in AI advancements and explore how the military can adopt commercial technology to build trusted AI capabilities.
  • “Technical baseline” is a Pentagon term of art for retaining control over the design and evolution of its AI systems, an objective that clashes with AI innovators’ desire to protect trade secrets.
  • The conference will bring together attendees from industry, academia, and the defense sector to explore the potential and limitations of generative AI, such as LLMs, along with topics like cybersecurity, command systems, and the Pentagon’s revised policy directive on the control, reliability, and ethics of autonomous weapons.
  • The limitations of AI have concerned military leaders for years; the conference aims to explore ways to build trust in AI so the military can adopt the technology effectively and safely in its operations.
  • The military’s effort to build trust in AI includes exploring techniques such as reinforcement learning from human feedback (RLHF) and constitutional AI.
  • Although AI accuracy continues to improve, it is unlikely to reach 100%, so the military will have to assign risk and accuracy percentages to AI systems and ensure people are comfortable with them.

Main AI News:

In the coming weeks, the Pentagon will extend invitations to key players in the defense, industry, and academic sectors for a pioneering conference on “Trusted AI and Autonomy.” As revealed in an exclusive interview with Breaking Defense, the conference aims to address the pressing question of whether the Defense Department can rely on AI for a wide range of future missions.

Maynard Holliday, the Pentagon’s Deputy CTO for Critical Technologies, acknowledges the Department’s lagging behind the private sector in AI advancements. The conference aims to bridge this gap by not only providing a better understanding of the latest developments in AI but also exploring how the military can adopt and adapt commercial technology to build trusted and controllable AI capabilities.

“The Department recognizes the need for swift action in this field but also understands the importance of developing military-specific applications of these commercial technologies,” stated Holliday. “As Under Secretary LaPlante has emphasized in the past, it is crucial for the Department to maintain control over the technical baseline of these technologies, avoiding vendor lock-in and ensuring the evolution of these capabilities aligns with military requirements.”

Maynard Holliday. Source: U.S. Army photo by William Pratt

The “technical baseline” serves as the foundation for defining a complex system and guiding its design, development, and evolution. In Pentagon parlance, controlling the technical baseline means the Department, rather than a vendor, retains the authority to understand, modify, and evolve its AI systems.

However, this objective clashes with AI innovators’ desire to protect trade secrets: OpenAI, for example, has disclosed little about the inner workings of GPT-4, the model behind its latest chatbot. This cloud-based approach, in which AI algorithms run on the company’s servers and users see only the queries and responses, is becoming increasingly common in the industry.
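
For a concrete picture of that query-and-response pattern, here is a minimal Python sketch of how a client typically talks to a hosted model. The endpoint URL, payload fields, and response shape are illustrative assumptions, not any specific vendor’s API.

```python
import requests

# Hypothetical hosted-inference endpoint; the URL, payload fields,
# and response shape are invented for illustration.
API_URL = "https://api.example-ai.com/v1/chat"
API_KEY = "sk-..."  # credential issued by the provider

def query_model(prompt: str) -> str:
    """Send a prompt to the provider's servers and return the reply.

    The model weights, architecture, and training data stay on the
    provider's side; the client only ever sees this request/response.
    """
    resp = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"model": "example-model", "prompt": prompt},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["reply"]
```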

To reconcile these conflicting interests, the DoD must find a solution that balances its desire for control and the private industry’s protection of intellectual property. As Holliday acknowledges, “We will need to develop our own militarily-specific and DoD-specific corpus of data that is updated with our information and jargon so that we can interact seamlessly and trust it.”
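
One common way to put an organization-specific corpus to work is to retrieve relevant documents and supply them to the model as grounding context. The Python sketch below illustrates that generic pattern; it is not the Department’s stated approach, and the `domain_corpus` directory is an invented placeholder.

```python
from pathlib import Path

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Invented placeholder: in practice this would be a curated,
# access-controlled collection of doctrine, manuals, and reports.
docs = [p.read_text() for p in Path("domain_corpus").glob("*.txt")]

vectorizer = TfidfVectorizer(stop_words="english")
doc_matrix = vectorizer.fit_transform(docs)

def retrieve(query: str, k: int = 3) -> list[str]:
    """Return the k corpus documents most similar to the query; these
    can be prepended to a prompt so the model answers in the
    organization's own terms and jargon."""
    scores = cosine_similarity(vectorizer.transform([query]), doc_matrix)[0]
    return [docs[i] for i in scores.argsort()[::-1][:k]]
```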

The Pentagon-hosted conference on “Trusted AI and Autonomy” is tentatively scheduled for June 20-22 at the MITRE facility in McLean, Virginia, and will be hosted by Pentagon R&D Chief Heidi Shyu. Attendees will have the opportunity to hear from Acquisition Under Secretary Bill LaPlante and Chief Digital and AI Officer Craig Martell, as well as from leading innovators in the AI industry.

The conference will bring together a broad network of attendees from industry, academia, and the defense sector, with Holliday expecting attendance to reach “triple digits.” Preparation for the conference began late last year, and it was first publicly mentioned by Under Secretary Shyu at a George Mason University forum in November.

Holliday stated that the conference will examine the perils and potential of “generative artificial intelligence” such as ChatGPT, as well as mitigation of its “hallucinatory tendencies.” However, the agenda is much broader, covering topics like cybersecurity, command systems, and the Pentagon’s revised policy directive on the control, reliability, and ethics of autonomous weapons.

The conference provides a unique opportunity for Pentagon leaders to listen to and engage with innovators driving the AI revolution as they explore solutions to reconcile the Pentagon’s desire for control and the industry’s protection of intellectual property.

The conference will also delve into the wider realm of generative AI, of which LLMs are just one branch. LLMs digest and generate text, while other algorithms can scan thousands of pictures and generate new images, as seen in art generators like Stable Diffusion and DALL-E.

Holliday sees the potential for combining different modalities, such as electro-optics, infrared, and cyber data, to reduce hallucinations and enhance the capabilities of generative AI. This multi-modal approach could also be the key to ensuring reliability in generative AI, enabling it to provide “decision support” in a rapidly changing data environment.
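
As a rough illustration of that multi-modal idea, the sketch below fuses embeddings from several sensor modalities into one joint representation. The dimensions and architecture are arbitrary assumptions, not a description of any fielded system; the intuition is that modalities can cross-check one another, reducing single-source errors.

```python
import torch
import torch.nn as nn

class MultiModalFusion(nn.Module):
    """Toy late-fusion network: each modality gets its own encoder,
    and the concatenated embeddings feed a shared head that produces
    a joint representation for downstream decision support."""

    def __init__(self, eo_dim=128, ir_dim=64, cyber_dim=32, hidden=256):
        super().__init__()
        self.eo_enc = nn.Linear(eo_dim, hidden)        # electro-optical features
        self.ir_enc = nn.Linear(ir_dim, hidden)        # infrared features
        self.cyber_enc = nn.Linear(cyber_dim, hidden)  # cyber telemetry
        self.head = nn.Sequential(nn.ReLU(), nn.Linear(3 * hidden, hidden))

    def forward(self, eo, ir, cyber):
        fused = torch.cat(
            [self.eo_enc(eo), self.ir_enc(ir), self.cyber_enc(cyber)], dim=-1
        )
        return self.head(fused)

model = MultiModalFusion()
joint = model(torch.randn(4, 128), torch.randn(4, 64), torch.randn(4, 32))
print(joint.shape)  # torch.Size([4, 256])
```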

The inputs to such a generative AI system could include intelligence on threats, the status of the friendly force’s “kill web,” and other relevant data. The AI would then generate options for commanders, providing AI-assisted “battle management,” which is a central goal for the Pentagon’s Joint All Domain Command and Control (JADC2) initiative.
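
Purely as a notional illustration of that data flow, the sketch below packages such inputs into a structured prompt that a generative model could turn into candidate courses of action. Every field name is invented, and `query_model` refers to the generic client sketched earlier.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class SituationInputs:
    """Notional decision-support inputs; all fields are invented."""
    threat_intel: list[str]
    kill_web_status: dict[str, str]
    logistics_notes: str

def build_prompt(inputs: SituationInputs) -> str:
    """Serialize the current picture into a prompt asking the model to
    propose options for a commander to review, not to act on its own."""
    return (
        "Given the following situation, list three candidate courses of "
        "action, with the main risk of each:\n"
        + json.dumps(asdict(inputs), indent=2)
    )

example = SituationInputs(
    threat_intel=["inbound track, bearing 045"],
    kill_web_status={"sensor_grid": "degraded", "interceptors": "ready"},
    logistics_notes="resupply expected 0600Z",
)
print(build_prompt(example))  # this string would go to query_model(...)
```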

The conference provides a platform for exploring the potential and limitations of generative AI and how it can support the Pentagon’s mission to provide decision support and enhance its capabilities in the rapidly evolving field of AI.

Building Trust in AI: A Long Journey Ahead 

The limitations of AI have concerned military leaders for years, with the 2015-2016 Defense Science Board study on autonomy bringing the issue to the fore. During briefings with combatant commands, military leaders expressed their hesitation to use AI and autonomy unless they could trust it.

According to Holliday, it will take a significant amount of time for commanders and front-line soldiers to become comfortable with querying AI systems for situational awareness. The military must be able to confirm that the information provided by the AI is accurate and reliable.

Despite these limitations, AI may be the only solution for a growing number of important missions, such as missile defense and cybersecurity, where threats can move too quickly for humans to respond. The Pentagon recognizes the need to depend on some form of AI across a continuum of capability, as future weapon systems, such as hypersonics, directed energy, and cyber effects, will move faster than human decision-making.

While AI-driven management of the entire battle may not be feasible, at the very least, some level of autonomy is necessary at the defensive level to react at machine speed. The conference on “Trusted AI and Autonomy” aims to explore ways to build trust in AI and find solutions for the military to effectively and safely adopt this technology in its operations.

The military’s goal of building trust in AI involves exploring innovative solutions, such as reinforcement learning from human feedback (RLHF) and constitutional AI. RLHF, currently used with ChatGPT and other generative AIs, incentivizes the AI to improve its performance based on human ratings of its output. However, this approach is labor-intensive and may not prevent all bad behaviors, given human bias in the training data.
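
At the heart of the human-feedback step is typically a reward model trained on pairwise human preferences. The sketch below shows the standard preference loss on dummy data; the tiny network and embedding sizes are arbitrary assumptions, whereas real systems score full (prompt, response) pairs with an LLM backbone.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy reward model mapping a response embedding to a scalar score.
reward_model = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 1))
optimizer = torch.optim.Adam(reward_model.parameters(), lr=1e-3)

# Dummy batch: embeddings of responses a human rater preferred vs. rejected.
chosen = torch.randn(8, 64)
rejected = torch.randn(8, 64)

# Standard pairwise preference loss (Bradley-Terry style): push the
# reward of the human-preferred response above that of the rejected one.
optimizer.zero_grad()
loss = -F.logsigmoid(reward_model(chosen) - reward_model(rejected)).mean()
loss.backward()
optimizer.step()
print(f"preference loss: {loss.item():.4f}")
```

The trained reward model then supplies the signal for reinforcement-learning fine-tuning of the generator; collecting enough human comparisons is the labor-intensive part noted above.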

Constitutional AI, on the other hand, requires human involvement at the start to draft a set of principles in machine-readable terms but then uses a computerized “constitution” to automatically rate the AI’s outputs as good or bad. This approach is faster and cheaper than RLHF but still far from perfect.
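
In spirit, the constitutional step swaps human raters for a model that judges outputs against written principles. The sketch below shows only that control flow; the principles are invented, and `ask_judge` is a hypothetical stand-in for a call to whatever model performs the rating.

```python
# Invented example principles; a real "constitution" is a longer,
# carefully drafted set of machine-readable rules.
PRINCIPLES = [
    "The response must not reveal sensitive information.",
    "The response must acknowledge uncertainty instead of guessing.",
]

def ask_judge(question: str) -> bool:
    """Hypothetical yes/no call to a judge model; stubbed here so the
    control flow runs end to end."""
    return True  # placeholder verdict

def constitutional_rating(output: str) -> float:
    """Score an output as the fraction of principles it satisfies.
    Automating this rating is what makes the approach faster and
    cheaper than fresh human labels for every output."""
    verdicts = [
        ask_judge(f"Does this output satisfy the rule: '{p}'?\n\n{output}")
        for p in PRINCIPLES
    ]
    return sum(verdicts) / len(verdicts)

print(constitutional_rating("Unable to confirm; the sensor data is ambiguous."))
```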

According to Holliday, the accuracy of AI will continue to improve, but it is unlikely to reach 100%. The military will therefore have to assign risk and accuracy percentages to AI systems and ensure that people are comfortable with them. The conference on “Trusted AI and Autonomy” will provide a platform for discussing and exploring ways to build trust in AI and ensure its reliability in life-and-death decision-making scenarios.
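
One simple operational reading of assigning risk and accuracy percentages is to gate each AI recommendation on a confidence threshold tied to the stakes of the decision. The tiers and numbers below are invented for illustration.

```python
# Invented risk tiers: higher-stakes decisions demand more model
# confidence before a recommendation bypasses human review.
RISK_THRESHOLDS = {"low": 0.70, "medium": 0.90, "high": 0.99}

def route_recommendation(confidence: float, risk: str) -> str:
    """Accept the AI's recommendation only when its confidence clears
    the threshold for this risk tier; otherwise escalate to a human."""
    if confidence >= RISK_THRESHOLDS[risk]:
        return "auto-accept"
    return "escalate to human operator"

print(route_recommendation(0.95, "medium"))  # auto-accept
print(route_recommendation(0.95, "high"))    # escalate to human operator
```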

Conclusion:

The Pentagon is hosting a conference on “Trusted AI and Autonomy” to address the issue of whether the Defense Department can rely on AI for future missions. The conference aims to bring together key players from industry, academia, and the defense sector to explore the potential and limitations of generative AI, such as LLMs, and to find solutions for the military to effectively and safely adopt AI technology in its operations.

The Pentagon’s goal of building trust in AI involves exploring innovative solutions such as RLHF and constitutional AI. However, although AI accuracy continues to improve, it is unlikely to reach 100%, and the military will have to assign risk and accuracy percentages to AI systems and ensure people are comfortable with them. The conference provides a platform for exploring ways to build trust in AI and ensure its reliability in life-and-death decision-making scenarios.

Source