China’s Military AI Collaborates with Commercial Language Models to Enhance Human Understanding

TL;DR:

  • Chinese scientists are using experimental military AI to better understand human adversaries.
  • Collaboration with large language models similar to ChatGPT enhances AI capabilities.
  • Baidu distances itself from the project, stating no affiliation.
  • Military AI can convert sensor data into descriptive language and images, facilitating human-machine interactions.
  • Concerns exist regarding potential risks and the need for careful handling.
  • The research project is detailed in a peer-reviewed paper from December 2023.
  • The goal is to make military AI more humanlike and adept at communication.
  • Commercial language models offer the potential for deeper human understanding.
  • Experiments demonstrate the AI’s ability to predict military moves and compensate for human cognitive biases.
  • Challenges remain in communication between military and commercial models.
  • The US military is also exploring similar technologies.

Main AI News:

Chinese scientists are embarking on a groundbreaking endeavor, leveraging experimental military artificial intelligence (AI) to better comprehend the intricacies of dealing with unpredictable human adversaries. Collaborating with large language models reminiscent of ChatGPT, researchers at a laboratory within the People’s Liberation Army’s (PLA) Strategic Support Force are employing Baidu’s Ernie and iFlytek’s Spark for their AI system tests.

Baidu, however, has distanced itself from the project, asserting that it maintains no affiliation or partnership with the academic institution involved. The company further clarified that any usage of their large language model would have been the publicly available version.

This military AI can transform vast volumes of sensor data and frontline unit reports into descriptive narratives or visual representations, which are then passed to the commercial models. Once comprehension is confirmed, the military AI autonomously generates prompts for deeper exchanges, particularly in combat simulations, all without human intervention.
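The paper does not disclose how this exchange is implemented, but the loop it describes might look roughly like the following Python sketch. Every name here (SensorReport, summarize_sensor_data, query_commercial_llm, simulation_exchange) is hypothetical, and the model call is a stand-in rather than any real Ernie or Spark API:

```python
# Illustrative sketch only: the paper gives no implementation details, so all
# function and class names below are hypothetical placeholders.

from dataclasses import dataclass


@dataclass
class SensorReport:
    unit: str
    position: str
    observation: str


def summarize_sensor_data(reports: list[SensorReport]) -> str:
    """Convert raw frontline reports into a plain-language narrative."""
    lines = [f"{r.unit} at {r.position}: {r.observation}" for r in reports]
    return "Battlefield summary:\n" + "\n".join(lines)


def query_commercial_llm(prompt: str) -> str:
    """Stand-in for a call to an external model such as Ernie or Spark."""
    # In practice this would be an API request; here we return a canned reply.
    return f"Understood. Analysis of: {prompt[:60]}..."


def simulation_exchange(reports: list[SensorReport], rounds: int = 3) -> list[str]:
    """Run several autonomous prompt/response rounds, as the paper describes."""
    transcript = []
    prompt = summarize_sensor_data(reports)
    for _ in range(rounds):
        reply = query_commercial_llm(prompt)
        transcript.append(reply)
        # The military AI generates the next, deeper prompt without human input.
        prompt = f"Given your assessment ('{reply}'), predict the adversary's next move."
    return transcript


if __name__ == "__main__":
    demo = [SensorReport("Recon drone 7", "grid 42N", "armored column moving west")]
    for turn in simulation_exchange(demo):
        print(turn)
```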

Nevertheless, one computer scientist has sounded a note of caution, suggesting that a lack of careful handling could lead to a scenario reminiscent of the Terminator film franchise.

The research project’s details have been disclosed in a peer-reviewed paper published in December 2023 in the Chinese academic journal, Command Control & Simulation. In the paper, project scientist Sun Yifeng and his team from the PLA’s Information Engineering University argue that both humans and machines stand to benefit significantly from their efforts.

“The simulation results assist human decision-making… and can be used to refine the machine’s combat knowledge reserve and further improve the machine’s combat cognition level,” they wrote.

This marks the first public acknowledgment by the Chinese military of their use of commercial large language models. Typically, military information facilities remain isolated from civilian networks for security reasons. Sun’s team, while not divulging specific linkages between the two systems in the paper, emphasized the preliminary and research-oriented nature of their work.

Sun and his colleagues are committed to imbuing military AI with a more “humanlike” quality, enabling it to better grasp the intentions of commanders at all echelons and communicate more effectively with humans. Existing military AI systems, rooted in traditional war gaming, are advancing rapidly but often lack the human touch that users seek.

When confronting cunning and unpredictable human adversaries, machines are susceptible to deception. Commercial large language models, having extensively studied various facets of society, including literature, news, and history, offer the potential to grant military AI a deeper understanding of human behavior.

In one experiment discussed in their paper, Sun’s team simulated a US military invasion of Libya in 2011. The military AI shared information about both armies’ weaponry and deployments with the large language models. After multiple rounds of interaction, the models successfully predicted the next moves of the US military.

Sun’s team contends that such predictions can compensate for human cognitive imperfections, which may lead to underestimating or overestimating threats on the battlefield. They emphasize that machine-assisted situational awareness is a pivotal direction for development.

Nonetheless, Sun’s team acknowledges that communication between military and commercial models poses challenges. The latter were not specifically designed for warfare and occasionally provide vague forecasts that fall short of the specific information military commanders require.

In response, the team has explored multi-modal communication methods. One such approach entails the military AI generating a detailed military map, which is then analyzed more deeply by iFlytek’s Spark. Researchers have observed that this illustrative approach significantly enhances the performance of large language models, enabling them to produce analysis reports and predictions that align with practical military applications.
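Again, the paper does not specify the interface, but a multi-modal hand-off of this kind could in principle be as simple as rendering a situation map and packaging it with a question for an image-capable model. The sketch below assumes a hypothetical JSON payload format and uses the Pillow library only to draw a toy map; the send step is left as a stub:

```python
# Sketch under assumptions: the payload format and the analysis question are
# hypothetical; nothing here reflects a documented Ernie or Spark interface.

import base64
import json
from io import BytesIO

from PIL import Image, ImageDraw  # third-party: pip install pillow


def render_unit_map(units: dict[str, tuple[int, int]]) -> Image.Image:
    """Draw a toy situation map: one labelled dot per unit."""
    img = Image.new("RGB", (400, 300), "white")
    draw = ImageDraw.Draw(img)
    for name, (x, y) in units.items():
        draw.ellipse((x - 5, y - 5, x + 5, y + 5), fill="red")
        draw.text((x + 8, y - 6), name, fill="black")
    return img


def build_multimodal_request(img: Image.Image, question: str) -> str:
    """Package the map image and an analysis question as a JSON payload."""
    buf = BytesIO()
    img.save(buf, format="PNG")
    encoded = base64.b64encode(buf.getvalue()).decode("ascii")
    return json.dumps({"image_png_b64": encoded, "prompt": question})


if __name__ == "__main__":
    situation = {"Blue armor": (120, 80), "Red infantry": (260, 200)}
    payload = build_multimodal_request(
        render_unit_map(situation),
        "Describe the deployment shown and forecast the next likely movement.",
    )
    print(f"Payload size: {len(payload)} bytes")  # would be sent to the model API
```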

The paper hints that the disclosed information is just the tip of the iceberg in this ambitious project. Critical experiments, such as the mutual acquisition of knowledge and skills between military and commercial models, remain undisclosed.

China is not alone in such research endeavors. Numerous US military leaders have expressed interest in ChatGPT and similar technologies, tasking research institutions and defense contractors with exploring their potential applications in intelligence analysis, psychological warfare, drone control, and communication code decryption.

However, a Beijing-based computer scientist cautions that while military AI utilization is inevitable, it demands extreme vigilance. The current generation of large language models is more potent and sophisticated than ever, posing potential risks if granted unrestricted access to military networks and confidential equipment knowledge.

“We must tread carefully. Otherwise, the scenario depicted in the Terminator movies may really come true,” the scientist warns.

Conclusion:

The collaboration between China’s military AI and commercial language models marks a significant step towards enhancing the capabilities of military AI systems. This development underscores the growing interest in leveraging advanced language models for military applications, which could have implications for the broader market, particularly in the fields of AI-driven defense technologies and human-AI interaction solutions. However, it also raises concerns about security and the need for cautious implementation to mitigate potential risks.

Source