Thales Achieves Success in Sovereign AI Hack Challenge and Unveils Robust Security Solutions for Military and Civil AI

TL;DR:

  • Thales won the French Ministry of Defence’s AI security challenge, demonstrating its expertise in securing AI systems.
  • The CAID challenge involved identifying training images and defeating “unlearning” techniques.
  • Thales’s Friendly Hackers team exposed vulnerabilities in AI training data and models.
  • The BattleBox suite offers robust AI cybersecurity measures, including protection against data poisoning and prompt injection attacks.
  • Thales emphasizes the importance of AI security in military contexts.
  • Thales’s comprehensive AI solutions prioritize explainability, integration, sovereignty, cost-effectiveness, and reliability.
  • Thales combines AI expertise with defense sector know-how, fostering AI excellence.
  • Thales’s Information Technology Security Evaluation Facility (ITSEF) is at the forefront of AI security assessments.

Main AI News:

In the realm of AI security, Thales has emerged triumphant in the recent challenge set forth by the French Ministry of Defence. This contest, known as the CAID challenge, presented a dual task for participants:

  1. Discern which images were utilized for training the AI algorithm and which were reserved for testing within a specified image dataset. Thales’s Friendly Hackers team delved deep into the intricate workings of the AI model, successfully identifying key images used in the application’s training phase. This revelation unveiled valuable insights into the training methodologies employed and the overall quality of the model.
  2. Locate aircraft images employed by an AI algorithm safeguarded using “unlearning” techniques. Unlearning techniques involve the removal of training data, such as images, to safeguard their confidentiality. This approach is pivotal in upholding the sovereignty of an algorithm, especially when it pertains to potential export, theft, or loss. For instance, an AI-equipped drone must distinguish enemy aircraft as threats while recognizing its own army’s aircraft as friendly. The unlearning process erases the latter data to prevent extraction for malicious purposes. Astonishingly, Thales’s Friendly Hackers team managed to re-identify data presumed to be erased, effectively bypassing the unlearning process.
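The first task is, in essence, a membership inference attack: a trained model typically fits its training samples more closely than unseen ones, so per-sample loss leaks membership. The sketch below is a minimal, hypothetical illustration of that loss-thresholding signal using simulated losses; it is not Thales's actual method, and the loss distributions are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: models tend to assign lower loss to samples they were
# trained on. We simulate that gap directly (values are illustrative).
member_losses = rng.normal(loc=0.2, scale=0.1, size=500)     # seen in training
nonmember_losses = rng.normal(loc=1.0, scale=0.3, size=500)  # held-out data

def infer_membership(loss, threshold=0.5):
    """Flag a sample as a likely training member if its loss is low."""
    return loss < threshold

preds_members = infer_membership(member_losses)        # should be mostly True
preds_nonmembers = infer_membership(nonmember_losses)  # should be mostly False

accuracy = (preds_members.sum() + (~preds_nonmembers).sum()) / 1000
print(f"attack accuracy: {accuracy:.2f}")
```

The same signal explains the unlearning bypass: if "erased" samples still produce atypically low loss (or otherwise distinctive model behaviour), an attacker can re-identify them even after the unlearning procedure.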

Exercises of this nature serve to gauge the susceptibility of training data and trained models, which are invaluable assets but also potential points of vulnerability for military operations. Attacks on training data or models could have dire consequences, offering adversaries a distinct advantage. These risks encompass model theft, compromise of data used for military hardware recognition, and the introduction of backdoors to disrupt AI-dependent systems. While AI, especially generative AI, offers substantial operational benefits, safeguarding this technology against emerging threats is of paramount importance for the national defense community.

Thales’s Innovative Approach to Address AI Vulnerabilities

In the defense sector, safeguarding training data and trained models is of utmost importance. The field of AI cybersecurity is evolving rapidly and requires autonomous defenses to counteract the myriad opportunities presented to malicious actors in the realm of AI. To combat these risks and threats, Thales has introduced a comprehensive suite of countermeasures known as the “BattleBox,” designed to offer enhanced protection against potential breaches:

  1. BattleBox Training shields against training-data manipulation, preventing hackers from introducing backdoors.
  2. BattleBox IP embeds digital watermarks into the AI model so that its authenticity and ownership can be verified.
  3. BattleBox Evade aims to thwart prompt injection attacks on models, which can manipulate prompts to bypass safety measures in chatbots employing Large Language Models (LLMs). It also safeguards against adversarial attacks on images, such as the addition of patches to deceive classification models.
  4. BattleBox Privacy establishes a secure framework for training machine learning algorithms, leveraging advanced cryptography and secure secret-sharing protocols to ensure high levels of confidentiality.

In the context of the CAID challenge, encryption of the AI model emerges as a potential solution to counter AI hacking.

“AI offers substantial operational advantages, but it necessitates robust security measures to prevent data breaches and misuse. Thales provides a comprehensive array of AI-based solutions for both civilian and military applications. These solutions are designed to be explainable, integrable into critical systems, and most importantly, sovereign, cost-effective, and reliable, thanks to advanced qualification and validation methods and tools. Thales possesses the dual expertise in AI and specific domains necessary to seamlessly integrate these solutions, significantly enhancing operational capabilities,” stated David Sadek, Thales VP Research, Technology & Innovation, responsible for Artificial Intelligence.

Thales’s Commitment to AI Excellence

As Thales’s defense and security divisions tackle critical requirements with profound implications for safety, the company has established a rigorous ethical and scientific framework for the development of trusted AI. This framework is built on four strategic pillars: validity, security, explainability, and responsibility. Thales’s solutions harness the expertise of over 300 senior AI experts and more than 4,500 cybersecurity specialists, synergizing with the operational proficiency of the Group’s aerospace, land defense, naval defense, space, and other defense and security endeavors.

Thales has cultivated the technical capabilities essential for testing the security of AI algorithms and neural network architectures, identifying vulnerabilities and proposing effective countermeasures. Thales’s Friendly Hackers team, stationed at the ThereSIS laboratory in Palaiseau, stood at the forefront of the AI challenge, securing the top position in both tasks.

Further underscoring its commitment to AI security, Thales’s Information Technology Security Evaluation Facility (ITSEF) is accredited by the French National Cybersecurity Agency (ANSSI) to conduct pre-certification security assessments. During the European Cyber Week, the ITSEF team unveiled a groundbreaking project aimed at compromising the decisions of embedded AI by exploiting the electromagnetic radiation of its processor.

Thales’s cybersecurity consulting and audit teams extend these tools and methodologies to customers seeking to develop their own AI models or establish robust frameworks for deploying and training commercial models. Thales remains steadfast in its dedication to advancing AI technology while ensuring its security and resilience in an ever-evolving landscape.

Conclusion:

Thales’s impressive performance in the AI security challenge underscores the critical need for robust AI cybersecurity solutions in the defense sector. Their BattleBox suite not only addresses current vulnerabilities but also sets a high standard for AI protection. As AI continues to play a pivotal role in military and civilian applications, Thales’s innovations position them as leaders in securing the AI landscape, ensuring both operational advantages and data integrity for their clients.

Source