TL;DR:
- The IEEE Trojan Removal Competition (TRC ’22) has announced its winning team and their highly effective solution.
- TRC ’22 encourages research on enhancing the security of neural networks.
- Six teams outperformed state-of-the-art baseline metrics.
- Classic techniques for mitigating backdoor impacts can degrade model performance.
- No existing defense yet generalizes across data sets and model architectures.
- Insights from the competition and its public benchmark data will help make AI and machine learning safer.
- A second year of the competition is planned to further advance neural network security.
- The IEEE CS Emerging Technology Fund supports iterative development toward more secure machine learning and AI platforms.
Main AI News:
At the recently held virtual Backdoor Attacks and Defenses in Machine Learning (BANDS) workshop during The Eleventh International Conference on Learning Representations (ICLR), the IEEE Trojan Removal Competition announced its winning team, HZZQ Defense, from the Harbin Institute of Technology in Shenzhen. Evaluated on clean accuracy, poisoned accuracy, and attack success rate, the team’s solution proved highly effective, achieving a remarkable 98.14% poisoned accuracy and an attack success rate of only 0.12%. The team will receive the coveted first-place prize of $5,000 USD.
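For readers unfamiliar with the three evaluation metrics, the sketch below shows how they are conventionally computed for backdoor defenses. This is an illustrative assumption, not the exact TRC ’22 evaluation protocol; the function names and toy numbers are hypothetical.

```python
import numpy as np

def clean_accuracy(preds, labels):
    """Fraction of clean test inputs classified correctly."""
    return float(np.mean(preds == labels))

def poisoned_accuracy(preds_on_triggered, true_labels):
    """Fraction of trigger-stamped inputs the defended model still maps
    to their TRUE labels (higher is better)."""
    return float(np.mean(preds_on_triggered == true_labels))

def attack_success_rate(preds_on_triggered, true_labels, target_label):
    """Fraction of trigger-stamped inputs (excluding samples whose true
    class is already the target) that the model classifies as the
    attacker's target label (lower is better)."""
    mask = true_labels != target_label
    return float(np.mean(preds_on_triggered[mask] == target_label))

# Toy run: one of six triggered inputs still fools the model.
true_labels = np.array([0, 1, 2, 3, 4, 5])
triggered_preds = np.array([0, 1, 2, 3, 4, 7])
asr = attack_success_rate(triggered_preds, true_labels, target_label=7)
```

Under these conventional definitions, a strong defense pushes poisoned accuracy toward clean accuracy while driving the attack success rate toward zero.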
According to Prof. Meikang Qiu, chair of IEEE Smart Computing Special Technical Committee (SCSTC) and a full professor of Beacom College of Computer and Cyber Science at Dakota State University, Madison, S.D., U.S.A., “The IEEE Trojan Removal Competition is a fundamental solution to improve the trustworthy implementation of neural networks from implanted backdoors.” He further highlighted the competition’s importance, as it encourages research and development efforts toward addressing an underexplored yet significant issue.
With the establishment of the IEEE CS Emerging Technology Fund in 2022, the organization awarded $25,000 USD to IEEE SCSTC for the “Annual Competition on Emerging Issues of Data Security and Privacy (EDISP),” which gave rise to the IEEE Trojan Removal Competition (TRC ’22). Unlike most existing competitions, which focus only on detecting backdoored models, TRC ’22 encouraged participants to explore solutions that actively enhance the security of neural networks.
Through the development of general, effective, and efficient white-box trojan removal techniques, participants contributed to building trust in deep learning and artificial intelligence, especially for pre-trained models in the wild, which is crucial to protecting artificial intelligence from potential attacks.
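As one concrete example from the white-box family, the classic "fine-pruning" defense removes neurons that stay dormant on clean data, on the assumption that such neurons encode the backdoor trigger. The NumPy sketch below is a toy, single-layer illustration of that pruning step under those assumptions; it is not the method used by any competition team, and `fine_prune` is a hypothetical name.

```python
import numpy as np

def fine_prune(W, clean_inputs, prune_frac=0.2):
    """Toy pruning step in the spirit of fine-pruning: zero out the
    hidden neurons that are least active on clean data, which are
    assumed (heuristically) to be backdoor-related.

    W maps inputs to hidden activations via ReLU(clean_inputs @ W).
    """
    acts = np.maximum(clean_inputs @ W, 0.0)   # ReLU activations on clean data
    mean_act = acts.mean(axis=0)               # per-neuron average activity
    k = int(prune_frac * W.shape[1])           # number of neurons to prune
    dormant = np.argsort(mean_act)[:k]         # least-active neurons
    W_pruned = W.copy()
    W_pruned[:, dormant] = 0.0                 # disable the suspect neurons
    return W_pruned, dormant
```

In practice this pruning step is typically followed by fine-tuning on clean data to recover lost accuracy; pruning too aggressively is exactly the kind of overcorrection the competition findings warn about.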
The IEEE Trojan Removal Competition (TRC ’22) drew 44 teams worldwide, which submitted 1,706 valid entries; six teams developed techniques that outperformed the state-of-the-art baselines published in top machine-learning venues. The competition’s benchmarks, summarizing the models and attacks used, are now being released to enable further research and evaluation, in the hope of giving those developing new AI security techniques diverse and easy access to model settings.
According to Yi Zeng, the competition chair of the IEEE TRC ’22 and a research assistant at the Bradley Department of Electrical and Computer Engineering, Virginia Tech, Blacksburg, Va., U.S.A., “This competition has yielded new data sets consisting of trained poisoned pre-trained models that are of different architectures and trained on diverse kinds of data distributions with really high attack success rates, and now developers can explore new defense methods and get rid of remaining vulnerabilities.”
The competition also highlighted two crucial findings. First, classic techniques for mitigating backdoor impacts can overcorrect, substantially degrading model performance on clean data. Second, existing techniques generalize poorly, with some methods effective only on certain data sets or specific model architectures. These findings suggest that no single, one-size-fits-all approach to mitigating attacks on neural networks is currently available.
Zeng stressed the urgent need for a comprehensive AI security solution, stating, “Ensuring the security of these systems becomes increasingly critical as we continue to witness the widespread impact of pre-trained foundation models on our daily lives. We hope that the insights gleaned from this competition, coupled with the release of the benchmark, will galvanize the community to develop more robust and adaptable security measures for AI systems.”
In the rapidly advancing field of AI and machine learning, it is essential to address the security and privacy concerns that these technologies bring. According to Qiu, the IEEE TRC ’22 competition for EDISP has made a significant impact in this area. He also expressed his gratitude to his colleagues on the steering committee for their valuable support.
The insights and ideas generated during the competition, along with the public benchmark data, will contribute to making the future of machine learning and artificial intelligence safer and more dependable. The team plans to run the competition for a second year, further advancing the security of neural networks.
Nita Patel, the 2023 IEEE Computer Society President, noted that this is precisely the kind of work the Emerging Technology Fund aims to support. It will go a long way toward strengthening iterative developments that will enhance the security of machine learning and AI platforms as the technologies continue to advance.
Conclusion:
The success of the IEEE Trojan Removal Competition (TRC ’22) and the establishment of the IEEE CS Emerging Technology Fund underscore the growing importance of addressing the security and privacy concerns of AI and machine learning technologies. The competition’s benchmarks and findings provide critical insights and ideas that will contribute to making these technologies safer and more dependable.
As such, businesses operating in the AI and machine learning market should take note of these developments and prioritize the integration of robust and adaptable security measures into their systems. Doing so will not only protect against potential attacks but also build trust with customers and stakeholders, creating a competitive advantage in an increasingly crowded market.