- Over a dozen tech firms, including OpenAI, Nvidia, and Google, have formed the Coalition for Secure AI (CoSAI) at the Aspen Security Forum.
- CoSAI will operate under OASIS, a nonprofit known for managing open-source cybersecurity projects.
- Founding members include major players such as Amazon Web Services, Microsoft, and Google, as well as other tech giants like Intel and Cisco.
- The coalition’s goals are to develop security tools for AI applications and to create a platform for sharing cybersecurity best practices.
- Three main initiatives will be launched: identifying risks in machine learning workloads, mitigating AI cybersecurity risks, and addressing software supply chain vulnerabilities.
- Future plans include tackling cybersecurity risks from third-party AI models, with further initiatives supervised by a technical steering committee of AI experts.
Main AI News:
In a major development at the Aspen Security Forum, over a dozen leading technology firms have united to form a new industry group focused on enhancing the security of artificial intelligence applications. The newly established Coalition for Secure AI (CoSAI) will operate under the auspices of OASIS, a nonprofit organization known for managing numerous open-source software projects aimed at improving cybersecurity, including automating breach response workflows.
The coalition’s founding members include top-tier entities such as OpenAI and Anthropic PBC, two of the most heavily funded startups in the large language model space, alongside competitors Cohere Inc. and GenLab. In the public cloud sector, major supporters include Amazon Web Services Inc., Microsoft Corp., and Google LLC. Other significant contributors are Nvidia Corp., Intel Corp., IBM Corp., Cisco Systems Inc., PayPal Holdings Inc., Wiz Inc., and Chainguard Inc.
CoSAI’s primary goals are twofold: to create tools and technical guidelines for securing AI applications and to foster an ecosystem for sharing best practices and technologies in AI cybersecurity. “The establishment of CoSAI reflects our commitment to democratizing knowledge and advancements critical for secure AI deployment,” said David LaBianca, co-chair of CoSAI’s governing board. “With OASIS Open’s support, we anticipate fruitful collaboration among leading companies, experts, and academia.”
The coalition is launching three key open-source initiatives to achieve these objectives. The first aims to help software teams identify cybersecurity risks in machine learning workloads by developing a taxonomy of common vulnerabilities and a cybersecurity scorecard for developers.
The second initiative focuses on simplifying the process of mitigating AI cybersecurity risks, aiming to streamline investments and techniques to address security impacts, as noted by Google cybersecurity executives Heather Adkins and Phil Venables in a blog post.
The third initiative addresses risks in the software supply chain, specifically vulnerabilities introduced through externally sourced components. CoSAI plans to streamline the workflow for mapping and analyzing these components, a prerequisite for detecting potential threats.
Additionally, the consortium will work on mitigating cybersecurity risks from third-party AI models, which can introduce vulnerabilities into projects reliant on open-source neural networks. CoSAI intends to launch further initiatives, overseen by a technical steering committee composed of AI experts from both the private sector and academia.
Conclusion:
The formation of CoSAI represents a significant industry effort to address the growing cybersecurity challenges associated with artificial intelligence. By uniting top tech firms around open-source solutions, the coalition aims to build robust frameworks for securing AI applications and to foster collaboration across the sector. The initiative could set new standards for AI security and shape how organizations integrate and protect AI technologies, while its emphasis on shared best practices and technical guidance should improve resilience against cyber threats across industries.