- HITRUST forms the AI Assurance Working Group to enhance security and trust in AI technologies for businesses.
- The group aims to establish a model for security control assurances tailored for AI systems.
- Industry experts collaborate to ensure transparency and consistency in managing security risks associated with AI models and services.
- The initiative focuses on scalable security controls, empowering consumers and providers to understand and validate security measures.
- Efforts align with evolving regulatory requirements, leveraging established frameworks and emerging AI standards.
- Deliverables include AI risk factors, security control requirements, and a shared responsibility model.
- HITRUST advances AI Assessment and Certification, offering resources to understand AI risk factors and enhance risk management posture.
Main AI News:
HITRUST has launched the HITRUST AI Assurance Working Group to strengthen the security and trustworthiness of AI technologies used in business. The group will develop a model for security control assurances tailored to AI systems and provide a clear path to AI Assessment and Certification, with the aim of setting new standards for AI security.
The Working Group brings together industry experts and thought leaders from both AI providers and early adopters. Its goal is an ecosystem in which users and providers can effectively manage the security risks inherent in their AI models and services, with the transparency and consistency needed to build trust among stakeholders.
Central to the group’s mission is developing scalable security controls that are both properly implemented and demonstrably effective, so that organizations across diverse business settings can use AI technologies with confidence. Whether a system is developed in-house or built on common large language models and service environments, the aim is the same: to uphold stringent security standards.
A key part of the initiative is enabling consumers of AI models and other relying parties to understand, demonstrate, and validate the security controls embedded in the services ecosystem, in the context of prevailing business risks. The effort builds on the HITRUST shared responsibility and control inheritance model, which is already widely adopted by leading cloud service providers and key players in AI and machine learning.
Because AI security and its regulatory landscape are evolving quickly, the Working Group’s scope and objectives will evolve with them. Initial priorities include identifying AI-specific security risks, defining inherent risk factors for AI, and formulating a cohesive shared responsibility model, among other areas of emphasis.
The Working Group will build on the HITRUST CSF v11.2 and draw on emerging AI standards, including the NIST AI Risk Management Framework (AI RMF) and the ISO/IEC 23894 guidance on AI risk management.
Robert Booker, Chief Strategy Officer for HITRUST, underscored the significance of the initiative: “As AI continues its rapid proliferation across industries, the imperative for organizations offering AI solutions to grasp the intricacies of AI-related risks, understand their attendant responsibilities in risk management, and procure reliable security assurances from service providers has never been more pressing. The Working Group stands poised to redefine the contours of AI Assurances, focusing on pragmatic, scalable, and verifiable approaches to security and risk management that inspire trust among all stakeholders.”
Composed of seasoned experts from sectors including healthcare and technology, the Working Group will review and provide feedback on security frameworks and controls that are transparent, prescriptive, and scalable. The collaboration is designed to help AI service providers and users alike manage the security risks of their AI systems, commensurate with the identified risk levels.
“AI adoption within the enterprise is experiencing unprecedented growth, with business leaders across all sectors seeking assurance in mitigating AI-related risks. This is a pivotal moment in the collective effort to strengthen AI security and trust, and HITRUST is at the vanguard, laying the groundwork for actionable, real-world solutions in AI assurance within the healthcare domain,” remarked Omar Khawaja, Field CISO at Databricks. “Together with HITRUST, we are charting a course towards a future characterized by robust standards that global enterprises can rely upon.”
In step with the growing public-sector and regulatory discourse surrounding AI, the Working Group is committed to translating emerging standards and guidance into actionable insights and implementations. Planned deliverables include AI and ML inherent risk factors, AI security control requirements, an AI Security Shared Responsibility Model, and AI risk management assurance reports, among others.
This initiative is the second step in HITRUST’s path toward AI Assessment and Certification in 2024, following the integration of AI risk management controls into the HITRUST Common Security Framework (CSF) v11.2.0 and the rollout of its AI Assurance Program and strategy document last year. Organizations navigating AI risk factors can use these resources to begin their planning and preparation.
In Q2 2024, HITRUST plans to release its first AI Insight Report for organizations using the HITRUST Risk-based (r2) Assessment, giving them an overview of their AI risk management posture. In the second half of 2024, the company aims to expand its control requirements and introduce a suite of accessible assessment options for organizations using the HITRUST Essentials (e1), Implemented (i1), and Risk-based (r2) Assessments, complemented by targeted training for its assessor network.
“We are immensely gratified by the strides we’ve made in the realm of AI. Recognizing the acute need within the market for pragmatic solutions to assess AI risks, we are confident that HITRUST is uniquely positioned to meet that need quickly, just as we’ve done in the past,” affirmed Booker.
Conclusion:
The establishment of the HITRUST AI Assurance Working Group marks a decisive move toward stronger security and greater trust in AI technologies across industries. By championing scalable security controls and transparency, HITRUST gives organizations a practical way to manage the complexities of AI risk. The initiative answers clear market demand with actionable measures to mitigate AI-related risks and build confidence in AI adoption.