TL;DR:
- Healthcare sector closely follows the tech industry in employee AI usage but lacks security investment.
- A global survey of 1,200 IT and security leaders reveals concerning statistics about AI tool management.
- Healthcare organizations are the second-highest users of generative AI tools (73%), behind the technology industry (85%).
- Fewer than half of healthcare organizations have monitoring technology (44%) or governance policies (42%) for AI.
- Healthcare IT decision makers express high confidence (82%) in their ability to protect against AI threats.
- Concerns in healthcare focus on the exposure of personally identifiable information (47%) and trade secrets/IP (40%).
- Globally, 74% are positive about investing in generative AI security, but the UK’s response is less enthusiastic (49%).
Main AI News:
In AI adoption, the healthcare sector finds itself in a curious position: it closely trails the tech industry in employee engagement with AI tools. This finding comes from a global survey conducted by Censuswide, which also highlights an unsettling reality: the healthcare sector invests the least in safeguarding AI applications.
The survey, commissioned by ExtraHop, canvassed 1,200 IT and security leaders worldwide. It paints a concerning picture of how organizations currently manage and govern generative AI tools, as well as their future intentions in this regard.
Within the healthcare sector, 73% of respondents affirmed that their employees frequently or occasionally use generative AI tools and large language models (LLMs). This makes healthcare the second most enthusiastic adopter of such tools, trailing only the technology industry, where 85% of respondents reported employee AI use. The government sector sits at the other end, with a comparatively modest 55% utilization rate.
However, the widespread adoption of AI applications and tools in healthcare is not matched by a commensurate commitment to managing them. Fewer than half of healthcare organizations have either monitoring technology (44%) or governance policies (42%) in place, leaving a significant gap in securing this transformative technology.
Globally, IT decision makers in the healthcare sector express a high degree of confidence in their ability to guard against AI threats, with 82% in agreement. Meanwhile, 39% provide employees with training on acceptable AI usage, and 30% have opted for an outright ban. In stark contrast, confidence in the UK is lower: 43% doubt their ability to fend off AI threats.
When asked about their foremost concerns regarding generative AI, healthcare leaders cite the potential exposure of personally identifiable information (47%), followed closely by the exposure of trade secrets or intellectual property (40%). Only 1% reported no concerns at all.
Globally, sentiment toward investing in generative AI security measures is overwhelmingly positive, with 74% endorsing the idea. The UK's response, however, is tepid: fewer than half of respondents (49%) agree. The UK also has the lowest adoption rate, with nearly half of respondents reporting that employees rarely (35%) or never (11%) use AI tools.
Conclusion:
The healthcare sector’s robust adoption of AI tools juxtaposed with insufficient security investments underscores a critical gap. While healthcare IT decision makers express confidence in AI threat protection, concerns about data exposure persist. The global sentiment toward investing in AI security appears positive, but the UK’s more reserved response suggests a need for strategic considerations in this evolving market.