The Rise of ‘Shadow AI’: A Looming Threat to Data Security

TL;DR:

  • Poor data controls and the emergence of generative AI tools based on Large Language Models (LLMs) will lead to an increase in insider data breaches.
  • Organizations often remain blind to employees using unauthorized generative AI for tasks involving sensitive data.
  • Prohibitions on using generative AI are ineffective, as employees find ways to bypass restrictions.
  • Data security should focus on securing and monitoring data repositories, rather than relying on employee compliance.
  • Key steps for organizations: visibility over data repositories, classification of data assets, and implementation of monitoring and analytics capabilities.

Main AI News:

A surge in insider threats is on the horizon, driven by the emergence of a clandestine force known as ‘Shadow AI.’ Imperva, Inc., a data security company, cautions that the combination of inadequate data controls and new generative AI tools built on Large Language Models (LLMs) will drive a significant increase in insider data breaches in the coming year.

In response to the growing capability of LLM-powered chatbots, many organizations have imposed outright bans or strict limits on the data that can be shared with these tools. Yet 82 percent of organizations lack a comprehensive insider risk management strategy, leaving them blind to instances where employees use generative AI for tasks such as writing code or completing requests for proposals (RFPs), often granting unauthorized applications access to sensitive data repositories in the process.

Terry Ray, the Senior Vice President of Data Security GTM and Field CTO at Imperva, dismisses the notion of forbidding employees from utilizing generative AI, deeming it a futile endeavor. Drawing parallels with other technologies, Ray highlights how people inevitably find ways to circumvent such restrictions, leading to an incessant game of cat and mouse for security teams, all the while failing to enhance enterprise security substantially. “People don’t necessarily harbor malicious intent when causing a data breach,” Ray adds. “More often than not, they simply aim to improve their job efficiency. However, if companies turn a blind eye to LLMs accessing their backend code or sensitive data stores, it’s only a matter of time before calamity strikes.”

Imperva asserts that instead of relying on employees to refrain from using unauthorized tools, businesses must prioritize data security by ensuring they can answer crucial questions: Who is accessing the data? What data is being accessed? How is it being accessed? And from where? To this end, Imperva recommends a series of essential steps that every organization, regardless of size, should undertake:

  1. Visibility: Organizations must diligently identify and gain visibility into every data repository within their environment. This comprehensive approach ensures that valuable information stored in shadow databases does not go unnoticed or fall prey to abuse.
  2. Classification: Once organizations have compiled an inventory of their data stores, the subsequent phase involves classifying each data asset based on type, sensitivity, and value to the organization. Effective data classification enables organizations to grasp the significance of their data, identify potential risks, and determine the appropriate controls to mitigate those risks.
  3. Monitoring and Analytics: It is imperative for businesses to deploy robust data monitoring and analytics capabilities capable of detecting threats such as abnormal behavior, data exfiltration, privilege escalation, or suspicious account creation.
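As a rough illustration of the monitoring step, the sketch below flags users whose latest daily data-store access count deviates sharply from their own historical baseline. This is a minimal, hypothetical example (the `flag_anomalous_access` function, the event format, and the z-score threshold are all assumptions for illustration, not a description of any Imperva product); real deployments would draw on audit logs from each repository and far richer behavioral signals.

```python
from collections import defaultdict
from statistics import mean, stdev

def flag_anomalous_access(events, threshold=3.0):
    """Flag users whose latest daily access count deviates sharply
    from their own historical baseline (simple z-score check)."""
    history = defaultdict(list)  # user -> list of daily access counts
    for user, daily_count in events:
        history[user].append(daily_count)

    flagged = []
    for user, counts in history.items():
        if len(counts) < 3:
            continue  # not enough history to form a baseline
        baseline, latest = counts[:-1], counts[-1]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma == 0:
            if latest != mu:
                flagged.append(user)
        elif (latest - mu) / sigma > threshold:
            flagged.append(user)
    return flagged

# Hypothetical daily access counts per user: a sudden spike vs. steady use.
events = [
    ("alice", 10), ("alice", 12), ("alice", 11), ("alice", 250),  # spike
    ("bob", 8), ("bob", 9), ("bob", 10), ("bob", 9),              # normal
]
print(flag_anomalous_access(events))  # ['alice']
```

A per-user baseline like this is deliberately simple; it catches the "efficiency-minded employee suddenly bulk-exporting a data store" pattern the article describes, while more suspicious behaviors (privilege escalation, odd access times or locations) would need dedicated detections of their own.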

By adhering to these critical measures, organizations can fortify their defenses against the imminent surge in insider threats, safeguarding their data assets and minimizing the likelihood of devastating breaches. The era of Shadow AI demands a proactive approach that places data security at the forefront of business priorities.

Conclusion:

The advent of ‘Shadow AI’ driven by generative AI tools poses a significant threat to enterprise data security. The lack of insider risk management strategies and poor data controls leaves organizations vulnerable to insider data breaches, and prohibitions on using generative AI prove ineffective, as employees find workarounds. To mitigate these risks, businesses must prioritize data security by implementing comprehensive visibility, classification, and monitoring measures. By proactively addressing these challenges, organizations can safeguard their data assets and fortify their position in an increasingly data-driven market.

Source