Shadow AI: A Growing Concern for IT Leaders

The increasing use of unapproved AI tools, known as shadow AI, is causing significant concern among IT leaders, according to a report released Tuesday. This practice involves employees using AI technologies without oversight from their organization’s IT departments, creating new risks in privacy, security, and compliance.

Widespread Worry About Shadow AI Risks

A survey of 200 IT directors and executives from U.S. organizations with over 1,000 employees revealed that nearly half (46%) of respondents are “extremely worried” about shadow AI. An overwhelming 90% expressed concerns about privacy and security vulnerabilities linked to these tools.

Krishna Subramanian, co-founder of unstructured data management company Komprise, explained the scale of the problem: “Nearly 80% of IT leaders reported negative outcomes, such as sensitive data leaks, inaccuracies, and legal risks from using generative AI tools. Alarmingly, 13% noted financial or reputational harm to their organizations.”

Unlike shadow IT, which often involves unauthorized software purchases, shadow AI poses risks that stem from employees using freely accessible tools like ChatGPT or Claude AI. These tools, Subramanian warned, may inadvertently expose sensitive company information.

Security and Compliance Challenges

James McQuiggan of KnowBe4 described shadow AI as a growing threat that creates blind spots in data protection policies. “Employees use these tools to process sensitive data without proper security checks,” he said. Melissa Ruzzi, director of AI at AppOmni, highlighted further concerns: “Some AI applications may not meet regulatory requirements or maintain adequate data storage security, increasing the risk of exposure.”

Adding to the complexity, Krishna Vishnubhotla of Zimperium noted that shadow AI often operates outside organizational control, such as on employees’ personal devices. This broadens the scope of potential data breaches and regulatory violations, with financial damages that could reach billions of dollars.

Rapid Adoption Compounds Risks

Shadow AI’s appeal lies in its accessibility and ease of use, explained Nicole Carignan of Darktrace. She predicts an increase in the use of generative AI tools within enterprises, which could exacerbate compliance and data loss prevention challenges.

Subramanian pointed out that the rapid pace of AI development makes it difficult for organizations to monitor and mitigate the associated risks. “Managers may overlook these risks because their teams appear more productive,” she said.

The low learning curve and versatility of generative AI services make them especially attractive to employees, added Satyam Sinha of Acuvity. However, those same traits carry significant security implications.

Striking a Balance Between Innovation and Security

Banning AI tools outright often backfires, leading to stealth usage, noted Kris Bondi of Mimoto. Instead, organizations should educate employees, establish clear protocols, and offer approved AI solutions. “Explaining why some tools are sanctioned increases compliance,” Bondi said.

Proactive measures such as employee training, AI usage monitoring, and robust security tools are essential for minimizing risks, according to Ruzzi. Companies should also integrate AI governance into their broader security programs, McQuiggan advised.
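For IT teams wondering what the AI usage monitoring Ruzzi recommends might look like in practice, the sketch below scans a web proxy log for requests to well-known generative AI services and tallies them per user. It is a minimal illustration, not a product: the CSV log format, the AI_DOMAINS watchlist, and the proxy.csv filename are all assumptions made for this example.

    # Illustrative sketch: flag outbound requests to known generative AI
    # domains in a web proxy log export. The log format (CSV with 'user'
    # and 'host' columns) and the domain list are assumptions for the
    # example, not an authoritative inventory of AI services.
    import csv
    from collections import Counter

    # Hypothetical watchlist of consumer AI endpoints an IT team might track.
    AI_DOMAINS = {
        "chat.openai.com",
        "chatgpt.com",
        "claude.ai",
        "gemini.google.com",
    }

    def audit_proxy_log(path: str) -> Counter:
        """Count requests per user to watched AI domains.

        Adjust the column names to match your proxy's actual export format.
        """
        hits = Counter()
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                host = (row.get("host") or "").lower()
                # Match the domain itself or any subdomain of it.
                if any(host == d or host.endswith("." + d) for d in AI_DOMAINS):
                    hits[row.get("user") or "unknown"] += 1
        return hits

    if __name__ == "__main__":
        for user, count in audit_proxy_log("proxy.csv").most_common():
            print(f"{user}: {count} AI-service requests")

Even a simple report like this gives security teams a factual starting point for the education and approved-alternatives conversations Bondi describes, rather than relying on bans alone.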

“Shadow AI is only the beginning,” McQuiggan warned. “The sooner organizations act, the better equipped they will be to handle future challenges.”