Shadow AI: How unapproved AI apps are compromising security, and what you can do about it

Security leaders and CISOs are discovering that a growing swarm of shadow AI apps has been compromising their networks, in some cases for over a year.

They’re not the tradecraft of typical attackers. They are the work of otherwise trustworthy employees building AI apps without IT or security department oversight or approval, apps designed to do everything from automating reports that were once created manually to using generative AI (genAI) to streamline marketing automation, visualization and advanced data analysis. Because they’re powered by the company’s proprietary data, shadow AI apps end up training public models on private data.

What’s shadow AI, and why is it growing?

The AI apps and tools created this way rarely, if ever, have guardrails in place. Shadow AI introduces significant risks, including accidental data breaches, compliance violations and reputational damage.

Shadow AI is the digital steroid that lets those using it get more detailed work done in less time, often beating deadlines. Entire departments have shadow AI apps they use to squeeze more productivity into fewer hours. “I see this every week,” Vineet Arora, CTO at WinWire, recently told VentureBeat. “Departments jump on unsanctioned AI solutions because the immediate benefits are too tempting to ignore.”

“We see 50 new AI apps a day, and we’ve already cataloged over 12,000,” said Itamar Golan, CEO and cofounder of Prompt Security, during a recent interview with VentureBeat. “Around 40% of these default to training on any data you feed them, meaning your intellectual property can become part of their models.”

The majority of employees creating shadow AI apps aren’t acting maliciously or trying to harm a company. They’re grappling with growing amounts of increasingly complex work, chronic time shortages, and tighter deadlines.

As Golan puts it, “It’s like doping in the Tour de France. People want an edge without realizing the long-term consequences.”

A virtual tsunami no one saw coming

“You can’t stop a tsunami, but you can build a boat,” Golan told VentureBeat. “Pretending AI doesn’t exist doesn’t protect you — it leaves you blindsided.” For example, Golan says, one security head of a New York financial firm believed fewer than 10 AI tools were in use. A 10-day audit uncovered 65 unauthorized solutions, most with no formal licensing.

Arora agreed, saying, “The data confirms that once employees have sanctioned AI pathways and clear policies, they no longer feel compelled to use random tools in stealth. That reduces both risk and friction.” Arora and Golan emphasized to VentureBeat how quickly the number of shadow AI apps they are discovering in their customers’ companies is increasing.

Further supporting their claims are the results of a recent Software AG survey, which found that 75% of knowledge workers already use AI tools and that 46% won’t give them up even if their employer prohibits them. The majority of shadow AI apps rely on OpenAI’s ChatGPT and Google Gemini.

Since 2023, ChatGPT has allowed users to create customized bots in minutes. VentureBeat learned that a typical manager responsible for sales, market, and pricing forecasting has, on average, 22 different customized bots in ChatGPT today.

It’s understandable how shadow AI is proliferating when 73.8% of ChatGPT accounts are non-corporate accounts that lack the security and privacy controls of more secure implementations. The percentage is even higher for Gemini (94.4%). In a Salesforce survey, more than half (55%) of global employees admitted to using unapproved AI tools at work.

“It’s not a single leap you can patch,” Golan explains. “It’s an ever-growing wave of features launched outside IT’s oversight.” The thousands of embedded AI features across mainstream SaaS products are being modified to train on, store and leak corporate data without anyone in IT or security knowing.

Shadow AI is slowly dismantling businesses’ security perimeters, and many organizations don’t notice because they’re blind to the groundswell of shadow AI use inside their own walls.

By Louis Columbus