Shadow AI refers to artificial intelligence tools and systems used within an organization without the knowledge, approval or oversight of the IT or security department. Think of an employee in a marketing department using ChatGPT without an enterprise license, or a software developer using a personal Anthropic account. These tools are often adopted by individual employees or teams to boost productivity, but they can create security, compliance and data governance risks if left unmanaged.
FIND OUT: What are some key security considerations for embracing AI?
For Alexander Johnston, senior research analyst at 451 Research, not having adequate oversight of the AI capabilities being used at an organization may significantly impact data privacy. “While most enterprise services provide assurances as to where data is stored and how it is used, this is not always the case with the free online services that many users engage with,” Johnston says.
He suggests that organizations bind AI use to specific processes where quality assurance and security are adequately considered.
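As a rough illustration, here is a minimal sketch of one common way security teams surface shadow AI: scanning web proxy logs for traffic to known generative AI services. The log format, the "host" column name and the domain list below are assumptions made for this example, not a vetted inventory.

```python
# Minimal sketch: flag traffic to known generative AI services in a
# CSV web proxy log. The domain list and the "host" column are
# illustrative assumptions; a real program would maintain a curated,
# regularly updated inventory.
import csv
from collections import Counter

AI_DOMAINS = {
    "chat.openai.com", "api.openai.com",
    "claude.ai", "api.anthropic.com",
    "gemini.google.com",
}

def flag_shadow_ai(log_path: str) -> Counter:
    """Count requests per AI domain in a proxy log with a 'host' column."""
    hits = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            host = row.get("host", "").lower()
            if host in AI_DOMAINS:
                hits[host] += 1
    return hits

if __name__ == "__main__":
    for domain, count in flag_shadow_ai("proxy_log.csv").most_common():
        print(f"{domain}: {count} requests")
```

A report like this only shows where to look; binding AI use to approved processes, as Johnston suggests, is what actually closes the gap.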
What Is AI Compliance?
“AI compliance is the same as other forms of regulatory compliance,” says Forrester Senior Analyst Alla Valente. It ensures that AI systems comply with the laws and regulations that apply to them.
Companies operating in or doing business with regions like the EU must comply with regulations such as the EU AI Act, while in the U.S., a patchwork of state-level laws in places like Colorado and New York apply. “Companies that are operating in those states or have customers or employees in those states have to make sure that the AI they’re leveraging is compliant with those laws and regulations,” Valente says.
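To make that patchwork concrete, the toy lookup below maps operating jurisdictions to AI rules a company might need to review. The mapping is a deliberately simplified assumption for illustration, not legal guidance.

```python
# Illustrative only: a simplified map from where a company operates to
# AI rules it may need to review. Real obligations depend on legal
# counsel, the use case and how each law defines covered AI systems.
APPLICABLE_RULES = {
    "EU": ["EU AI Act"],
    "US-CO": ["Colorado AI Act (SB 24-205)"],
    "US-NYC": ["NYC Local Law 144 (automated employment decision tools)"],
}

def rules_to_review(jurisdictions: list[str]) -> list[str]:
    """Collect the rules associated with each operating jurisdiction."""
    found: set[str] = set()
    for j in jurisdictions:
        found.update(APPLICABLE_RULES.get(j, []))
    return sorted(found)

print(rules_to_review(["EU", "US-CO"]))
# ['Colorado AI Act (SB 24-205)', 'EU AI Act']
```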
Johnston says that the “compliance umbrella also commonly accounts for ethical guidelines.” Businesses need to honor those guidelines as well to avoid “legal and reputational risks.”
RELATED: How to manage compliance and risk in a modern landscape.
Why Is AI Compliance Important?
Businesses that fail to meet regulatory or internal requirements face serious financial and reputational fallout. “If you’re not compliant, there are consequences that involve fines, fees, penalties,” Valente says.
Not to mention lawsuits from state attorneys general, external audits and unfavorable media coverage. “Especially if it’s egregious noncompliance, you make headlines — and those are not favorable headlines,” Valente adds.
As AI becomes a growing focus of regulation, companies must extend their compliance frameworks to account for emerging laws at the state, federal and international levels.
Contracts are now being updated with AI-specific language to define what vendors can and cannot do with generative AI, and what liabilities apply if policies are breached. “If X and Y happen with generative AI outside of this requirement, then there’s a certain liability,” Valente explains.
Procurement teams are another critical gatekeeper in managing AI compliance. “There’s a series of steps or compliance checks they have to do,” Valente says. These steps often include validating whether a vendor is on a sanctions list, confirming financial viability, and ensuring legal and regulatory alignment.
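Here is a minimal sketch of what such a sequence of checks might look like, assuming a simplified vendor record; the fields, the profitability threshold and the required-regulation set are hypothetical placeholders.

```python
# Hypothetical sketch of procurement-style compliance checks: sanctions
# screening, a crude financial-viability test and a regulatory
# attestation check. All fields and thresholds are illustrative.
from dataclasses import dataclass, field

@dataclass
class Vendor:
    name: str
    on_sanctions_list: bool
    years_profitable: int
    attested_regulations: set[str] = field(default_factory=set)

REQUIRED_REGULATIONS = {"EU AI Act"}  # assumed requirement for this example

def compliance_checks(vendor: Vendor) -> list[str]:
    """Return failed checks; an empty list means the vendor passes."""
    failures = []
    if vendor.on_sanctions_list:
        failures.append("vendor appears on a sanctions list")
    if vendor.years_profitable < 2:  # stand-in for financial viability
        failures.append("financial viability not established")
    missing = REQUIRED_REGULATIONS - vendor.attested_regulations
    if missing:
        failures.append("no attestation for: " + ", ".join(sorted(missing)))
    return failures

vendor = Vendor("ExampleAI", False, 5, {"EU AI Act"})
print(compliance_checks(vendor) or "all checks passed")
```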
UP NEXT: Agentic AI is revolutionizing business and everyday life.
And as with any security effort, it’s a moving target. “Even if an organization takes steps to try to identify and mitigate shadow AI inside of its organizations, this is not going to be a one-and-done endeavor,” she says.
The rapid pace of innovation in AI means new tools, risks and edge cases will continue to emerge. That’s why the “maturity of these types of effort must match the speed of innovation happening within AI,” Valente says.
Editor's note: This article was originally published in April 2025 and has been updated.