Apr 18 2025
Security

What Is Shadow AI, and How Does It Impact AI Compliance?

Ensuring AI compliance is a continuous challenge, especially with the rise of shadow AI. But a few assurance checks can help.

Shadow AI is the use of artificial intelligence tools and systems deployed within an organization without the knowledge, approval or oversight of the IT or security departments. Think of an employee in a marketing department using ChatGPT without an enterprise license, or a software developer using their own Anthropic account. These tools are often adopted by individual employees or teams to boost productivity, but they can create security, compliance and data governance risks if left unmanaged.

FIND OUT: What are some key security considerations for embracing AI?

For Alexander Johnston, senior research analyst at 451 Research, not having adequate oversight of the AI capabilities being used at an organization may significantly impact data privacy. “While most enterprise services provide assurances as to where data is stored and how it is used, this is not always the case with the free online services that many users engage with,” Johnston says.

He suggests organizations bind AI use to specific processes where quality assurance and security are adequately considered.


What Is AI Compliance?

“AI compliance is the same as other forms of regulatory compliance,” says Forrester Senior Analyst Alla Valente. It ensures that AI systems meet key regulations.

Companies operating in or doing business with regions like the EU must comply with regulations such as the EU AI Act, while in the U.S., a patchwork of state-level laws in places like Colorado and New York apply. “Companies that are operating in those states or have customers or employees in those states have to make sure that the AI they’re leveraging is compliant with those laws and regulations,” Valente says.

Johnston says that the “compliance umbrella also commonly accounts for ethical guidelines.” And businesses need to honor those too to avoid “legal and reputational risks.”

RELATED: How to manage compliance and risk in a modern landscape.

Why Is AI Compliance Important?

Businesses that fail to meet regulatory or internal requirements face serious financial and reputational consequences. “If you’re not compliant, there are consequences that involve fines, fees and penalties,” Valente says.

Not to mention lawsuits from state attorneys general, external audits and unfavorable media coverage. “Especially if it’s egregious noncompliance, you make headlines — and those are not favorable headlines,” Valente adds.

As AI becomes a growing focus of regulation, companies must extend their compliance frameworks to account for emerging laws at the state, federal and international levels.


How Can Businesses Reduce These Risks and Succeed With AI?

Ensuring AI compliance requires collaboration across departments. Legal teams verify that new initiatives don’t violate applicable laws, while risk, compliance and security professionals help assess technical safeguards.

The more that AI tools are embedded in day-to-day operations, the more businesses need to set clear policies for internal and third-party use. “Companies are now starting to harmonize their existing data use policies with specific AI use cases,” Valente says.

Organizations need to establish guardrails, much like those IT leaders set up for bring-your-own-device initiatives that let employees use personal tech for work. These AI policies should specify which generative AI tools employees may use and what kinds of data they may share with those tools, to avoid accidental disclosure of confidential or sensitive information.
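One way to make such a policy enforceable rather than aspirational is to encode the allowlist in software, so that internal tooling can check a request before data ever leaves the organization. The sketch below is a hypothetical illustration of that idea; the tool names and data classifications are assumptions for the example, not drawn from any product or policy mentioned in this article.

```python
# Hypothetical AI acceptable-use check: an allowlist mapping each
# approved generative AI tool to the data classifications it is
# cleared to handle. Tool names and tiers are illustrative only.
APPROVED_TOOLS = {
    "enterprise-chatgpt": {"public", "internal"},
    "internal-llm": {"public", "internal", "confidential"},
}

def is_use_permitted(tool: str, data_classification: str) -> bool:
    """Return True only if the tool is approved AND cleared for this data tier."""
    allowed = APPROVED_TOOLS.get(tool)
    return allowed is not None and data_classification in allowed

# A personal account is not on the allowlist at all, and even an
# approved tool cannot receive data above its cleared classification.
print(is_use_permitted("personal-chatgpt", "internal"))        # False
print(is_use_permitted("enterprise-chatgpt", "confidential"))  # False
print(is_use_permitted("internal-llm", "confidential"))        # True
```

A real deployment would draw the allowlist from a governed source (such as the IT-approved software catalog) instead of hardcoding it, but the two-part check — approved tool, permitted data tier — mirrors the policy structure the article describes.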

Businesses must also share such acceptable-use policies with their vendors, suppliers and partners. “If they are using generative AI to support you, chances are they are also somehow processing your data,” Valente says.


Contracts are now being updated with AI-specific language to define what vendors can and cannot do with generative AI, and what liabilities apply if policies are breached. “If X and Y happen with generative AI outside of this requirement, then there’s a certain liability,” she explains.

Procurement teams serve as another critical gatekeeper in managing AI compliance. “There’s a series of steps or compliance checks they have to do,” Valente says. These steps often include validating whether a vendor is on a sanctions list, confirming financial viability, and ensuring legal and regulatory alignment.

And as with any security effort, it’s a moving target. “Even if an organization takes steps to try to identify and mitigate shadow AI inside of its organizations, this is not going to be a one-and-done endeavor,” she says.

The rapid pace of innovation in AI means new tools, risks and edge cases will continue to emerge. That’s why the “maturity of these types of effort must match the speed of innovation happening within AI,” Valente says.

UP NEXT: Agentic AI is revolutionizing business and everyday life.
