Jan 26 2026
Security

Microsoft Supports Solutions to Help Businesses Secure Their AI Work

AI security is challenging for organizations subject to cybersecurity and privacy laws or regulations.

Many businesses are grappling with how to use artificial intelligence securely. There are major concerns regarding sensitive information, such as customer data being fed inadvertently into AI environments, unauthorized access to AI environments enabling data breaches and tampering with AI environments to alter outputs, among other risks.

AI security is even more challenging for organizations that are subject to additional cybersecurity and privacy laws or regulations. And businesses facing budget and staffing shortages may find that their ability to leverage AI technology in support of their missions is limited.

One option is to use existing cybersecurity tools to safeguard their AI use. For example, Microsoft offers a variety of solutions that can be used in combination to improve AI security. Organizations that already have access to these tools can evaluate them and determine how they might address AI-related cybersecurity and privacy risks.

Let’s take a closer look at several Microsoft tools that could help businesses secure their AI environments and AI use.

Click the banner below to read the 2024 CDW Cybersecurity Research Report.

 

Microsoft Copilot Already Has Access to Multiple AI Solutions

Businesses that use Microsoft endpoints, applications and cloud instances should already have access to multiple Microsoft AI solutions, such as Copilot.

Microsoft frames Copilot as an “AI-powered assistant” that can help individual employees perform their daily tasks. For example, businesses can use Microsoft 365 Copilot by itself, and they can add role-based agents or create their own agents designed to help in particular roles. As part of Microsoft 365, Copilot is already subject to all of Microsoft 365’s cybersecurity and privacy policies and requirements.

Using Azure AI and ML to Customize and Deploy Models

Microsoft offers a range of extensible AI solutions under its Azure AI brand. For organizations that want to create AI-powered apps, Microsoft provides the Azure AI Foundry toolkit. There are currently over 11,000 AI models available for use with AI Foundry, most developed by third parties.

The same AI models that are used with Azure AI Foundry are also available within Azure Machine Learning workspaces. Here, businesses can customize and deploy machine learning models.

Ensuring the security of internally developed AI apps or models, especially with such a wide variety of starting models to choose from, is bound to be a much larger undertaking than securing the internal use of a Copilot agent. It will require the use of several other tools.

RELATED: Bridge information gaps with CDW’s technology support services.

Azure AI Content Safety Enforces Agency Policies

Microsoft’s Azure AI Content Safety serves several purposes, such as blocking content that violates policies. One of the service’s features, Prompt Shields, is of particular interest for AI environment security. Prompt Shields can monitor all prompts and other inputs to Azure-based large language models and carefully analyze them to identify attacks and any other attempts to circumvent the model’s protections.

For example, Prompt Shields could identify someone attempting to steal sensitive information contained in an LLM or cause it to produce output that violates the organization’s policies. This could include using inappropriate language or directing the model to ignore its existing security and safety policies.

Groundedness Detection, another service offered as part of Azure AI Content Safety, essentially looks for AI-generated output that is not solidly based on reliable data. In other words, it can identify and stop some AI hallucinations.

Click the banner below to read the new CDW Artificial Intelligence Research Report.

 

Microsoft Defender Monitors and Maintains Azure Environments

Microsoft provides Defender for Cloud (formerly Azure Security Center) to assist businesses with monitoring and maintaining the security of their Azure environments. This includes any Azure workloads being used to develop or host an organization’s AI apps. Defender for Cloud can help safeguard AI apps by ensuring the platforms under them are patched and configured to eliminate known security vulnerabilities. Defender for Cloud also can identify the latest cyberthreats and detect and stop attacks against those platforms and the AI apps running on them. These are all important elements of safeguarding a district’s AI environments and usage.

Microsoft offers other forms of Defender, including Defender for Cloud Apps (formerly Microsoft Cloud App Security), which identifies cloud app use and reports how risky each app is. This information can be useful in finding unauthorized uses of third-party AI apps and services. Defender for Cloud Apps is also capable of monitoring your employees’ Copilot use for suspicious activity.

Microsoft’s Defender for Endpoint and Defender for Servers provide additional security protection for other components of your AI environments outside of Azure, such as developer and user workstations and servers.

EXPLORE: What is data poisoning, and how can you protect against it?

Ensure Data Governance Is Within Your Purview

Microsoft Purview is a suite of tools and services that work together to help businesses with data governance, management and protection. Existing Purview components, such as Compliance Manager, have been enhanced to include assessments of compliance with certain AI regulations.

Components specific to AI have also been added to Purview. The Purview AI Hub can help you monitor and identify sensitive data in AI prompts, particularly with Copilot use. The AI Hub also monitors which files are accessed through Copilot to look for attempts to access sensitive data in files. The intent of the AI Hub is to ensure compliance with policies and requirements by identifying possible violations as they are occurring. 

 

gremlin/Getty Images
Close

New Workspace Modernization Research from CDW

See how IT leaders are tackling workspace modernization opportunities and challenges.