Microsoft Copilot Already Has Access to Multiple AI Solutions
Businesses that use Microsoft endpoints, applications and cloud instances should already have access to multiple Microsoft AI solutions, such as Copilot.
Microsoft frames Copilot as an “AI-powered assistant” that can help individual employees perform their daily tasks. For example, businesses can use Microsoft 365 Copilot by itself, and they can add role-based agents or create their own agents designed to help in particular roles. As part of Microsoft 365, Copilot is already subject to all of Microsoft 365’s cybersecurity and privacy policies and requirements.
Using Azure AI and ML to Customize and Deploy Models
Microsoft offers a range of extensible AI solutions under its Azure AI brand. For organizations that want to create AI-powered apps, Microsoft provides the Azure AI Foundry toolkit. There are currently over 11,000 AI models available for use with AI Foundry, most developed by third parties.
The same AI models that are used with Azure AI Foundry are also available within Azure Machine Learning workspaces. Here, businesses can customize and deploy machine learning models.
Ensuring the security of internally developed AI apps or models, especially with such a wide variety of starting models to choose from, is bound to be a much larger undertaking than securing the internal use of a Copilot agent. It will require the use of several other tools.
RELATED: Bridge information gaps with CDW’s technology support services.
Azure AI Content Safety Enforces Agency Policies
Microsoft’s Azure AI Content Safety serves several purposes, such as blocking content that violates policies. One of the service’s features, Prompt Shields, is of particular interest for AI environment security. Prompt Shields can monitor all prompts and other inputs to Azure-based large language models and carefully analyze them to identify attacks and any other attempts to circumvent the model’s protections.
For example, Prompt Shields could identify someone attempting to steal sensitive information contained in an LLM or cause it to produce output that violates the organization’s policies. This could include using inappropriate language or directing the model to ignore its existing security and safety policies.
Groundedness Detection, another service offered as part of Azure AI Content Safety, essentially looks for AI-generated output that is not solidly based on reliable data. In other words, it can identify and stop some AI hallucinations.
Click the banner below to read the new CDW Artificial Intelligence Research Report.
