May 07 2026
Artificial Intelligence

Optimizing Financial Services IT Infrastructure for AI

Data governance, risk controls, observability and workforce training are critical priorities for financial institutions looking to scale artificial intelligence securely and compliantly.

As financial services organizations accelerate artificial intelligence adoption, IT leaders face two defining questions: Which use cases will deliver measurable business and customer value? And is their infrastructure prepared to support AI within a highly regulated environment?

“A tremendous amount can be done with AI to improve services and empower employees, and you shouldn’t be sitting on the sidelines,” says Mike Hurt, group vice president for public sector at ServiceNow. “But I also think a lot of vendors are confusing decision-makers on what they can do with AI and how they should do it.”

For banks, insurers, capital markets firms and fintech organizations, preparation is not just about innovation — it’s about secure, compliant and resilient execution. With sensitive financial data, strict regulatory oversight and growing cyber risk, AI readiness requires a disciplined, strategic approach.


Core Pillars of AI Readiness in Financial Services

There are three primary considerations when evaluating IT environments for AI readiness, says Alan Shark, executive director of the Public Technology Institute. Each of these carries heightened importance in financial services.

1. AI for the Individual: Enhancing Workforce Productivity While Controlling Risk

AI adoption in financial institutions often begins at the employee level. Generative AI tools can help staff draft reports, analyze financial data, summarize client interactions and automate compliance documentation.

“How do we use AI to improve an employee’s productivity and creativity, their ability to better communicate, write better reports, make better presentations and the like?” Shark asks.

However, in financial services, uncontrolled adoption introduces significant risk, from exposure of client data to regulatory violations.

Rather than allowing open experimentation, institutions should implement:

  • Controlled AI sandboxes for testing tools
  • Centralized AI governance and approval processes
  • Role-based access to AI capabilities
  • Monitoring of prompt and output activity
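The access and monitoring controls above can be sketched in miniature. This is an illustrative, hypothetical example, not a real product API: the role names, capability tiers and gateway class are assumptions made for the sketch.

```python
# Hypothetical sketch: role-based access to AI capabilities with
# prompt logging. Roles, capabilities and class names are illustrative.
from dataclasses import dataclass, field

# Map each role to the AI capabilities it is approved to use.
ROLE_CAPABILITIES = {
    "call_center": {"summarize"},
    "quant_analyst": {"summarize", "code_generation", "data_analysis"},
}

@dataclass
class AIGateway:
    audit_log: list = field(default_factory=list)

    def request(self, role: str, capability: str, prompt: str) -> bool:
        """Allow the request only if the role is approved for the capability;
        log every prompt, approved or not, for monitoring."""
        allowed = capability in ROLE_CAPABILITIES.get(role, set())
        self.audit_log.append((role, capability, prompt, allowed))
        return allowed

gateway = AIGateway()
print(gateway.request("call_center", "summarize", "Summarize this client call"))   # True
print(gateway.request("call_center", "code_generation", "Write a trading script")) # False
print(len(gateway.audit_log))  # 2
```

The point of the design is that every prompt passes through one governed chokepoint, so approval policy and output monitoring live in one place rather than on each employee's desktop.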

Standardized endpoint strategies also play a key role. AI-enabled devices can be tiered for different roles — from call center staff to quantitative analysts — ensuring compute power aligns with workload while maintaining security controls.

FIND OUT: Windows 11 can help with secure device management.

2. Enterprise AI: Data Governance, Compliance and Observability

AI becomes transformative in financial services when deployed across high-value use cases such as:

  • Fraud detection and prevention
  • Risk modeling and stress-testing
  • Customer service automation
  • Algorithmic trading insights
  • Regulatory reporting automation
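To make the fraud-detection use case concrete, here is a deliberately minimal statistical sketch: flag transactions that deviate sharply from an account's typical spend. A production system would use trained models; the function name, threshold and data are assumptions for illustration only.

```python
# Crude stand-in for a fraud model: flag transactions more than
# `threshold` standard deviations above the account's mean spend.
import statistics

def flag_anomalies(amounts: list[float], threshold: float = 2.0) -> list[int]:
    """Return indices of unusually large transactions."""
    mean = statistics.mean(amounts)
    stdev = statistics.pstdev(amounts)
    if stdev == 0:
        return []
    return [i for i, a in enumerate(amounts) if (a - mean) / stdev > threshold]

history = [42.0, 55.0, 39.0, 48.0, 51.0, 44.0, 5000.0]
print(flag_anomalies(history))  # [6] -- the $5,000 outlier
```

Real fraud systems score far richer features in real time, but the shape is the same: a model surfaces the outliers, and humans or downstream rules decide what to do with them.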

At this level, data governance is nonnegotiable.

“If you have very sound data policies that take into consideration privacy and security and access, you perhaps wouldn’t even need an AI policy because your existing data policy would govern it,” Shark says.  

For financial institutions, this translates to:

  • Strict data classification (PII, PCI, financial records)
  • Data lineage and auditability
  • Defined ownership and stewardship
  • Regulatory compliance alignment
  • Continuous lifecycle management
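A governance checklist like the one above can be enforced programmatically before data reaches a model. The sketch below is hypothetical: the tag names, record fields and helper function are assumptions, not a standard schema.

```python
# Illustrative data-governance gate: a record may feed an AI pipeline only
# if it has an owner and lineage reference, and any sensitive classification
# (PII/PCI) has been masked. Field names are assumptions for the sketch.
SENSITIVE_TAGS = {"PII", "PCI"}

def ready_for_model(record: dict) -> bool:
    has_stewardship = bool(record.get("owner")) and bool(record.get("lineage_id"))
    unmasked_sensitive = (
        record.get("classification") in SENSITIVE_TAGS
        and not record.get("masked", False)
    )
    return has_stewardship and not unmasked_sensitive

ok = {"classification": "PII", "owner": "risk-data-team",
      "lineage_id": "ln-001", "masked": True}
print(ready_for_model(ok))  # True: owned, traceable, and masked
print(ready_for_model({"classification": "PCI", "owner": "ops",
                       "lineage_id": "ln-002"}))  # False: unmasked PCI data
```

This mirrors Shark's point: if classification, ownership and lineage rules are already sound, the AI pipeline simply inherits them.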

Before deploying AI models, IT leaders must evaluate whether their data is ready to support them.

“Your output is only as good as your data,” Hurt says.

Equally critical is observability across complex financial IT environments.

Hurt emphasizes the importance of asset visibility:

“Once they’ve got all of their assets identified, their hardware and their software, they ultimately have a really good view of their entire enterprise.”

For financial services IT teams, that kind of asset visibility is the foundation. With it, organizations can prioritize AI use cases that deliver ROI without introducing operational or compliance risk.


3. Open vs. Closed AI Systems in Regulated Financial Environments

Public generative AI tools — such as ChatGPT, Google Gemini and Microsoft Copilot — represent what Shark calls “open systems.”

“This is where you want to be incredibly careful to make sure employees know that there’s no personally identifiable information or anything harmful or outwardly discriminatory or biased,” Shark says.

For financial institutions, the risks are amplified:

  • Exposure of client financial data
  • Regulatory violations
  • Model hallucinations impacting decisions
  • Reputational damage

As a result, many organizations are prioritizing closed AI systems, including:

  • Private large language models
  • Domain-specific financial AI models
  • On-premises or private cloud deployments

This shift reflects a broader reassessment of infrastructure strategy.

“People are starting to have second thoughts and saying, cloud is great for storage, but some things are better on-premises,” Shark says.

For financial services, the focus is on intentional workload placement:

  • Public cloud for scalable, nonsensitive workloads
  • Private cloud for controlled financial applications
  • On-premises for latency-sensitive trading systems and regulated data
  • Edge for real-time fraud detection and transaction processing
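The placement guidance above amounts to a routing decision. A minimal sketch, assuming three simplified workload traits (the trait names and target labels are illustrative, not a real orchestration API):

```python
# Hypothetical workload-placement routing, mirroring the list above:
# edge for real-time fraud checks, on-premises for latency-sensitive
# trading, private cloud for controlled financial apps, public cloud
# for everything nonsensitive.
def place_workload(sensitive: bool, latency_critical: bool,
                   realtime_edge: bool) -> str:
    if realtime_edge:
        return "edge"
    if latency_critical:
        return "on_premises"
    if sensitive:
        return "private_cloud"
    return "public_cloud"

print(place_workload(sensitive=False, latency_critical=False, realtime_edge=False))  # public_cloud
print(place_workload(sensitive=True, latency_critical=False, realtime_edge=False))   # private_cloud
print(place_workload(sensitive=True, latency_critical=True, realtime_edge=False))    # on_premises
```

The ordering of the checks encodes the priority: real-time and latency requirements dictate placement first, and sensitivity decides between private and public cloud for whatever remains.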

DISCOVER: See why hybrid has become the default choice for infrastructure and data storage.

Training Is Essential for Responsible AI Adoption

Technology alone does not drive transformation — especially in financial services, where compliance and trust are paramount.

“AI is a profound change, and we need profound training and education at every level,” Shark says.

Key training priorities include:

  • Executive-level AI and risk literacy
  • Regulatory and compliance training
  • Developer governance and model oversight
  • End-user education on responsible AI usage

Embedding AI into familiar tools can ease adoption while maintaining control.

“You can use your own language models with ServiceNow, you can use ours, or you can use other language models in an interface that is already very familiar to so many organizations,” Hurt says.

Ultimately, the foundational work — governance, observability, infrastructure modernization and workforce readiness — enables financial institutions to unlock AI safely.

“AI can be very powerful with the right data sets because it can identify — within milliseconds — patterns, trends and predictions faster than a human ever could,” Shark says.

For financial services IT leaders, the risk is not adopting AI too quickly — it’s falling behind competitors who are building secure, compliant AI foundations today.

Alex Cristi/Getty Images
