Jun 20 2025

Preparing Your IT Infrastructure for AI

Asset management, observability, training and careful artificial intelligence use case selection should be top priorities for small businesses and startups, experts say.

As small businesses adopt artificial intelligence technology, they face fundamental challenges about what use cases are most valuable — and, crucially, what type of IT infrastructure best supports those use cases.

“A tremendous amount can be done with AI to empower employees, and you shouldn’t be sitting on the sidelines,” says Mike Hurt, vice president of SLED at ServiceNow. “But I also think a lot of vendors are confusing decision-makers on what they can do with AI and how they should do it.”

While there is no one-size-fits-all roadmap to prepare for AI adoption, experts have identified several key mile markers that can help SMBs and startups prepare their infrastructure to best capitalize on AI in the years ahead.


Core Pillars of AI Readiness

There are three primary considerations when evaluating your IT environment for AI readiness, says Public Technology Institute Executive Director Alan Shark.

1. AI for the Individual

“How do we use AI to improve an employee’s productivity and creativity, their ability to better communicate, write better reports, make better presentations and the like?” Shark asks. “The problem is that there is no one product that does it all.”

Software developers are flooding the market with AI-enabled products that solve very specific problems, which has left many businesses wondering where their money is best spent. There are a few potential ways to deal with this, Shark says.

“Set up AI productivity centers,” he says. “These are dedicated workstations, physical or remote, that let employees access AI tools without needing individual licenses.”

Such an experimental environment gives employees a secure way to work with the technology without committing to large-scale licensing.

Workstation configurations are another key consideration when using AI at the individual level, Shark says, especially as AI PCs become more popular.

“This may be like the old days, when you were issued certain configurations,” Shark says. “You had maybe three desktop or laptop configurations, maybe four. One would be the light user, one the medium user, one the heavy user and one the custom user for the most specialized cases.”


2. AI at the Businesswide Level

AI chatbots that can interface with consumers and employees in dozens of languages are one example of AI at the enterprise, or businesswide, level. Sound data policies are crucial to deploying these larger-scale AI implementations securely and responsibly.

“If you have very sound data policies that take into consideration privacy and security and access, you wouldn’t even perhaps need an AI policy because your existing data policy would govern it,” Shark says.

Before introducing AI that interfaces with larger data sets, businesses must assess their existing data and classify it in accordance with clear data policies. They must also evaluate how they will collect and classify future data to ensure ongoing, methodical data governance.


“While we look back at what we already have, we have to also set up parameters for better data collection moving forward,” Shark says. “We’ve got to figure out, can we better classify it?”

“Your output is only as good as your data,” Hurt says. “And I’m seeing customers having to spend less time readying their data to be able to take advantage of AI, because ServiceNow and other companies have simplified data ingestion to be able to train models for business use cases.”

Hurt underscores the importance of asset management and observability as businesses attempt to get a clear picture of their tech stacks to understand limitations.

“Once they’ve got all of their assets identified — their hardware and their software — they ultimately have a really good view of their entire business,” he says. “From there, it’s easier to find and act on what has the most value and takes the least amount of time to solve.”


3. Open vs. Closed AI Systems in Hybrid Settings

ChatGPT, Perplexity, Gemini and Copilot are all examples of what Shark calls “open systems,” which operate in the public domain.

“This is where you want to be incredibly careful to make sure employees know that there’s no personally identifiable information or anything harmful or outwardly discriminatory or biased,” Shark says.

Cloud-based large language models (LLMs) can be powerful tools, but they risk compromising sensitive data in the form of inputs and providing false or misleading information in outputs.

By contrast, closed AI systems are available only via specific domains, to specific users. Anything that involves private information or sensitive documentation is better served by a closed system accessible only to authorized users, where data inputs never become public.

Another way of framing this conversation is around public cloud versus on-premises or private cloud. Deciding what to host where is a crucial aspect of leveraging AI. According to Shark, “on-premises is coming back,” fueled by broadband constraints and the need for greater speed at the network’s edge, as well as a desire to keep some AI use cases more private and secure.

“People are starting to have second thoughts and saying, cloud is great for storage but some things are better on-premises,” Shark says.

This might include closed AI systems such as custom LLMs, but it can also include small language models and productivity use cases, which in the near future may be increasingly offloaded onto AI PCs.


Training Is Crucial Every Step of the Way

“AI is a profound change, and we need profound training and education at every level,” Shark says. “The training part is to help people understand how it can best be used.”

From a user interface perspective, some vendors, such as ServiceNow, say they can consolidate AI functionality into a single pane of glass to minimize the amount of user training required.

“You can use your own language models with ServiceNow, you can use ours or you can use other language models in an interface that is already very familiar to so many businesses,” Hurt says. “To have it all under one umbrella like this really helps from a user training perspective.”

Like Hurt, Shark believes that the upfront work — data management, asset management, use-case identification and training — is well worth the potential outcomes, and that sitting on the sidelines is dangerous.

“AI can be very powerful with the right data sets, in that it can identify patterns, trends and predictions within milliseconds, faster than a human ever could,” he says. “For the first time, these systems might be able to find the needle in the haystack.”
