2. AI at the Businesswide Level
AI chatbots that can interface with consumers and employees in dozens of languages are an example of AI at the enterprise, or businesswide, level. Data policies are crucial to deploying these larger-scale AI systems securely and responsibly.
“If you have very sound data policies that take into consideration privacy and security and access, you wouldn’t even perhaps need an AI policy because your existing data policy would govern it,” Shark says.
Before introducing AI that interfaces with larger data sets, businesses must assess their existing data and classify it in accordance with clear data policies. They must also evaluate how they will collect and classify future data to ensure ongoing, methodical data governance.
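One way to make that classification step concrete is to label records by sensitivity before any of them feed an AI system. The sketch below is illustrative only, assuming a simple three-tier scheme and keyword rules invented for this example rather than any vendor's schema.

```python
# Minimal sketch of policy-driven data classification with a
# hypothetical three-tier labeling scheme: public, internal, restricted.
from dataclasses import dataclass

RESTRICTED_MARKERS = {"ssn", "salary", "medical", "password"}
INTERNAL_MARKERS = {"employee_id", "project_code", "vendor_contract"}

@dataclass
class Record:
    name: str
    fields: list[str]

def classify(record: Record) -> str:
    """Assign a sensitivity label based on the fields a record contains."""
    fields = {f.lower() for f in record.fields}
    if fields & RESTRICTED_MARKERS:
        return "restricted"   # never leaves controlled systems
    if fields & INTERNAL_MARKERS:
        return "internal"     # usable for internal AI workloads only
    return "public"

if __name__ == "__main__":
    rec = Record("hr_payroll_2024", ["employee_id", "salary", "department"])
    print(rec.name, "->", classify(rec))   # hr_payroll_2024 -> restricted
```

Applying the same rules to new data as it arrives is what keeps governance methodical rather than a one-time cleanup.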
DIG DEEPER: How CIOs are taking critical steps towards artificial intelligence readiness.
“While we look back at what we already have, we have to also set up parameters for better data collection moving forward,” Shark says. “We’ve got to figure out, can we better classify it?”
“Your output is only as good as your data,” Hurt says. “And I’m seeing customers having to spend less time readying their data to be able to take advantage of AI, because ServiceNow and other companies have simplified data ingestion to be able to train models for business use cases.”
Hurt underscores the importance of asset management and observability as businesses attempt to get a clear picture of their tech stacks to understand limitations.
“Once they’ve got all of their assets identified — their hardware and their software — they ultimately have a really good view of their entire business,” he says. “From there, it’s easier to find and act on what has the most value and takes the least amount of time to solve.”
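That prioritization can be as simple as ranking an asset inventory by value against effort. The snippet below is a loose sketch under assumed field names and scores, not a real inventory tool.

```python
# Illustrative hardware/software asset inventory, ranked so the
# highest-value, lowest-effort items surface first.
assets = [
    {"name": "CRM database",     "type": "software", "value": 9, "effort": 3},
    {"name": "Edge gateway",     "type": "hardware", "value": 6, "effort": 5},
    {"name": "Ticketing system", "type": "software", "value": 8, "effort": 2},
]

# Rank by value-to-effort ratio: what has the most value and takes
# the least amount of time to solve.
for a in sorted(assets, key=lambda a: a["value"] / a["effort"], reverse=True):
    print(f'{a["name"]:<18} value/effort = {a["value"] / a["effort"]:.1f}')
```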
READ MORE: Data governance strategies are key to artificial intelligence success.
3. Open vs. Closed AI Systems in Hybrid Settings
ChatGPT, Perplexity, Gemini and Copilot are all examples of what Shark calls “open systems” that operate in the public domain.
“This is where you want to be incredibly careful to make sure employees know that there’s no personally identifiable information or anything harmful or outwardly discriminatory or biased,” Shark says.
Cloud-based large language models (LLMs) can be powerful tools, but they carry two risks: sensitive data can be exposed through the prompts employees enter, and outputs can contain false or misleading information.
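Before a prompt leaves the organization for an open system, it can be screened for obvious personally identifiable information. The following is a minimal sketch using simple regex patterns; the patterns shown (a U.S. SSN format and email addresses) are illustrative, and a real deployment would rely on a dedicated DLP or PII-detection service.

```python
# Minimal sketch: redact obvious PII before a prompt is sent to a
# public, cloud-hosted LLM. Patterns are illustrative only.
import re

PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact(prompt: str) -> str:
    """Replace detected PII with placeholder tokens before submission."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[{label.upper()} REDACTED]", prompt)
    return prompt

print(redact("Summarize the complaint from jane.doe@example.com, SSN 123-45-6789."))
# Summarize the complaint from [EMAIL REDACTED], SSN [SSN REDACTED].
```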
By contrast, closed AI systems are available only through specific domains, to specific users. Anything that involves private information or sensitive documentation is better served by a closed system accessible only to authorized users. In a closed system, data inputs never become public.
Another way of framing this conversation is around public cloud versus on-premises or private cloud. Deciding what to host where is a crucial aspect of leveraging AI. According to Shark, “on-premises is coming back,” fueled by broadband constraints and the need for greater speed at the network’s edge, as well as a desire to keep some AI use cases more private and secure.
“People are starting to have second thoughts and saying, cloud is great for storage but some things are better on-premises,” Shark says.
This might include closed AI systems such as custom LLMs, but it can also include small language models and productivity use cases, which in the near future may be increasingly offloaded onto AI PCs.
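In practice, the open-versus-closed decision often comes down to routing: sensitive workloads stay with the private, on-premises deployment, while everything else may use a public cloud model. The sketch below is a hedged illustration; the endpoint names and sensitivity labels are hypothetical and simply reuse the classification tiers from the earlier example.

```python
# Hedged sketch of routing AI requests by data sensitivity.
# Endpoint URLs are placeholders, not real services.
ON_PREM_ENDPOINT = "http://llm.internal.example:8080"   # closed system
CLOUD_ENDPOINT = "https://api.public-llm.example/v1"    # open system

def choose_endpoint(sensitivity: str) -> str:
    """Keep restricted and internal data on the private deployment."""
    if sensitivity in {"restricted", "internal"}:
        return ON_PREM_ENDPOINT
    return CLOUD_ENDPOINT

print(choose_endpoint("restricted"))  # -> http://llm.internal.example:8080
print(choose_endpoint("public"))      # -> https://api.public-llm.example/v1
```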
CHECK OUT: CDW’s 2025 Artificial Intelligence Research Report surveys 900 IT decision-makers.
Training Is Crucial Every Step of the Way
“AI is a profound change, and we need profound training and education at every level,” Shark says. “The training part is to help people understand how it can best be used.”
From a user interface perspective, some vendors, such as ServiceNow, say they can consolidate AI functionality into a single pane of glass to minimize the amount of user training required.
“You can use your own language models with ServiceNow, you can use ours or you can use other language models in an interface that is already very familiar to so many businesses,” Hurt says. “To have it all under one umbrella like this really helps from a user training perspective.”
Like Hurt, Shark believes that the upfront work — data management, asset management, use-case identification and training — is well worth the potential outcomes, and that sitting on the sidelines is dangerous.
“AI can be very powerful with the right data sets in that it can identify within milliseconds patterns, trends and predictions faster than a human ever could,” he says. “For the first time, these systems might be able to find the needle in the haystack.”
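A toy version of that needle-in-the-haystack capability is simple outlier detection over a metric series. The example below is an assumption-laden sketch using a basic standard-deviation test; production systems would use far richer models.

```python
# Illustrative "needle in the haystack": flag readings more than two
# standard deviations from the mean. Data and threshold are made up.
from statistics import mean, stdev

readings = [102, 99, 101, 98, 100, 97, 103, 250, 99, 101]  # one anomaly

mu, sigma = mean(readings), stdev(readings)
anomalies = [x for x in readings if abs(x - mu) > 2 * sigma]
print("Anomalies:", anomalies)  # Anomalies: [250]
```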