
How Enterprises Can Build a GenAI Strategy for Long- and Short-Term Business Goals

Organizations must balance quick wins with long-term strategy, starting with high-impact use cases and off-the-shelf tools and then integrating AI into core business processes.

Enterprises experimenting with generative AI face a challenge in moving from pilots to measurable, enterprisewide impact as they look to balance immediate wins with a long-term roadmap.

In the short term, IT and business leaders are advised to start small by automating high-impact internal tasks — such as summarizing documents, drafting emails or enhancing customer support — using pretrained large language models.

Off-the-shelf tools like Microsoft Copilot, Adobe Firefly and Salesforce Einstein can deliver quick value without the heavy lift of custom development.
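
For teams that prefer to wire a pretrained model directly into an internal workflow rather than adopt a packaged product, the call itself can be small. The sketch below summarizes an internal document through a hosted large language model; the OpenAI Python SDK, the gpt-4o-mini model name and the prompt wording are assumptions for illustration, and any comparable hosted model or platform SDK could stand in.

```python
# Minimal sketch: summarizing an internal document with a hosted, pretrained LLM.
# The SDK and model name below are assumptions; swap in whichever hosted model
# your organization has approved.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def summarize(document: str) -> str:
    """Return a short executive summary of one internal document."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "Summarize internal documents in five bullet points."},
            {"role": "user", "content": document},
        ],
        temperature=0.2,  # keep summaries consistent from run to run
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    with open("meeting_notes.txt") as f:
        print(summarize(f.read()))
```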

At the same time, organizations must set responsible-use policies early, including security guardrails and risk thresholds, to safely manage experimentation.
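
Those responsible-use policies eventually need enforcement points in code as well as on paper. The sketch below shows one simplistic way a team might screen prompts for obviously sensitive content before anything reaches an external model; the patterns, names and policy here are illustrative assumptions, not a complete data-loss-prevention control.

```python
# Illustrative guardrail: refuse to send prompts containing obvious sensitive
# patterns to an external model. The patterns and policy are assumptions made
# for illustration, not a complete data-loss-prevention control.
import re

BLOCKED_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api_key": re.compile(r"\b(?:sk-|AKIA)[A-Za-z0-9_-]{16,}"),
}


def policy_violations(prompt: str) -> list[str]:
    """Return the names of any blocked patterns found in the prompt."""
    return [name for name, pattern in BLOCKED_PATTERNS.items() if pattern.search(prompt)]


def send_if_allowed(prompt: str) -> None:
    violations = policy_violations(prompt)
    if violations:
        raise ValueError(f"Prompt blocked by responsible-use policy: {violations}")
    # ...otherwise pass the prompt to the approved model client and log the request.
```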


For sustained success, enterprises need to lay a foundation for scale. That includes investing in robust data and modeling infrastructure through platforms like Amazon Web Services Bedrock, Microsoft Azure OpenAI or NVIDIA DGX, supported by partners such as CDW.

Long-term integration means connecting GenAI to core systems such as enterprise resource planning (ERP) and customer relationship management (CRM) to transform supply chain forecasting, product development and other processes.
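
The shape of that integration is simple even when the systems themselves are not: pull structured data out of a core platform, then hand it to a model for narrative analysis. In the sketch below, the ERP endpoint, its field names and the ask_model callable are hypothetical placeholders; a real deployment would go through the ERP or CRM vendor's own APIs or integration middleware.

```python
# Sketch of wiring GenAI into a core system: fetch structured ERP forecast data,
# then ask a model for a narrative supply chain briefing. The endpoint, fields
# and the ask_model callable are hypothetical placeholders for illustration.
import json
from typing import Callable

import requests

ERP_FORECAST_URL = "https://erp.example.internal/api/forecast"  # hypothetical endpoint


def fetch_forecast_rows() -> list[dict]:
    """Pull next quarter's demand forecast rows from the ERP system."""
    response = requests.get(ERP_FORECAST_URL, timeout=30)
    response.raise_for_status()
    return response.json()["rows"]


def supply_chain_briefing(ask_model: Callable[[str], str]) -> str:
    """Turn raw forecast rows into a short narrative briefing via a hosted LLM."""
    rows = fetch_forecast_rows()
    prompt = (
        "Identify the three largest demand shifts and any inventory risks in this "
        "forecast data:\n" + json.dumps(rows, indent=2)
    )
    return ask_model(prompt)  # e.g., the summarize() helper sketched earlier
```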

Governance will also be essential, with cross-functional teams spanning IT, compliance, legal and business units to ensure transparency, accountability and ethical adoption.

Short-Term Best Practices When Navigating AI

Phil Carter, IDC general manager and group vice president for AI, data and automation research, recommends identifying high-impact use cases based on three broad categories.

These include personal productivity (such as Copilot and other AI assistants), functional use cases (such as contact centers, IT or software development) and industry-specific applications such as drug discovery in life sciences.

Once those categories are clear, Carter stresses the importance of balancing business value, cost and risk, noting traditional ROI analysis is no longer enough.

“With GenAI, the added dimension is risk — and that covers everything from governance and compliance to the potential for things to go wrong, as we’ve seen in early pilots,” he says.

Safeguards are critical, especially for organizations deploying off-the-shelf tools such as Copilot or Einstein.


Carter says many companies began with AI-specific policies, but the more mature players have gone further.

“There is a clear correlation between AI pioneers and their focus on governance,” he explains. “The more mature, the more the levels of governance are well thought through, documented, and practiced.”

The best practice, Carter notes, is to embed governance into the process from the very beginning — use case selection, team design and proof-of-concept development — so that risk awareness becomes part of the business model.

“Those people need to come in, not with the mindset of ‘We can’t do this,’ but with ‘We could do it, and here are the risks,’” he says. “Everyone must be very well versed on the risks and how to mitigate them.”

By starting with carefully selected use cases, aligning risk tolerance with business goals, and embedding governance early, enterprises can capture short-term wins while laying the groundwork for long-term GenAI innovation.


Reviewing AI Long-Term Best Practices

As their capabilities with GenAI evolve, enterprises must shift from short-term pilots to long-term strategies that deliver sustained transformation.

Carter says the foundation of that future lies in what he calls an “AI-ready tech stack.”

“You need an enterprisewide AI strategy with the right prioritized use cases, an AI-ready workforce and a technology stack that can scale,” he explains.

That stack has three key sub-elements: it is agentic-ready, its data is clean and well organized, and its infrastructure avoids bottlenecks at the network, server and storage levels.
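
In practice, "agentic-ready" boils down to a control loop: a model proposes an action, the platform executes it against approved tools and data, and the result feeds the next step. The bare-bones sketch below shows only that pattern; propose_action is a placeholder for a real model call, and the tools, action format and step limit are assumptions for illustration.

```python
# Bare-bones agentic loop: a model proposes actions, the platform runs them
# against approved tools and feeds results back. propose_action() stands in for
# a real model call; the tools and JSON action format are illustrative only.
import json


def get_inventory(sku: str) -> str:
    return f"SKU {sku}: 1,240 units on hand"   # stand-in for an ERP lookup


def create_ticket(summary: str) -> str:
    return f"Ticket created: {summary}"        # stand-in for an ITSM API call


APPROVED_TOOLS = {"get_inventory": get_inventory, "create_ticket": create_ticket}


def propose_action(goal: str, history: list[str]) -> str:
    """Placeholder for a model call that returns a JSON tool request or a final answer."""
    if not history:
        return json.dumps({"tool": "get_inventory", "arg": "A-1001"})
    return json.dumps({"final": f"{goal}: {history[-1]}"})


def run_agent(goal: str, max_steps: int = 5) -> str:
    history: list[str] = []
    for _ in range(max_steps):
        action = json.loads(propose_action(goal, history))
        if "final" in action:
            return action["final"]
        result = APPROVED_TOOLS[action["tool"]](action["arg"])  # only approved tools run
        history.append(result)
    return "Stopped: step limit reached"


print(run_agent("Check stock for SKU A-1001"))
```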

Carter describes the ideal future state as an “AI factory” — a repeatable system for producing AI use cases at scale.

“You want it to be repeatable, but also to drive acceleration using agentic capabilities,” he says. “It’s about mass-producing models, insights and systems.”


Integration into core business systems like ERP and CRM is another challenge. According to Carter, CIOs are weighing whether to stick with traditional vendors or rethink enterprise software through an AI-first lens.

“You don’t want to be putting together a $100 million ERP transformation and be worried that in three years, you will realize it could have been done in half the time and cost with AI agents,” he says.

Finally, Carter stresses that governance is central and must be cross-functional.

“AI governance should not sit in IT,” he says. “It needs to sit under the business — ideally under the CEO — with contributions from legal, compliance, HR, security and IT.”

By investing in scalable infrastructure, embedding GenAI into business processes and treating governance as an enterprisewide mandate, Carter says, organizations can build a strategy that balances innovation with accountability.

“It must be owned by every employee, just like information security became a shared responsibility,” he says.
