An even bigger challenge is cost. If you were among those who thought a cloud migration would mean big savings, only to wince when the first monthly bills rolled in, then you can imagine what those bills would look like with the kind of cloud-based computing power that AI demands.
That said, smaller-scope AI projects can often be run in the cloud. A bank we worked with recently, for example, deployed a new, AI-powered, cloud-hosted call center and is perfectly happy with the results.
Why AI Requires GPUs
Bigger projects, though, will likely require on-premises GPU hardware. This can be acquired in one of two ways: as part of traditional data center technology, with clusters of GPUs included in the box alongside conventional CPUs, or as external GPU clusters that come complete with attached networking and storage (much like a hyperconverged infrastructure setup), essentially taking the place of a traditional data center.
Purchasing an external GPU cluster is perhaps the best way for an organization to acquire the most elite level of processing power, and it is best suited to those with large needs right now or ambitious projects on the horizon. These organizations have a few options for buying such clusters, which come in groups of eight GPUs: A business that needs more can tie two or more clusters together or buy a “superpod,” which includes 256 GPUs.
This is no small investment, but these clusters come with the high-performance networking and low latency that organizations need for significant AI-based projects, as well as the software stack necessary to run AI workloads.
Few organizations have a clear view of the best approach right from the beginning. Typically, the path forward emerges from a series of conversations with a partner that has seen many of these projects from start to finish.
CDW is here for you when you’re ready to have that conversation.