May 19 2025
Artificial Intelligence

How Small Language Models Drive Business Efficiency

Small language models require fewer resources than their larger counterparts and are more sustainable. Here’s why.

Small language models, known as SLMs, create intriguing possibilities for business leaders looking to take advantage of artificial intelligence and machine learning. 

SLMs are miniature versions of the large language models that spawned ChatGPT and other flavors of generative AI. For example, if an LLM were a desktop workstation, complete with monitor, keyboard, CPU and mouse, an SLM would be comparable to a smartwatch: The watch has a sliver of the computing muscle of the PC, but you wouldn’t strap a PC to your wrist to monitor your heart rate while jogging.

IT leaders are finding that SLMs have myriad benefits. They can power chatbots and virtual assistants to handle customer inquiries without expensive infrastructure. SLMs can automate internal tasks such as drafting emails or summarizing documents, freeing up employee time. Teams can also fine-tune SLMs on their specific data to improve customer engagement without massive AI budgets.

Overall, SLMs deliver practical AI benefits scaled to small business needs, making innovation more accessible and cost-effective. Here’s what you need to know:


SLMs: Fewer Parameters, Many Applications

SLMs are spinoffs of LLMs, which have gained massive attention since the introduction of ChatGPT in late 2022. ChatGPT draws on the power of LLMs, which rely on specially designed microchips called graphics processing units (GPUs) to mimic human communication. The models ingest immense volumes of text, audio and visual data and learn from hundreds of billions or even trillions of variables, called parameters, according to IBM.

SLMs, by contrast, use substantially fewer parameters — from a few million to a billion. They can’t do everything an LLM can do, but their small size pays off in specific scenarios.

Unlike the LLMs behind tools like ChatGPT, SLMs are lightweight and efficient — designed to reduce overhead and run seamlessly on edge devices such as smartphones and sensors.
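The difference in scale is easy to see with back-of-envelope arithmetic: a model's weights alone occupy roughly the parameter count times the bytes per parameter. A quick illustrative sketch (the parameter counts and 2-byte half-precision assumption below are examples, not figures from this article):

```python
def model_memory_gb(num_params, bytes_per_param=2):
    """Rough memory needed just to hold the weights (fp16 = 2 bytes each)."""
    return num_params * bytes_per_param / 1e9

# A 1-billion-parameter SLM in fp16 -- roughly 2 GB,
# feasible on a laptop or high-end phone:
slm = model_memory_gb(1e9)

# A hypothetical 175-billion-parameter LLM in fp16 -- roughly 350 GB,
# requiring a cluster of data center GPUs:
llm = model_memory_gb(175e9)
```

This ignores the extra memory inference actually needs (activations, caches), but it shows why a model with a few hundred million parameters can live on an edge device while a frontier LLM cannot.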

The potential applications are vast across a variety of sectors: finance, pharmaceuticals, manufacturing and insurance. “All of those operationally focused areas present places where these models can be specifically targeted to domains,” says Sidney Fernandes, CIO and vice president of digital experiences at the University of South Florida.


SLMs Are Cheaper, Less Risky and More Sustainable to Operate

Smaller models cost less to operate, a point the world noticed with the arrival of DeepSeek, a low-cost, openly released AI model from China. DeepSeek’s potential to reduce AI costs triggered a temporary sell-off in global financial markets, as investors feared it might challenge the dominance of NVIDIA, the global leader in GPU chips.

Lower AI costs are a welcome change for small and midsize businesses, according to Jenay Robert, an EDUCAUSE senior researcher and co-author of the 2025 EDUCAUSE AI Landscape Study. According to the study, only 2% of respondents said they had enough funding for AI.

“Institutions are likely trying to fund AI-related expenses by diverting or reallocating existing budgets, so SLMs can provide an opportunity to decrease those costs and minimize budgetary impacts,” she says.

SLMs can also help with the data governance issues that LLMs create. Organizations worry about data protection, privacy breaches, compliance demands and potential copyright or intellectual property infractions with LLMs, Robert says.

“Institutions are more likely to run small language models on-premises, reducing risks related to data protection,” Robert adds. The study notes that many leaders prefer on-premises AI implementations in “walled gardens” that answer data governance challenges.


Privacy and Training Benefits for Businesses

SLMs are also a game changer because they can connect more easily to edge devices such as smartphones, cameras, sensors and laptops, says USF’s Fernandes. Adding AI chips to devices helps with inference (the process by which a trained model generates responses to users’ requests).

Edge devices are also safer from a privacy perspective because the data can stay on the device rather than traveling to remote servers, keeping it out of reach of troublemakers. “If you install it locally, you can potentially have more sensitive data that is domain-specific,” Fernandes says.

Companies can deploy domain-specific SLMs targeted at individual departments, and the models can be trained for upskilling tasks. In operations, SLMs could be trained on systems to provide predictive maintenance, helping managers replace aging components or machinery to avert far more costly breakdowns.

For all their benefits, SLMs still require solid data governance to ensure high-quality results. Smaller models must be carefully fine-tuned and monitored to reduce the risk of hallucinations and biased or offensive outputs. “Understanding the benefits as well as the shortcomings of those models is going to be very, very critical,” Fernandes says.


Sophisticated SLMs Will Expand Their Capabilities

SLMs are typically built through model compression or model distillation, in which a larger “teacher” model trains a smaller “student” model to reproduce its behavior.
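The core of distillation can be sketched in a few lines: the student is trained to minimize the divergence between its output distribution and the teacher's softened outputs. A minimal NumPy sketch (the function names, temperature value and example logits below are illustrative assumptions, not drawn from this article):

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax: higher T softens the distribution."""
    z = logits / temperature
    z = z - z.max()  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence from the teacher's softened outputs to the student's.

    In distillation, the small "student" model learns to match the large
    "teacher" model's full output distribution, not just hard labels.
    """
    p = softmax(teacher_logits, temperature)  # teacher's soft targets
    q = softmax(student_logits, temperature)  # student's predictions
    return float(np.sum(p * (np.log(p) - np.log(q))))

# The loss is zero when the student exactly matches the teacher and
# positive otherwise, so minimizing it pulls the student toward the
# teacher's behavior.
teacher = np.array([3.0, 1.0, 0.2])
matched = distillation_loss(teacher, teacher)        # zero: perfect match
mismatched = distillation_loss(teacher, np.zeros(3)) # positive: mismatch
```

In practice this loss is computed over an entire training set and combined with a standard label loss, but the mechanism is the same: the big model's knowledge is compressed into the small one.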

Because they can be trained on specific domains, SLMs open more opportunities for autonomous agents that operate in the background, doing everyday tasks for employees. “As you have more model compression, you're going to have IoT devices with SLMs built into them that could act as agents themselves,” Fernandes says. Eventually, the agents could become smart enough that they might talk to each other, saving even more human labor.

As the models get smaller, they’ll retain considerable computing power. “The SLMs of tomorrow will probably do what the LLMs of today are doing,” Fernandes says. These much smaller, more efficient models can be installed directly on edge devices.

“They'll come with built-in capabilities, and there will be vendors taking advantage of those,” Fernandes adds. “Which means how you manage your edge devices is going to be even more critical than it was before.”

