SLMs: Fewer Parameters, Many Applications
SLMs are spinoffs of LLMs, which have gained massive attention since the introduction of ChatGPT in late 2022. Drawing on the power of LLMs, ChatGPT relies on specially designed chips called graphics processing units (GPUs) to mimic human communication. The models ingest immense volumes of text, audio and visual data and learn from hundreds of billions or even trillions of variables, called parameters, according to IBM.
SLMs, by contrast, use substantially fewer parameters — from a few million to a billion. They can’t do everything an LLM can do, but their small size pays off in specific scenarios.
Unlike the LLMs behind tools like ChatGPT, SLMs are lightweight and efficient — designed to reduce overhead and run seamlessly on edge devices such as smartphones and sensors.
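To give a concrete sense of what "lightweight" means in practice, the sketch below shows how a small, open-weight model might be loaded and run locally with the Hugging Face Transformers library. The model name and prompt are illustrative placeholders, not a tool or recommendation from the experts quoted in this story.

```python
# Illustrative sketch only: running a small, open-weight language model locally.
# The model name below ("distilgpt2", roughly 82 million parameters) is just an
# example at SLM scale; any similarly sized open model could be substituted.
from transformers import pipeline

# Loads the model onto local hardware (CPU is enough at this scale), which is
# what makes SLM inference feasible on edge or on-premises devices.
generator = pipeline("text-generation", model="distilgpt2")

result = generator(
    "Small language models are useful because",
    max_new_tokens=40,        # keep the generated continuation short
    num_return_sequences=1,
)
print(result[0]["generated_text"])
```

A model of this size occupies only a few hundred megabytes of memory, which is why comparable models can run on phones, sensors and other resource-constrained hardware rather than in a GPU-heavy data center.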
The potential applications are vast across a variety of sectors: finance, pharmaceuticals, manufacturing and insurance. “All of those operationally focused areas present places where these models can be specifically targeted to domains,” says Sidney Fernandes, CIO and vice president of digital experiences at the University of South Florida.
SLMs Are Cheaper, Less Risky and More Sustainable to Operate
Smaller models cost less to operate, a fact the world noticed with the arrival of DeepSeek, a low-cost, open-source AI model from China. The reduction in AI costs that DeepSeek promised triggered a temporary sell-off in global financial markets, as investors feared it might challenge the dominance of NVIDIA, the global leader in GPU chips.
Lower AI costs are a welcome change for small and midsize businesses, according to Jenay Robert, an EDUCAUSE senior researcher and co-author of the 2025 EDUCAUSE AI Landscape Study. According to the study, only 2% of respondents said they had enough funding for AI.
“Institutions are likely trying to fund AI-related expenses by diverting or reallocating existing budgets, so SLMs can provide an opportunity to decrease those costs and minimize budgetary impacts,” she says.
SLMs can also help with the data governance issues that LLMs create. Organizations worry about data protection, privacy breaches, compliance demands and potential copyright or intellectual property infractions with LLMs, Robert says.
“Institutions are more likely to run small language models on-premises, reducing risks related to data protection,” Robert adds. The study notes that many leaders prefer on-premises AI implementations in “walled gardens” that answer data governance challenges.