Retailers aren’t just experimenting with AI chatbots anymore — they’re giving algorithms real agency. About 43% of retailers are already piloting autonomous, or agentic, AI, and another 53% are evaluating use cases, according to Salesforce’s Connected Shoppers Report. Automation is moving from behind-the-scenes analytics to frontline decision-making.
Yet the industry is learning a hard truth: Broad, general‑purpose AI models don’t understand retail well enough to reliably support decisions.
Large language models (LLMs) are built to be generalists. They ingest broad swaths of public text and produce equally broad responses. That flexibility is impressive, but retail and CPG environments rely heavily on numerical models, optimization engines, and tightly structured product and pricing data. When an LLM tries to operate in that world, it’s speaking the wrong language.
Small language models (SLMs) step into that gap with a very different goal: precision. Instead of trying to know a little about everything, SLMs are designed to know a lot about one specific thing. They can be trained on a retailer’s catalog data, product hierarchies, store policies, packaging details and loyalty insights. Because they learn from verified internal sources rather than public text, they deliver answers grounded in operational truth and are less susceptible to hallucinations.
How Domain-Specific Models Work in Practice
Domain-specific SLMs thrive in areas where text, rules and product knowledge collide — the parts of retail that depend on accuracy, consistency and context rather than creativity.
In marketing, SLMs can generate precise catalog descriptions, create campaign variants and support B2B sales teams with product‑accurate storytelling — all grounded in verified item data.
Ecommerce teams can use SLMs to extract attributes from packaging images or PDFs, enrich product pages, strengthen recommendations with SKU‑level understanding and trigger subtle shopper “nudges” based on real product relationships.
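The core of that attribute-extraction task is mapping free-form packaging copy onto a fixed SKU schema. A minimal sketch of that input/output contract, with a rule-based stand-in where a fine-tuned SLM would actually sit (schema fields and sample text are hypothetical):

```python
import re

# Fixed attribute schema the model is trained to fill (illustrative fields).
SCHEMA = ("brand", "size", "unit", "flavor")

def extract_attributes(packaging_text: str) -> dict:
    """Map unstructured packaging text to a structured SKU attribute record.

    A retailer-trained SLM would perform this mapping; simple rules stand
    in here to show the shape of the task, not the model itself.
    """
    attrs = {key: None for key in SCHEMA}

    # Pull a size/unit pair such as "16 oz" or "500 ml".
    m = re.search(r"(\d+(?:\.\d+)?)\s*(fl oz|oz|g|ml)\b", packaging_text, re.I)
    if m:
        attrs["size"] = float(m.group(1))
        attrs["unit"] = m.group(2).lower()

    # Treat the leading capitalized word as the brand (a crude heuristic).
    m = re.search(r"^([A-Z][A-Za-z']+)", packaging_text)
    if m:
        attrs["brand"] = m.group(1)

    return attrs

print(extract_attributes("Acme Honey Roasted Peanuts 16 oz resealable jar"))
```

The value of the SLM over rules like these is coverage: it generalizes across a retailer's full taxonomy instead of the handful of patterns a regex can encode.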
SLMs also have a role in customer support, because they can interpret warranty manuals, automate returns workflows and summarize supplier or customer feedback in consistent operational language.
Because these models are trained on a retailer’s internal data, they respond with accuracy and consistency. That reliability becomes even more valuable when SLMs run on the edge, inside stores or on handhelds, where instant, trusted answers can streamline associate tasks and reduce friction for shoppers.
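One common pattern for that edge setup is edge-first answering: the handheld consults the on-device SLM and only escalates to a remote service when the local answer is low-confidence. A sketch under assumed names (the threshold and the stub models are hypothetical):

```python
# Edge-first routing on a store handheld (hypothetical design).
LOCAL_CONFIDENCE_THRESHOLD = 0.8  # assumed tuning knob

def answer_on_edge(query, local_model, cloud_model):
    """Answer from the on-device model when confident; otherwise escalate."""
    answer, confidence = local_model(query)
    if confidence >= LOCAL_CONFIDENCE_THRESHOLD:
        return answer, "edge"
    return cloud_model(query), "cloud"

# Stubs standing in for a quantized on-device SLM and a remote service.
local_slm = lambda q: ("Aisle 7", 0.95) if "peanut" in q else ("unknown", 0.1)
cloud_llm = lambda q: ("Escalated to remote service", None)[0]

print(answer_on_edge("where are the peanut butter jars?", local_slm, cloud_llm))
```

Keeping the confident path entirely on-device is what makes the answer instant even when store connectivity is poor.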
Building Your Own Retail-Specific Model
The old build-or-buy debate has a clear answer when it comes to SLMs: train on your own data rather than fine-tuning a model on someone else's. A model that reflects your business exactly as it operates pays off because you can trust its outputs, and trusted outputs lead to better decisions and efficiencies that translate into ROI.
Here is a step-by-step approach to building a model that will work for you.
1. Audit your proprietary data. Catalogs, attributes, loyalty data, supplier files, planograms, support logs — these are the raw materials an SLM needs.
2. Identify a function-specific use case. Start where accuracy creates immediate value, such as content creation, attribute extraction, customer‑support automation or recommendation quality.
3. Train and validate with your data. Off‑the‑shelf retail datasets won’t produce reliable results. The model must learn your taxonomy, your product language and your rules.
4. Run controlled experiments. Test with clear baseline metrics: accuracy improvements, time saved, error reduction or uplift in customer experience.
5. Scale with intention. Expand only after the model proves itself in narrow, high‑impact scenarios. Retailers that scale prematurely often spend more time correcting errors than capturing value.
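Step 4 hinges on having a baseline to beat. A minimal sketch of that comparison, scoring a candidate model against the current approach on a labeled holdout set with exact-match accuracy (the labels and predictions are illustrative):

```python
def exact_match_accuracy(predictions, labels):
    """Share of items whose predicted attributes exactly match verified labels."""
    assert len(predictions) == len(labels)
    hits = sum(pred == label for pred, label in zip(predictions, labels))
    return hits / len(labels)

# Holdout drawn from verified catalog data (hypothetical records).
labels    = [{"size": "16 oz"}, {"size": "12 oz"}, {"size": "1 l"}]
baseline  = [{"size": "16 oz"}, {"size": None},    {"size": "1 l"}]  # current process
candidate = [{"size": "16 oz"}, {"size": "12 oz"}, {"size": "1 l"}]  # new SLM

print(exact_match_accuracy(baseline, labels))   # current hit rate
print(exact_match_accuracy(candidate, labels))  # candidate hit rate
```

The same harness extends to the other baseline metrics the step lists, such as time saved per item or error rates, as long as each is measured on the same holdout before and after.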
The retailers that will see real progress won’t bet on the biggest AI model. They’ll bet on the most relevant one — a model trained on their own data, their own rules and their own operational realities. We’ve been trained to think that bigger means better, but smaller models aren’t a limitation. They’re a competitive advantage when they’re trained to think exactly the way you need.
Arun Kumar is Global Head of Product and Industry Practice at Altimetrik, working across the retail, supply chain, manufacturing and technology sectors. He is responsible for the vision, design, and delivery of product and platform offerings.