Every procurement professional has been in this position: you're heading into a supplier negotiation, and the best cost benchmark you have is last year's contract price plus whatever market intelligence your team could pull together in a few days. You know the supplier's quote is probably inflated, but you can't prove it. You lack an independent baseline that breaks the cost down into its components and tells you what the price should be.
That's the problem should-cost modeling was designed to solve. It's also a discipline most procurement teams still struggle to execute at scale.
Key takeaways:
- Should-cost modeling gives procurement teams an independent, data-backed estimate of what a product or service should cost based on its component inputs — materials, labor, overhead, logistics, and margin. It's the single most powerful tool for entering negotiations with leverage instead of guesswork.
- Traditional should-cost models are static spreadsheets built by analysts over weeks or months. They go stale the moment commodity prices shift, supplier economics change, or new tariffs take effect. Most organizations only build them for their top 10–20 categories, leaving the vast majority of spend unnegotiated.
- Suplari, rated 4.8/5 stars on Gartner Peer Insights, uses AI agents and a unified procurement data foundation to generate should-cost baselines continuously — detecting pricing anomalies across your entire spend portfolio, not just the categories you had time to model manually.
- Organizations that move from periodic, manual should-cost exercises to AI-driven continuous price intelligence don't just get better cost estimates — they get procurement teams that negotiate from a position of data-backed confidence across every category, every time.
What should-cost modeling actually is
At its core, a should-cost model is a bottom-up estimate of what a product or service should cost to produce and deliver. Rather than accepting a supplier's quoted price at face value, procurement teams decompose the cost into its fundamental drivers — raw materials, labor, manufacturing overhead, logistics, packaging, and supplier margin — and estimate each component independently.
The result is an informed baseline that tells you: "Based on current input costs, market conditions, and reasonable margin assumptions, this item should cost approximately X." When the supplier's quote comes in at 1.4X, you have specific, defensible evidence for where the gap lies and what a fair price looks like.
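The arithmetic behind that baseline is simple: sum the component estimates, apply a reasonable margin, and compare the quote against the result. A minimal sketch, with entirely hypothetical figures standing in for real component research:

```python
# Minimal bottom-up should-cost sketch. All dollar figures and the margin
# assumption are hypothetical illustrations, not real benchmarks.

def should_cost(materials, labor, overhead, logistics, margin_pct):
    """Sum the component cost estimates, then apply a supplier margin."""
    base = materials + labor + overhead + logistics
    return base * (1 + margin_pct)

estimate = should_cost(materials=4.20, labor=1.10, overhead=0.90,
                       logistics=0.55, margin_pct=0.12)
quote = 9.45  # supplier's quoted unit price (hypothetical)
gap = quote / estimate  # the quote expressed as a multiple of the baseline

# With these inputs the baseline lands near $7.56, putting the quote
# at roughly 1.25x what the components suggest it should cost.
print(f"should-cost ~ ${estimate:.2f}; quote is {gap:.2f}x the baseline")
```

The point of the exercise isn't the formula; it's that each input is estimated independently, so a dispute with the supplier becomes a dispute about specific components rather than the total.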
This isn't a new concept. Should-cost analysis has been a staple of strategic sourcing for decades, particularly in manufacturing-heavy industries like automotive and aerospace where bills of materials are well-understood. McKinsey has called it "the power of the parameter" — the idea that once you understand the true cost drivers, you can negotiate with precision instead of intuition.
But there's a gap between the theory and the reality of how most organizations actually practice should-cost modeling.
Why most should-cost models fail in practice
The traditional approach to should-cost modeling is labor-intensive, time-consuming, and fundamentally unscalable.
A typical should-cost exercise starts with a category manager or sourcing analyst manually researching commodity prices, labor rates, and manufacturing benchmarks. They build a spreadsheet — usually an elaborate one, with tabs for each cost component, sensitivity analyses, and scenario modeling. The process takes anywhere from two weeks to two months per category, depending on complexity.
This creates three structural problems.
First, coverage is extremely limited. Most procurement organizations can only afford to build should-cost models for their highest-spend categories — typically the top 10–20 out of hundreds. The long tail of spend categories goes unmodeled, which means procurement teams enter the majority of their negotiations without independent cost baselines. This is exactly the kind of gap where savings hide in plain sight.
Second, models go stale almost immediately. A should-cost model built in January reflects January's commodity prices, shipping rates, and labor costs. By March, the inputs have shifted. By June, the model is a historical artifact. In volatile categories — energy, chemicals, semiconductors — models can become unreliable within weeks. And with tariff uncertainty becoming a recurring feature of global procurement, static models are even more vulnerable.
Third, the expertise required creates bottlenecks. Building good should-cost models requires deep category knowledge, access to market intelligence databases, and analytical skills. In most organizations, this expertise is concentrated in a handful of senior analysts or expensive consultants. When they're occupied with one category, every other category waits.
The net result is that should-cost modeling — despite being one of procurement's most valuable tools — is practiced sporadically, by a few people, on a small fraction of spend.
The shift from periodic modeling to continuous price intelligence
This is where the landscape is changing. The convergence of AI, unified procurement data, and real-time market intelligence is making it possible to move from periodic, manual should-cost exercises to continuous, automated cost baselines.
The concept is straightforward: if you have clean, unified data on what you've paid historically, what your contracts specify, what market indices indicate, and how prices vary across business units and geographies, AI can continuously generate and update cost baselines without human analysts manually researching each category.
Suplari's price intelligence capability takes this approach. Rather than requiring teams to build should-cost models from scratch, Suplari's AI agents analyze your actual transactional data across all spend categories, detect pricing anomalies — where you're paying significantly more than your internal benchmarks, historical norms, or cross-business-unit comparisons suggest — and surface those opportunities for negotiation.
The distinction matters: traditional should-cost modeling asks "what should this item cost based on external research?" AI-driven price intelligence asks "where are we paying more than we should based on everything we already know about our own spend?" Both approaches produce negotiation leverage, but the AI-driven version covers your entire spend portfolio, updates continuously, and doesn't require category-by-category manual analysis.
How AI agents change the economics of cost analysis
The reason should-cost modeling has historically been limited to top-spend categories is economics: the cost of the analysis had to be justified by the potential savings. If it takes an analyst two weeks to build a model for a $500K category, the ROI may not be there. If an AI agent can generate a cost baseline in minutes, the calculus changes entirely.
This is one of the practical implications of what Suplari CEO Jeff Gerber describes as the AI agent opportunity in procurement: tasks that previously required expensive human expertise — or expensive consulting engagements — can now be automated, scaled, and continuously updated.
Consider what a comprehensive should-cost capability looks like when powered by AI agents:
Continuous anomaly detection across all categories. Instead of manually selecting which categories to model, AI agents monitor pricing patterns across your entire spend portfolio. When a price deviates significantly from historical norms, internal benchmarks, or market indicators, the system flags it — whether it's a $50M strategic category or a $200K tail spend item that nobody was watching.
Automated cost driver decomposition. For flagged items, AI can decompose costs using available market data, commodity indices, and your own historical pricing to estimate what each component should cost. This isn't as granular as a hand-built engineering should-cost model for a manufactured part, but it's accurate enough to identify significant overpricing — and it covers categories that would never justify a manual model.
Real-time updates as market conditions change. When commodity prices shift, tariffs change, or exchange rates move, AI-driven models adjust automatically. The baseline you negotiated against last month reflects this month's reality, not a snapshot from six months ago.
Cross-business-unit benchmarking. One of the most powerful — and most underutilized — sources of should-cost intelligence is your own organization. If your Chicago office pays $8 per unit for a consumable and your Dallas office pays $12, that's a pricing anomaly that requires no external market research to identify. Unified spend data makes this kind of internal benchmarking automatic.
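The internal benchmarking case is the easiest to make concrete. A minimal sketch of the idea, assuming unified spend records as simple (item, business unit, unit price) tuples — the data and the 1.25x flagging threshold are hypothetical:

```python
from collections import defaultdict

# Hypothetical unified spend records: (item, business_unit, unit_price).
records = [
    ("consumable-A", "Chicago", 8.00),
    ("consumable-A", "Dallas", 12.00),
    ("widget-B", "Chicago", 3.10),
    ("widget-B", "Dallas", 3.05),
]

def internal_benchmarks(records, threshold=1.25):
    """Flag (item, unit, price, multiple) where a business unit pays more
    than `threshold` times the best internal price for the same item --
    no external market research required."""
    best = defaultdict(lambda: float("inf"))
    for item, _, price in records:
        best[item] = min(best[item], price)
    return [
        (item, unit, price, round(price / best[item], 2))
        for item, unit, price in records
        if price > threshold * best[item]
    ]

flags = internal_benchmarks(records)
print(flags)  # Dallas pays 1.5x Chicago's price for consumable-A
```

Scaled across a full spend portfolio, this same comparison is what an automated system runs continuously; the hard part in practice is the data unification (normalizing items and units across ERPs), not the arithmetic.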
When you still need traditional should-cost models
To be clear: AI-driven price intelligence doesn't replace deep-dive should-cost models for every situation. For highly complex manufactured goods with detailed bills of materials — aerospace components, custom machinery, specialized pharmaceuticals — you still need engineers and category experts building detailed bottom-up models.
The value of AI is in extending should-cost thinking to the 80–90% of categories where detailed engineering models were never feasible but where pricing anomalies and negotiation opportunities still exist.
Think of it as a pyramid:
Top 5–10 categories (highest spend, most complexity): Full engineering should-cost models, built by specialists, updated quarterly. These justify the investment in deep, component-level analysis.
Next 50–100 categories (significant spend, moderate complexity): AI-generated cost baselines using historical spend data, internal benchmarking, and market indices. Updated continuously. Surface the biggest gaps for human follow-up.
Long tail (hundreds of smaller categories): Automated anomaly detection flags items where pricing is significantly out of line. No formal model required — the data speaks for itself when you have visibility across the full spend portfolio.
This pyramid approach means procurement teams spend their expert time where it has the highest impact, while AI ensures no category goes completely unexamined.
Building should-cost capability: a practical starting point
If your organization doesn't have a systematic should-cost practice — or has one that's limited to a handful of top categories — the path forward doesn't require a massive investment in new tools or headcount. It starts with data.
Start with internal price variance. Before looking at external benchmarks, understand how much price variation exists within your own organization. This requires unified spend data across business units, geographies, and time periods. The pricing gaps you'll find internally are often larger than any external should-cost model would reveal — and they're immediately actionable because you're negotiating with suppliers who already serve you.
Focus on categories with high price variance, not just high spend. Traditional prioritization ranks categories by total spend. A more effective approach prioritizes by price variance — where the spread between your best and worst unit prices is widest. A $1M category with 40% internal price variance represents more savings potential than a $5M category where pricing is already consistent. Procurement KPIs should reflect this.
Layer in market intelligence gradually. External commodity indices, PPI data, and industry benchmarks add rigor to your cost baselines. But they're an enhancement, not a prerequisite. Many organizations delay should-cost initiatives because they don't have access to comprehensive market data. Start with what you have — your own transactional history — and enrich over time.
Make it continuous, not periodic. The biggest mindset shift is moving from "we build should-cost models before major negotiations" to "we continuously monitor whether what we're paying aligns with what we should be paying." This is the difference between spend analytics as a reporting function and procurement intelligence as an operating model. Suplari's approach to continuous monitoring means pricing anomalies are surfaced in real time — not discovered during annual category reviews.
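The first two steps above — measuring internal variance and prioritizing by it rather than by raw spend — can be sketched with nothing but your own transaction history. A hedged illustration, with hypothetical category data and a deliberately rough savings proxy (spend times price spread):

```python
# Hypothetical category data: name -> (total spend, unit prices paid).
categories = {
    "office-supplies": (1_000_000, [5.00, 5.60, 7.00]),  # wide spread
    "packaging":       (5_000_000, [2.00, 2.02, 2.05]),  # tight spread
}

def rank_by_price_variance(categories):
    """Rank categories by internal price spread weighted by spend --
    a rough proxy for addressable savings, not a precise estimate."""
    scored = []
    for name, (spend, prices) in categories.items():
        spread = (max(prices) - min(prices)) / min(prices)
        scored.append((name, spend, round(spread, 3)))
    return sorted(scored, key=lambda t: t[1] * t[2], reverse=True)

for name, spend, spread in rank_by_price_variance(categories):
    print(f"{name}: ${spend:,} spend, {spread:.1%} internal price spread")
```

With these inputs the smaller category ranks first: $1M at a 40% spread outscores $5M at a 2.5% spread, which is exactly the reordering the variance-first approach is meant to produce.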
The negotiation advantage
Ultimately, should-cost modeling exists to serve one purpose: giving procurement teams better leverage in negotiations. When you can walk into a supplier meeting with a data-backed view of what the price should be — decomposed into its components, benchmarked against internal and external data, and updated to reflect current market conditions — the conversation changes fundamentally.
You're no longer arguing about percentages off last year's price. You're having a fact-based discussion about cost drivers. That's a negotiation that sophisticated category managers win.
The organizations that are pulling ahead aren't the ones with the best spreadsheet templates. They're the ones that have figured out how to apply AI-driven intelligence to the 90% of spend that traditional should-cost approaches couldn't reach.
Suplari is an AI-native procurement intelligence platform that helps enterprise procurement teams surface pricing anomalies, generate cost baselines, and negotiate from data-backed confidence across their entire spend portfolio. Book a demo to see how continuous price intelligence replaces static should-cost spreadsheets.
