September 24, 2025

Value-Based Pricing Strategy: What It Actually Takes in B2B SaaS

TL;DR: Most value-based pricing advice is borrowed from B2C markets and fails when applied to B2B SaaS pricing strategy — where value is complex, evolving, and usage-dependent. Surveys can’t capture it. Competitor benchmarks can’t approximate it. The real work is analyzing how customers actually derive value from usage, segmenting by those patterns, and empirically testing willingness to pay through controlled price changes anchored in measurable outcomes, not perceptions.

We wrote a few years ago that 90% of value-based pricing in B2B SaaS is a hoax. That’s still true. The methods most companies use — borrowed from B2C markets where you can survey consumers about perceived value — don’t work when the product is complex, usage-dependent, subscription-based, and the buyer can’t tell you what they’d pay for innovations they haven’t experienced yet.

The term itself has been diluted beyond recognition. Every pricing consultancy claims to do value-based pricing. Most of them mean “we’ll run a conjoint analysis and a Van Westendorp survey, produce a slide deck, and tell you what we think customers will pay.” That’s not value-based pricing. That’s willingness-to-pay theater — and in B2B software, it produces answers that don’t survive contact with real deals.

Here’s what value-based pricing actually requires in B2B SaaS.

Why Most Value-Based Pricing Fails in B2B SaaS

The standard definition is simple: price based on the value your product delivers, not your costs. Every pricing textbook says this. The problem isn’t the concept — it’s how companies try to execute it. A B2B SaaS pricing strategy that survives contact with real customers requires very different inputs than the B2C-derived methods most consultancies still use.

Pricing consultancies routinely sell conjoint analysis and Van Westendorp’s Price Sensitivity Meter as “powerful tools for measuring price elasticity and determining optimal price points.” They produce precise-looking numbers built on error rates you wouldn’t tolerate anywhere else in your business. Software teams obsess over defect rates, uptime SLAs, and regression thresholds. Then those same teams accept WTP survey outputs with documented 2x systematic bias as “data” worth setting prices on. The numerical precision is real. The error baked into the inputs is the problem.

In B2C, survey-based methods — conjoint analysis, Van Westendorp, Gabor-Granger — are less unreliable because consumers can compare products, reference prices are visible, and purchase decisions are individual. They still produce biased results — peer-reviewed field experiments on Van Westendorp specifically found that the method systematically overestimates willingness to pay compared to incentive-aligned methods, and a separate study on hypothetical bias found stated WTP deviating by +31% on low-priced products. The bias is at least somewhat predictable in B2C. In B2B software, the same methods produce numbers that are not just biased but disconnected from the real decision dynamics entirely.

B2B software breaks even those fragile assumptions. Value is tied to complex usage patterns and operational impact, not feature comparisons. Purchase decisions involve multiple stakeholders across months-long cycles. And the most important problem: differential value is often concentrated in innovations the buyer hasn’t experienced yet. A customer can’t tell you in a survey what they’d pay for an operational improvement they can’t yet imagine. The survey produces a number. The number is wrong. And you build your entire pricing architecture on it.

But there’s a deeper problem that no survey accounts for: salespeople’s willingness to discount. It’s a false assumption that the customer primarily drives willingness to pay. Salespeople have significant sway over how much money customers are willing to put forward — and software executives routinely underestimate that sway. Salespeople function as entrepreneurs within software companies. They have distinct methodologies, broad discretion on pricing, and comp structures that incentivize volume over profitability. That combination manifests as discretionary discounting that no WTP survey captures.

Research on enterprise software deals found that 74% closed on the last day of the quarter, with late-quarter discounts averaging 35-37% versus 30% mid-quarter. That’s not negotiation — it’s salespeople pulling deals forward with deeper discounts to hit comp thresholds. Across a portfolio of deals, the excess discounting consumed 6.6% of total vendor revenue. For a $100M software company, that’s $6.6M in revenue given away not because buyers demanded it, but because the comp structure incentivized it.

A prospect might be willing to pay $30,000 a year for workflow automation software — until they learn from an existing customer that the salesperson tends to give out 20% discounts. Suddenly, the prospect won’t pay $30,000. And that discount doesn’t disappear at renewal. It becomes the new baseline, and the customer argues for another discount on top.
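The renewal dynamic is worth making concrete. Here is a toy calculation, with hypothetical numbers, showing how a one-time discretionary discount becomes the new baseline and compounds:

```python
# Toy illustration (hypothetical numbers): how a one-time discretionary
# discount becomes the renewal anchor and compounds over the contract life.
list_price = 30_000.0

# Year 1: salesperson closes with a 20% discretionary discount.
year1 = list_price * (1 - 0.20)            # 24,000

# Renewal: the discounted price is now the baseline, and the customer
# negotiates a further 10% off that baseline.
year2 = year1 * (1 - 0.10)                 # 21,600

# Effective discount off list by year 2:
effective_discount = 1 - year2 / list_price
print(f"Year 2 price: ${year2:,.0f}")
print(f"Effective discount off list: {effective_discount:.0%}")   # 28%
```

Two stacked "modest" discounts already erase 28% of list price, and the erosion never reverses on its own — each renewal negotiates from the last net price, not from list.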

This is why value-based pricing is an emergent phenomenon, not a technique you apply. You cannot get paid fairly for the value you deliver if your sales team is horse-trading in the marketplace. When two customers buy the same products at the same volume and pay wildly different prices, the pricing model isn’t value-based no matter what the strategy deck says. It’s whatever the salesperson negotiated that day. Value-based pricing only emerges when the licensing metric scales with value, the sales team has the fluency to defend the model, and pricing integrity means the same deal gets the same price — a principle we call market fairness. Remove any one of those conditions and value-based pricing collapses back into discretionary discounting with a better name.

This is why list pricing and discretionary discounting untethered to real product value lead to a proliferation of one-off deals. Customers buying the same products at the same volume end up paying wildly different prices. Sellers routinely get less than their software’s true value. These outcomes are the opposite of what a value-based pricing strategy should produce — and the opposite of pricing fairness, the foundation of sustainable software revenue.

What Value-Based Pricing Actually Requires

Look Beyond Perception to How Customers Derive Value from Usage

We can’t rely on what customers say to estimate software value. We need to understand what they do — purchases, usage patterns, operational outcomes. But raw data doesn’t interpret itself. Billing platforms and usage analytics tools can show you what customers did. They can’t tell you what it means for your pricing architecture. The insight isn’t in the data — it’s in knowing which patterns matter, which are noise, and what to do about the ones that signal a structural problem in how you charge.

That interpretation requires combining quantitative data with qualitative context — conversations with customers, product team knowledge, competitive dynamics — in ways that surface insights no dashboard produces on its own. When we analyzed data for a B2B software platform, we found enterprise customers consuming remarkably high quantities of a specific data type that the product team hadn’t considered a primary value driver. These customers were combining historical data with predictive models to anticipate operational disruptions — dispatching resources before problems materialized.

This was a fundamentally different need than other customer groups who only cared about the most recent flyover images. But the archived imagery was bundled into the base product — given to everyone, valued by a few. The opportunity wasn’t a price increase. It was a packaging change: segment the archive capability out, pair it with supporting functionality as an add-on for the customer group that derived real operational value from it, and simplify the base product for everyone else. The result was a model that captured more value through smarter packaging while actually becoming simpler for sellers and buyers.

This is the kind of creative application of the SPP pricing trifecta framework (licensing, packaging, and pricing) that expertise makes possible. A billing platform can tell you which customers consumed the most data. It can’t tell you that the consumption pattern reveals a distinct customer group with a different value proposition — and that the right move is a packaging change, not a price change.

The most successful value-based pricing strategy in B2B software goes beyond customer-centric. It melds customer inputs with the provider’s perspective on value. An HR software company innovated on the standard annual performance review by enabling any review frequency — semi-annual, quarterly, continuous. Most customers hadn’t considered how more frequent reviews could affect their operations. But the provider saw business value in improved employee satisfaction, retention, recruitment, and cross-functional collaboration — and priced accordingly. Had they followed B2C value pricing methods — comparing competitor features and adding a 20% differential — they would have captured a fraction of the true value.

Segment by Value Derivation, Not Firmographics

Many companies don’t fully benefit from what they learn about their customers. Usage and value insights typically drive product roadmap decisions. Less frequently, they trigger packaging and pricing innovation. Even more rarely, they reshape customer mix and sales strategy in ways that produce substantial performance gains.

One of our clients had used traditional marketing techniques to define their “ideal customers” as organizations below a certain employee count. Anything larger, they assumed, would be too operationally complex for their software. But when we studied actual usage, we found larger organizations with decentralized operations loved the software — using and deriving value from it the same way smaller companies did. We redefined complexity as a function of operational characteristics rather than headcount. That changed their ideal customer profile, segmentation, and lead scoring — creating momentum that enabled them to beat sales forecasts month over month for the next 18 months. Today that company is poised to go public.

Another engagement made the point even more vividly. The client served professional services firms, and the conversation around customer groups was dominated by the largest enterprise accounts — the loud, visible segment everyone assumed was the core market.

Halfway through a session walking through capabilities and value drivers, someone on the client team made an offhand comment wondering how this would work for a completely different buyer profile. Nobody in the room had heard of that segment. When we dug in, an entirely distinct customer group emerged — one with different workflows, different value drivers, and a packaging opportunity the client hadn’t considered. The knowledge had existed inside the company all along, buried in one person’s head, but had never surfaced because nobody had asked the right questions in the right setting.

This is how customer groups actually get discovered. Customer groups are clusters of buyers who derive value in similar ways — and they rarely map cleanly to SIC codes, industry categories, or firmographic cuts. Most software companies get into trouble because they haven’t properly texturized their customer mix. What they call “segments” are usually just convenient demographic buckets that miss the unique pockets of value derivation hiding inside their customer base. Surface that texture and packaging opportunities appear. Miss it and you end up with t-shirt sizing that fits the average customer in each tier and fits no specific customer well.

Some consultancies arrive with the answer before they’ve seen the data — a Good-Better-Best structure, a fixed segmentation matrix, a branded framework with a memorable acronym. Three tiers might be exactly right for your customer mix. It might be exactly wrong. Two might fit better. Four might. The number of tiers and the boundaries between them should emerge from the texture of how distinct customer groups actually derive value, not from a consulting template applied universally. When the framework arrives before the analysis, you’re getting packaging by ideology, not by evidence.

Build Packaging Around Customer Groups, Not Segments

A quick clarification before going further: tiered packaging and tiered volume pricing are not the same thing. Volume-based unit pricing — where unit prices smoothly decrease as quantity climbs — is a defensible mechanism that rewards larger commitments. Tiered packaging is the standard consultancy advice that “tiered pricing caters to different customer segments” — Basic, Pro, Enterprise bundles built around assumed segment differences. The first is sound. The second is the failure mode this section addresses.

There’s a deeper consequence most companies don’t see until it’s too late. When packaging is built on a segmentation philosophy instead of a customer groups philosophy, every feature gets metered as a tier lever — 15 dashboards in Basic, 25 in Pro, 50 in Enterprise. Customers who need 22 dashboards and 45 user licenses don’t fit cleanly into any package. They’re trapped between tiers.

What happens next is exactly the horse-trading that breaks value-based pricing: salespeople, trying to close deals, give Enterprise-tier capabilities at Basic-tier prices to get customers comfortable. The discount is invisible on the invoice — it’s hidden in the package mix — but the damage is the same. Two customers buying the same capabilities pay wildly different prices depending on which salesperson they worked with and how hard they pushed. The packaging architecture itself is what makes market fairness impossible, regardless of how disciplined the list prices are.
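A toy calculation, with hypothetical tier prices, shows why this kind of discount never appears on an invoice:

```python
# Toy illustration (hypothetical tier prices): a discount hidden in package mix.
# A customer receives Enterprise-tier capabilities but is invoiced at the
# Basic-tier price, so the invoice shows no discount at all.
basic_list = 15_000.0        # the price that appears on the invoice
enterprise_list = 50_000.0   # list price of the capabilities actually delivered

invoice_discount = 0.0       # what finance sees: zero discount
effective_discount = 1 - basic_list / enterprise_list

print(f"Discount visible on invoice: {invoice_discount:.0%}")       # 0%
print(f"Effective discount in package mix: {effective_discount:.0%}")  # 70%
```

A deal desk watching discount percentages alone would flag nothing here, which is exactly why package-mix leakage survives pricing discipline that only polices list-price discounts.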

This is why value-based pricing requires packaging built around customer groups rather than segments. When packaging reflects how distinct groups actually derive value, customers can find themselves in an offering that fits. They don’t need to negotiate their way to the right configuration — the right configuration already exists. The salesperson’s job becomes articulating value, not brokering cross-tier compromises. That’s the structural precondition for the transparency and consistency that value-based pricing requires.

The research on perceived price fairness reinforces this. Studies on subscription pricing have found that partitioned pricing — breaking a package into many metered components — reduces perceived value because it lowers perceived fairness, even when the total price is identical to an all-inclusive offering. Buyers don’t experience the granularity as choice; they experience it as friction and as evidence the seller is gaming them.

Value-Based Pricing Is a Philosophy, Not a Price Level

Here’s the most important thing to understand about value-based pricing, and the one most companies get wrong: value-based pricing is not charging the most to every customer. It’s not extracting maximum willingness to pay from each individual buyer based on their circumstances. That’s a different strategy with a different name — and a much shorter shelf life.

Sometimes a focus on what buyers are willing to pay devolves into what this buyer is willing to pay right now: situational pricing that confuses a customer’s circumstances with the product’s value. A salesperson picks up on a compelling event — a compliance deadline, a competitive threat, a board mandate — and prices into the urgency rather than the software’s contribution. Software leaders can fall into the trap of confusing this situational value with value-based pricing.

A customer rescued from a crisis may genuinely feel the price was worth it. That’s fine in the moment. The damage doesn’t come from that customer’s perception in isolation; it comes from comparison.

Pricing is not a secret. Prospects trade notes. Salespeople change companies and bring price lists with them. CIOs compare notes at conferences. The moment that customer learns someone else paid half the price for the same product — during a merger, at a conference, in a casual conversation — the relationship is damaged in ways no renewal discount repairs. The feeling isn’t just overpayment. It’s betrayal.

The research on price fairness is clear: customers treated inconsistently across a vendor base don’t just churn, they actively warn others. The tangled web of stories from situational pricing always surfaces.

Real value-based pricing works the opposite way. It can only emerge when there is transparency, consistency, and fair treatment of customers across the marketplace — when two customers buying the same products at the same volume pay the same price, and word of that consistency spreads. Buyers talk to each other. When they discover the pricing is fair and consistent, they stop negotiating against the company and start engaging with the value proposition. When they discover it isn’t, they negotiate harder, share what they paid, and erode the pricing for everyone. Value-based pricing emerges from market trust, not from extracting it customer by customer. You can’t command it. You build the conditions for it.

A defensible B2B SaaS pricing strategy is ultimately a statement of philosophy. Value-based pricing is no exception. If your operating philosophy is “charge the most each customer will bear given their situation,” customers and prospects will eventually recognize it — regardless of what you call it in your marketing. Customers are smart. Software companies sometimes forget that. Dressing situational pricing up as value-based pricing fools internal teams long before it fools the market.

The inverse philosophy — treating all customers uniformly and fairly — is significantly harder to execute, and that’s exactly why it’s differentiating. To pull it off while running the trifecta cleanly, you need a much better understanding of your customer mix, how customers actually use your software, how costs flow through the business, and how to balance the revenue portfolio across customer groups. Most companies default to extraction pricing because it’s easier. Treating customers fairly at scale requires operational maturity most competitors don’t have.

If you want to differentiate in the marketplace, get loud about your pricing philosophy — and then make absolutely sure the execution side of the operation, where list prices, scheduled net prices, discount boundaries, and every quote are calculated, aligns with and supports that philosophy. The philosophy is meaningless if the sales team’s compensation structure incentivizes the opposite behavior. The philosophy is meaningless if the packaging forces customers to negotiate across tiers. The philosophy becomes real when every mechanism in the pricing architecture — licensing, packaging, pricing, and the systems that operationalize them — pulls in the same direction.

Test Willingness to Pay Empirically, Not Through Surveys

Within that philosophy, you still need to find the right price points. The answer isn’t a survey. We recommend performing a series of controlled incremental price changes to push the boundaries of willingness to pay for customer groups with similar usage and value characteristics. This is empirical demand elasticity analysis, rooted in how customers behave — not what they say in a survey.

Recent academic work supports this approach. Bayesian hierarchical research on subscription products has demonstrated that willingness to pay can be identified directly from usage and subscription data, without needing the kind of price experimentation that disrupts customer relationships. The “Beyond Big Data” research is even more direct — observational data alone can’t establish causality for pricing decisions, but continuous controlled experiments can. The implication for B2B software is straightforward: build the system to test pricing changes against real customer cohorts, not to extract WTP from people who don’t yet know what they’d pay.
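As a sketch of what the empirical side can look like, assume two matched customer cohorts (hypothetical sizes and price points) are offered renewal at different prices, and elasticity is estimated from the observed renewal behavior rather than from stated preferences:

```python
def arc_elasticity(p0: float, q0: float, p1: float, q1: float) -> float:
    """Midpoint (arc) elasticity: % change in quantity over % change in price."""
    dq = (q1 - q0) / ((q1 + q0) / 2)
    dp = (p1 - p0) / ((p1 + p0) / 2)
    return dq / dp

# Hypothetical controlled test: two matched cohorts of 100 accounts each,
# offered renewal at $100/unit vs. $110/unit.
p0, renewals0 = 100.0, 95   # control cohort: 95 of 100 renew
p1, renewals1 = 110.0, 92   # test cohort: 92 of 100 renew

e = arc_elasticity(p0, renewals0, p1, renewals1)
print(f"Arc elasticity: {e:.2f}")                 # about -0.34 (inelastic)

# Revenue check: demand here is inelastic, so the higher price nets more
# revenue even after accounting for the churn it induces.
print(f"Revenue at $100: {p0 * renewals0:,.0f}")  # 9,500
print(f"Revenue at $110: {p1 * renewals1:,.0f}")  # 10,120
```

The point of the sketch is the shape of the method, not the numbers: cohorts must share usage and value characteristics for the comparison to mean anything, and the price step stays small enough that a wrong guess is recoverable at the next increment.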

This method also harmonizes pricing with the rate of new value creation from your product roadmap. In B2B software, especially subscription-based, customer value perception contains a futures element — customers expect a stream of increasing value. Think of it as taking pricing validation steps in a journey that keeps you on the safe side of being paid fairly for your software’s value, without triggering the silent churn that comes from unvalidated price increases.

Why Value-Based Pricing Is Not a Pricing Model

This is where most articles on the topic go wrong. They present value-based pricing as one model among many — alongside cost-plus, competitive, and penetration pricing — as if you’re choosing from a menu.

Value-based pricing isn’t a model. It’s a principle that should inform every pricing decision you make. The question isn’t “should we use value-based pricing or cost-plus pricing?” The question is: “does our licensing metric, our packaging structure, and our price points reflect how customers actually derive value?”

Cost-plus pricing creates an artificial ceiling. When your software saves a customer $100,000 annually but your costs are $1,000, cost-plus with a healthy margin might give you $5,000. Value-based thinking says the price should reflect the customer’s outcome, not your cost structure.

Competitive pricing creates a downward spiral. Everyone benchmarks against each other, prices converge, and customers choose on features or relationships instead of value delivered. Nobody wins.

Both are symptoms of the same problem: starting with the wrong inputs. Cost-plus starts with your costs. Competitive starts with someone else’s prices. Value-based starts with how your customers use your product and what outcomes it creates for them. It’s not a third option alongside the other two — it’s the foundation that makes any pricing architecture defensible.

The licensing, packaging, and pricing trifecta is how this works in practice. Your value metric (licensing) captures how customers receive value. Your packages group capabilities by how customer groups derive value. Your price points reflect validated willingness to pay for that value. All three must be grounded in value — not one model among many you pick from a list.

From Value Principle to Operational System

Understanding value-based pricing as a principle is the easy part. Operationalizing it — making it work across every deal, every renewal, every new product release — is where most companies stall.

The gap between “we believe in value-based pricing” and “our pricing architecture actually reflects customer value” is where the work lives. Closing that gap requires three things:

Data that shows how value is actually delivered. Not survey responses. Not analyst benchmarks. Your transaction data, your usage patterns, your renewal outcomes. When a multi-location software company we worked with moved from per-location licensing to a usage-based model, the insight didn’t come from customer interviews — it came from analyzing location-based data and discovering that usage characteristics, not location count, determined how customers derived value. The transition faced initial sales team pushback, but the company is already gaining insights into willingness to pay based on actual behavior.

A system that evolves with the product. One of our clients built their value proposition around coordinating scheduled downtime in oil refineries. We discovered that while downtime minimization was the stated value, the deeper benefit was maximizing safety across the refinery’s partner ecosystem. That insight drove premium pricing for higher-value features and eventually spawned new mobile products generating entirely new revenue streams. A static pricing model would have missed this. Continuous monetization — where pricing evolves with the product roadmap — is what makes value-based pricing sustainable rather than a one-time exercise.

Software purpose-built for the process. Spreadsheets can’t track how deal configurations evolve over time, how customer groups respond to packaging changes, or where your pricing model diverges from actual value delivery. LevelSetter was built to close this gap — modeling licensing metrics, packaging structures, and pricing scenarios against real transaction data, then monitoring performance as the market responds. The Define, Deploy, Defend framework mirrors how value-based pricing actually works: define the architecture, deploy with controlled exposure, defend through continuous monitoring and adjustment.

Value-based pricing isn’t something you implement once. It’s a discipline — a way of making licensing, packaging, and pricing decisions that stays anchored to how customers actually use your product and what outcomes it creates. The companies that do this well don’t just capture more revenue. They build pricing architectures that are defensible at exit, resilient under competitive pressure, and trusted by the customers who pay them.

If your current pricing was built on cost assumptions, competitor benchmarks, or survey data that hasn’t been validated against real deals — talk to a B2B software pricing strategist.