April 12, 2026

The Value Metric Decision: How to Choose What You Charge For

TL;DR — The industry debates pricing models — subscription vs. usage-based vs. hybrid — when the real decision is the value metric: what unit of measurement does your price attach to? A good metric is understandable by the buyer, estimatable before signing, and diverges from your cost structure so margins improve as value grows. The wrong metric trains customers to suppress adoption, forces pilots that stall deals, and creates revenue volatility in down cycles. One client went from $2,500 renewals to $600,000 by changing nothing but the metric. This article walks through how to make that decision using our licensing, packaging, and pricing framework.

One of our clients sold development licenses — a single metric, priced at $2,500, renewed every three years. After we restructured their licensing, packaging, and pricing, that same customer renewed for $600,000. The product didn’t change. The features didn’t change. What changed was the value metric — the unit of measurement their price attached to.

Not a 10% price increase. A 240x difference from choosing the right unit of measurement.

That’s what this article is about. Not which “pricing model” to pick. Not whether to go “usage-based” or “subscription.” The decision underneath all of those — the one that determines whether the model works or doesn’t.

What is a value metric?

A value metric is the unit of measurement your price attaches to. It’s the thing the customer sees on the invoice that connects what they pay to what they get.

Per-seat is a value metric — it assumes each person who uses the software receives roughly equal value. It’s the default for most SaaS pricing models and it works when the value is genuinely tied to human access. “Per-transaction” and “per-outcome” are concepts, not metrics — they’re only real when you define what the transaction is and what the outcome is. Per-invoice-processed is a metric. Per-simulation-run is a metric. Per-resolved-ticket-that-stayed-resolved is a metric. “Per-transaction” is a placeholder that means you haven’t finished the work yet.

When people say “outcome-based pricing,” they’re describing a metric choice. When people say “usage-based pricing,” that’s also a metric choice. The model name is shorthand for which metric you picked. The industry debates models — subscription vs. usage-based vs. hybrid — when the real question is the metric hiding inside each one.

This matters — especially in AI software pricing where cost structures are volatile — because the same “model” with two different metrics produces radically different businesses. A subscription wrapping per-seat and a subscription wrapping per-workflow are both “subscription pricing.” One trains customers to minimize their footprint. The other grows revenue as the customer gets more value. Same model, opposite outcomes. The metric is the variable.

The metric spectrum: from activity to outcome

Think of value as rain falling on a mountain. The rain collects at the peak — that’s your product’s raw capability. As water flows downhill it coalesces into a river that grows wider and more powerful. That river exits the mountain into the customer’s fields, where it irrigates crops and generates their harvest.

Your value metric is a metering point on that river — where you choose to gate and count the flow. Position it too far up the mountain, near the rain, and you’re giving away value for almost nothing. The customer loves you but you can’t sustain the business. Position it too far downstream, in the customer’s fields, and you’re taking a share of their harvest — and they’ll argue that’s not fair, because the crop required their soil, their labor, their sun. You’re claiming credit for value you didn’t fully create.

But position the gate right where the river leaves the mountain and enters their land — right at the boundary between what your product produced and what the customer does with it — and you have a metric that works for both sides. The software company gets paid fairly for the value it creates. The buyer pays fairly for the value they receive. Neither side is subsidizing the other.

That’s the metric selection problem in one image. Now let’s make it concrete.

Consider an AI-powered support platform. You could charge on:

  • Tickets created — the customer pays for activity. Low risk to you. But the customer didn’t ask for tickets to be created. They asked for problems to be solved.
  • Tickets closed — closer to value. But “closed” doesn’t mean “resolved.” The customer’s team might close tickets without solving them.
  • Tickets resolved — genuinely valuable. The problem was fixed. But who verifies resolution? And what happens when the fix didn’t stick?
  • Tickets resolved and stayed resolved for six months — the customer’s desired outcome. Maximum value captured. But now your revenue depends on whether the customer’s own team follows through on the resolution, whether the underlying product issue gets fixed, whether the customer’s own customers stop encountering the problem.

Each step closer to the outcome captures more value. Each step also takes on more risk. And the risk you’re absorbing isn’t just yours anymore — it’s the customer’s execution risk.
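
To make the trade concrete, here is a minimal sketch with hypothetical volumes for a single account. Hold the revenue target constant, and each step down the spectrum shrinks the billable base: the unit price has to climb, and the riskiest metric ends up carrying the highest price on the fewest, least controllable units.

```python
# Hypothetical monthly volumes for one account of the support platform.
monthly_volume = {
    "tickets created": 10_000,
    "tickets closed": 8_500,
    "tickets resolved": 7_000,
    "resolved, stayed resolved": 5_600,  # depends on the customer's follow-through
}

target_monthly_revenue = 25_000  # assumed revenue goal for this account

# Same revenue target, shrinking billable base: the unit price climbs
# at every step toward the outcome.
for metric, volume in monthly_volume.items():
    unit_price = target_monthly_revenue / volume
    print(f"{metric:28s} {volume:6,d} units  ->  ${unit_price:.2f}/unit")
```

The arithmetic is trivial. The judgment call is whether you can stand behind that last number when the customer’s own execution determines it.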

The art of metric selection is finding the point on that spectrum where the metric is close enough to the customer’s outcome that they recognize the value, but not so far downstream that you’re absorbing execution risk that belongs to them. That’s a judgment call — not a formula — and it’s different for every product and every customer group.

Some consultancies present this spectrum as a progression — start with resources, graduate to activities, then outputs, then outcomes. As if every company should climb the ladder over time. That’s a manufacturing maturity model applied to software. It misunderstands how B2B pricing strategies actually work in practice. It comes from industries where contracts are 10-20 years, change is glacial, and buyers resist any pricing innovation. Software doesn’t progress this way. A startup can launch on outcome-based pricing from day one. A company that’s been selling per-seat for a decade can restructure to a value-aligned metric in a single contract cycle. The position on the spectrum depends on your product, your customer, and how much execution risk you can absorb — not on your maturity stage.

Three properties of a good metric

Of course the metric has to scale with the value the customer receives — the foundation of value-based pricing — and we’ve been doing this work since the first software pricing projects in the 1980s, before most of today’s pricing consultancies existed. It’s common knowledge now. But scaling with value is table stakes. The properties that actually separate a good metric from a bad one are less intuitive.

1. The buyer can understand it

The customer should look at the metric and intuitively grasp what they’re paying for. “Simulation runs” works. “Acme Consumption Units” doesn’t — now you’ve invented an acronym, and you have to explain what it means, how it maps to what the customer actually does, and why one workflow costs 3 ACUs while another costs 12.
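
The friction shows up the moment a buyer tries to budget. Here is a hedged sketch of the two-step conversion an invented unit forces; the workflow names, credit weights, and prices are all hypothetical.

```python
# With an invented unit, the buyer needs two vendor-defined mappings:
# workflow -> credits, then credits -> dollars.
CREDITS_PER_WORKFLOW = {"basic sync": 3, "full reconciliation": 12}  # hypothetical
PRICE_PER_CREDIT = 0.40  # hypothetical

def monthly_cost_in_credits(expected_runs: dict[str, int]) -> float:
    credits = sum(CREDITS_PER_WORKFLOW[w] * n for w, n in expected_runs.items())
    return credits * PRICE_PER_CREDIT

# With a direct metric, there is one number the buyer already tracks.
def monthly_cost_direct(simulation_runs: int, price_per_run: float = 1.50) -> float:
    return simulation_runs * price_per_run

print(monthly_cost_in_credits({"basic sync": 400, "full reconciliation": 50}))  # 720.0
print(monthly_cost_direct(450))  # 675.0
```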

Some companies get away with invented consumption units in the infrastructure layer where the buyer is technical and expects abstraction. In the application layer, where the buyer is a business leader trying to budget for next year, it’s a dealbreaker. Research on AI pricing in healthcare found that 76% of decision makers rejected pricing based on technical usage metrics like token counts or API calls — they wanted metrics tied to clinical outcomes, not infrastructure consumption. If even healthcare buyers who deal with technical complexity daily reject abstract metrics, your enterprise software buyer certainly will.

If the metric requires your sales team to explain what it means, you’ve introduced friction that compounds at every stage of the deal.

This is one of the risks of credit-based pricing. Credits add an abstraction layer between the buyer and the value. When Cursor shifted from a simple request-based model to usage-based credits in June 2025, users ran out of their allotment after just a few prompts on advanced models. Billing complaints flooded Reddit, Trustpilot, and G2. The CEO issued a public apology. The problem wasn’t the price — it was that nobody could predict what a credit would cost them in practice. Some companies make credits work at scale, but the pattern is consistent: when the buyer can’t convert credits to dollars in their head, trust erodes fast.

2. The buyer can estimate it

Understanding the metric isn’t enough — the buyer also needs to predict what it will cost them. A metric that requires a spreadsheet to estimate is a metric that slows deals.

We saw this play out at a client whose entire industry priced on a granular unit of output — a metric that seemed logical because value scaled with volume. But we could see the market leader struggling to close enterprise deals. Their pipeline was stalling, and when deals did close, buyers were negotiating flat fees instead of accepting the per-unit model. When we dug into why, an enterprise buyer told us they ran nearly a thousand projects across internal teams and external stakeholders — and they had no earthly idea how many units of output that would generate. The metric was understandable. It just wasn’t estimatable.

By watching the market leader’s selling motion break down, we cut through the industry noise and moved our client to a metric one level up — the project itself, not the granular output inside it. A unit the buyer could count and predict. In a single 45-minute working session, LevelSetter swapped the metric, modeled scenarios against the client’s actual usage data, determined optimal price points, and computed customer-by-customer impacts at the line-item level for every legacy account. Fewer variables, simpler quotes, faster deals. The industry trend was wrong, and the market leader’s pipeline told us so before any research study could have.
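
The underlying math of that session is conceptually simple. Here is a minimal sketch of the per-customer comparison, with hypothetical prices and account names; the real model runs against actual line-item usage data.

```python
from dataclasses import dataclass

@dataclass
class Account:
    name: str
    annual_units: int      # granular output units under the old metric
    annual_projects: int   # the proposed, countable metric

OLD_PRICE_PER_UNIT = 0.85      # assumed legacy price
NEW_PRICE_PER_PROJECT = 1_200  # assumed candidate price

def transition_impact(accounts):
    """For each legacy account: spend under the old metric vs. the new one."""
    rows = []
    for a in accounts:
        old = a.annual_units * OLD_PRICE_PER_UNIT
        new = a.annual_projects * NEW_PRICE_PER_PROJECT
        rows.append((a.name, old, new, (new - old) / old))
    return rows

for name, old, new, delta in transition_impact(
    [Account("Acme", 1_400_000, 950), Account("Globex", 220_000, 310)]
):
    print(f"{name:8s} old=${old:>10,.0f}  new=${new:>10,.0f}  change={delta:+.0%}")
```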

When the buyer can’t estimate costs, they ask for a pilot — “let us try before we commit.” These buyer-initiated pilots are dangerous. They often lack executive sponsorship. They stall in mid-level committees. They push the real decision point down the road — deferring the crisis moment instead of resolving it. And they slow time to value, which means the ROI story you need for the expansion conversation takes longer to materialize. A metric that’s hard to estimate doesn’t just slow the first deal — it weakens every deal after it.

This is different from how a software company rolls out a new pricing model to its existing customer base. Some consultancies recommend “controlled pricing pilots” — testing new pricing with a subset of customers before broad launch. The intent is reasonable: validate before you scale. But every pricing model involves assumptions that need validation. The question is how you validate.

Our approach: start with new business. New customers have no legacy contract, no anchor price, no expectation to manage. They evaluate your new pricing on its merits. Roll out there first, learn from real deal dynamics, adjust what needs adjusting. Then bring the model to renewals — what we call the customer transition. We model every transition: who pays more under the new model, who pays less, and by how much. That picture is available every time we iterate, which means rollout plans for legacy customers are informed by real data, never rushed. Some customers transition at their next renewal. Some get a phased migration over two cycles. The plan reflects the reality of each account, not a blanket cutover date that treats every customer the same.
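
A sketch of how the rollout plan might key off that modeled impact. The 30% threshold and the two-cycle split are illustrative assumptions, not a rule; the point is that the plan is computed from the data, account by account.

```python
# (account, old_annual, new_annual, pct_change) rows, e.g. from the
# impact model above; accounts and numbers here are hypothetical.
rows = [
    ("Acme",   1_190_000, 1_140_000, -0.04),
    ("Globex",   187_000,   372_000,  0.99),
]

PHASE_THRESHOLD = 0.30  # assumed: increases above 30% get a phased migration

def transition_plan(rows):
    plan = {}
    for name, old, new, delta in rows:
        if delta > PHASE_THRESHOLD:
            midpoint = old + (new - old) / 2  # split the increase over two cycles
            plan[name] = [("renewal cycle 1", midpoint), ("renewal cycle 2", new)]
        else:
            plan[name] = [("next renewal", new)]
    return plan

for name, steps in transition_plan(rows).items():
    print(name, "->", "; ".join(f"{label}: ${amt:,.0f}" for label, amt in steps))
```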

There’s nothing wrong with being cautious — pricing changes carry real risk and a deliberate rollout is smart. At BambooHR (see our case studies), we rolled out the new pricing model in a round-robin approach across the sales team — start small, learn, iterate, accelerate. Their CRO, Jeff Adams, was increasing the sales forecast 15% month over month while the rollout was still underway, and they beat their plan for the first 18 months. That’s what a controlled rollout looks like when the metric is right and the transition is modeled: confidence builds from data, not from a pilot committee.

But too often, caution becomes a point-in-time exercise delivered in a slide deck or a 60-page document that sits on a shelf until the next pricing crisis. We’d rather model, validate, and iterate inside a system that’s still running the next time you need to make a pricing decision.

3. It diverges from your cost structure

The ideal metric scales with value faster than your costs scale with delivery. If your metric tracks your costs too closely — per-API-call pricing when your biggest expense is API calls — you’ve built a pricing model with a ceiling on your margins.

An AI contract review tool costs pennies per document in inference but saves legal teams hours of associate time at $300+/hr. Price per contract reviewed and your margin expands with every deal. Divergence works when your product transforms cheap compute into expensive human judgment.

It fails when you’re reselling compute with a UI. If you charge per image generated and your biggest cost is per-image GPU time, your metric and your cost move in lockstep. You’re left with a thin margin that shrinks every time your provider changes pricing. Research on competitive software pricing models confirms this structurally: in competitive markets, the monitoring costs of tracking granular consumption can outweigh the revenue benefits of usage-based pricing, making fixed pricing more profitable in many scenarios.
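
The contrast is easy to see in unit economics. A hedged sketch with assumed numbers for both cases:

```python
# AI contract review: the price attaches to value delivered (a reviewed
# contract) while the cost attaches to inference. The two diverge.
price_per_contract = 40.00          # assumed price per reviewed contract
inference_cost_per_contract = 0.30  # assumed inference cost
review_margin = 1 - inference_cost_per_contract / price_per_contract

# Reselling image generation per image: price and cost attach to the
# same unit, so the margin is locked and hostage to the GPU provider.
price_per_image = 0.08
gpu_cost_per_image = 0.05
image_margin = 1 - gpu_cost_per_image / price_per_image

print(f"contract review margin:  {review_margin:.1%}")  # ~99%, widens with value
print(f"image generation margin: {image_margin:.1%}")   # ~38%, shrinks if GPU cost rises
```

Raise the GPU cost a cent and the second margin loses more than twelve points; raise the inference cost a cent and the first barely moves.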

For AI products, this divergence is harder to maintain because cost structures are uniquely unstable. Some pricing consultancies argue that falling inference costs make this easier over time — that margins naturally widen as tokens get cheaper. This reflects a surface-level understanding of how AI systems actually evolve. It’s only true at the base model layer. Production stacks get more expensive as companies add guardrails, RAG pipelines, multi-model orchestration, and verification layers to deal with hallucinations. Every component you add to make your AI reliable adds cost that the headline “tokens are cheaper” doesn’t capture.

And we haven’t seen the next cost shock yet. The emergence of world models — building on work from researchers like Yann LeCun and Gary Marcus — could fundamentally restructure the compute requirements for AI systems. These architectures reason about the physical world, not just text patterns, and their inference costs may look nothing like today’s transformer economics. A pricing metric designed around “costs only go down” will break — possibly sooner than most companies expect.

Metrics that discourage use

These three properties aren’t the only dimensions. The right metric also has to be fair across the range of use cases and — critically — it shouldn’t discourage use.

A metric that makes customers hesitate before clicking “run” because they’re worried about the bill suppresses adoption and makes the product feel like it’s nickel-and-diming. It’s one of the worst pricing errors a software company can make. Per-transaction pricing for a product the customer should use hundreds of times per day creates a mental tax on every action. The customer starts rationing their usage to control costs, which means they’re not getting the full value of the product, which means the renewal conversation is harder than it should be.

We had a client that charged on bandwidth — customers bought a 2GB or 5GB data plan. The software worked well, customers used it heavily, and usage climbed toward their caps. Then the warnings started. Customers told us they had designated someone as the “[Company Name] Tsar” — a person whose job was to monitor data usage and tell colleagues to stop using the software when they approached their cap. We heard this from multiple customers independently. The pricing model had created a new role inside the customer’s organization whose sole purpose was to suppress adoption of the product.

Customers did what rationed users always do: they shifted routine work to a free alternative. The free tool took root in the workflow. Over time it cannibalized the client’s usage — customers stayed under their data caps, using the paid product only for premium projects. When Covid hit, customers who were already underusing their allotments downgraded en masse. Revenue collapsed.

This isn’t an anomaly. A 2025 survey found that 41% of software buyers cite unpredictable costs as their primary concern with usage-based pricing models. Buyers instinctively avoid metrics that create consumption anxiety — and when they can’t avoid them, they create internal roles to manage it.

This is the double-edged sword of usage-based metrics. Usage sounds great when you’re growing — revenue scales with adoption. But in down cycles, usage-based metrics can force capital raises at exactly the wrong time. And when the metric itself discourages use — as bandwidth caps did here — you’ve built a pricing model that actively pushes customers toward free alternatives. The metric didn’t just fail to capture value. It destroyed value that already existed.

The principle is simple: if the metric creates friction between the customer and the value, it’s the wrong metric. The best metrics are invisible during use and visible only on the invoice — the customer uses the product freely, and the bill reflects the value they extracted.

Metrics aren’t static

Today’s metric might work for your first wave of customers. But as the cost picture evolves, as your product capabilities expand, and as you learn which customer groups extract the most value, the metric may need to change.

This is why pricing isn’t a project with a deliverable. It’s an iterative process — continuous monetization — where the metric, the packaging, and the pricing all evolve as the business learns. Companies that build in this flexibility from the start — avoiding rigid CPQ systems that hardcode pricing logic, choosing B2B pricing software and entitlement solutions that can adapt to metric changes without a re-implementation — will be better positioned than companies that lock themselves into a model and hope it ages well.

Tie pricing decisions to your product development lifecycle. Every new feature, every new capability, every new customer group should trigger a pricing review. Not a pricing project — a review. Does the metric still capture the value this feature delivers? Does the packaging still make sense for this customer group? Do the price points still produce rational net prices at all volumes?

When pricing is embedded in the product lifecycle, you never have to do a standalone pricing project again. That’s the difference between hiring a pricing strategy consultancy for a one-time project and building pricing as a discipline.

The boardroom conversation

Metric selection isn’t a product team decision or a finance exercise. It’s an executive decision that sometimes involves the board.

Choosing a metric that’s too close to the customer’s outcome can expose the company to revenue volatility tied to factors outside its control — a risk that’s especially acute in enterprise SaaS pricing where deal values are large enough to move the quarterly forecast. Choosing one that’s too far upstream can cap revenue growth at exactly the moment the product is delivering the most value. Metric selection hits right at the center of business model risk.

This is why the metric decision belongs in the boardroom, not in a pricing spreadsheet. The executives and board need to understand the risk profile of the metric they’re choosing — and they need choices, not a single recommendation from a consultant who says “trust me.” Simulate various models. Show the customer transition impacts at each step. Let the leadership team pick the risk level they’re comfortable with. This is what LevelSetter was built for — running those simulations in milliseconds, not months. “Trust me” pricing is for the COBOL days.
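
In miniature, the simulation looks like this. The numbers are assumed: the same book of business priced on an upstream metric (flat per-seat) versus a downstream one (per durable resolution) whose volume swings with factors the vendor doesn’t control.

```python
import random

random.seed(7)

SEATS, SEAT_PRICE = 2_000, 600                 # upstream: $1.2M, fixed
EXPECTED_OUTCOMES, OUTCOME_PRICE = 60_000, 22  # downstream: $1.32M expected

def simulate(trials: int = 1_000):
    upstream, downstream = [], []
    for _ in range(trials):
        swing = max(0.0, random.gauss(1.0, 0.25))  # +/-25% typical demand swing
        upstream.append(SEATS * SEAT_PRICE)        # immune to the swing
        downstream.append(swing * EXPECTED_OUTCOMES * OUTCOME_PRICE)
    return upstream, downstream

up, down = simulate()
print(f"upstream:   mean ${sum(up) / len(up):,.0f}, no variance")
print(f"downstream: mean ${sum(down) / len(down):,.0f}, "
      f"range ${min(down):,.0f} to ${max(down):,.0f}")
```

The board isn’t picking a number. It’s picking which of those distributions it can live with.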

There is no universal metric

There is no code to crack. No competitor to copy, no best practice to refer to. The right metric for one company is not necessarily the right metric for another, even if they are direct competitors. Customer workflows vary — you don’t have the exact same footprint of customers as your competitor, which means the value your product delivers, and the way it delivers it, is unique to your business.

That’s why you never copy a competitor’s pricing model. (For more on the dangers of situational pricing and why ad hoc approaches fail, see our analysis.) And it’s why you never hire a consultancy that looks at your competitors, defines the “best practice” as whatever metric everyone else is using, and tells you to adopt it. That approach guarantees you’ll be average — priced like everyone else, competing on features and discounts instead of value.

How we approach metric selection

The metric decision is the first step in our licensing, packaging, and pricing framework — the trifecta we use on every engagement.

Licensing defines the metric, the terms and conditions, and the architecture of the revenue stream. This is where the metric selection lives.

Packaging groups capabilities around the metric based on how different customer groups derive value — not how the product is organized internally. (This is where software monetization becomes operational — you’re building the structure that revenue flows through.)

Pricing defines the numbers — the pricebook, list and net prices across all configurations and volumes. This is where you simulate outcomes, check demand response, and give the board choices.

The three layers integrate. When they’re coherent, the customer understands what they’re paying for, sees a package that matches their use case, and pays a price attached to a metric that reflects the value they receive. When any layer is missing, salespeople paper over the gaps with ad hoc discounting.

The metric is the foundation everything else sits on. Get it right and the model follows. Get it wrong and no amount of packaging or discounting fixes it.

If you’re making this decision — for an AI product or any software product where the old model isn’t working — see how we approach it or start the conversation. We’ll help you find the metric. Then we’ll build the licensing, packaging, and pricing around it.
