April 28, 2026

Outcome-Based vs Consumption-Based AI Pricing | SPP

TL;DR. On April 16 we documented the six structural flaws of credit-based AI pricing, including the vendor-controlled conversion rate that becomes the load-bearing pressure point at renewal. Ten days later, the trade press bundled Atlassian and HubSpot together as the latest names in an “AI flat fees retreat,” citing cases like the 6,000-seat Atlassian Rovo customer who renegotiated a credit cap and price-protection clause to avoid roughly $180,000 of projected overage. The headline framing collapsed the actual decision underneath: outcome-based and consumption-based pricing are not different versions of the same shift; they are opposite bets on the licensing-model axis. Atlassian metered attempts (credits). HubSpot metered successes (per-resolved conversation, effective April 14). The frame that makes the difference visible is the value metric, not the pricing model.


The framing error in the headline matters because it hides the actual decision. “Moving away from flat AI fees” describes a direction, not a structure. A team reading the headline learns that per-seat is under pressure for AI features. They do not learn what to do instead, because two of the most-watched B2B SaaS vendors picked opposite answers to the same question, and the press treated those answers as a unified trend. The frame that makes the difference legible is the one we have been writing into Pricing Model vs Value Metric and the three pricing decisions for months. Pricing changes that look like pricing-model moves are usually licensing-model moves wearing the wrong label, and the right place to start is the SaaS pricing models architecture underneath.

What actually changed in the last 30 days

Atlassian: a packaging move and a licensing-model move, in the same announcement

Atlassian made two simultaneous decisions on Rovo that get reported as one. The first was a packaging change: Rovo Search, Chat, Agents, and Rovo Studio (their no-code agent and automation builder) moved from being a separate add-on into Cloud Standard, Premium, and Enterprise subscriptions. Non-subscribers pay $5 per user per month as a standalone subscription. That is the “AI as structural part of the offering, not a paid add-on” decision we unpacked in the AI-as-add-on coverage of credit-based pricing. Atlassian likely read the dismal attachment rates of AI-as-add-on packaging across the industry and chose to bundle.

The actual subscription price for the bundle sits at the Teamwork Collection layer rather than at any individual Cloud product. Teamwork Collection Premium is priced at approximately $29 per user per month for 50–250 users, scaling down to approximately $22.63 per user per month for 1,000 users on annual billing. Teamwork Collection Enterprise is custom-priced based on user count, feature requirements, and commercial context. That seat-based subscription price is the floor; everything Rovo-related (bundled credit allowances, overage charges, Rovo Dev consumption) sits on top of it. The structural implication for buyers is significant. Organizations evaluating Rovo costs need to model the full Teamwork Collection price against their current Atlassian subscription rather than treating Rovo as an additive line item on top of existing spend. The “what does Rovo cost” question collapses into the broader “what does our Atlassian subscription become.” The answer depends on which Cloud product family the organization is on today, and moving up editions to access higher credit allowances adds a separate cost dimension on top of that.
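The “what does our Atlassian subscription become” question is straightforward to model in a few lines. The sketch below uses the per-seat figures cited above; the tier boundaries and the linear interpolation between the two published price points are illustrative assumptions, not Atlassian’s actual rate card.

```python
# Illustrative Teamwork Collection Premium cost model on annual billing.
# The two published anchor points (~$29 at 50-250 seats, ~$22.63 at 1,000
# seats) come from the article; interpolation between them is an assumption.

def teamwork_premium_seat_price(seats: int) -> float:
    """Approximate per-user monthly price on annual billing."""
    if seats <= 250:
        return 29.00          # ~$29/user/month for 50-250 users
    if seats >= 1000:
        return 22.63          # ~$22.63/user/month at 1,000 users
    # Linear interpolation between the two published points (assumption).
    slope = (22.63 - 29.00) / (1000 - 250)
    return 29.00 + slope * (seats - 250)

def annual_delta(seats: int, current_per_seat: float) -> float:
    """Annual cost change vs. the organization's current per-seat spend."""
    return (teamwork_premium_seat_price(seats) - current_per_seat) * seats * 12

# A 1,000-seat org currently paying a hypothetical $16/user/month would model:
# (22.63 - 16.00) * 1,000 seats * 12 months = $79,560 of incremental annual spend
increment = annual_delta(1000, 16.00)
```

The point of the exercise is the one the paragraph makes: the delta is computed against the organization’s entire current subscription, not against a Rovo line item.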

But the bundle is qualified. Even though Rovo access is included in Cloud subscriptions, consumption is still metered in credits, and the meter operates with two distinct levers. The first sets the scope of what is available within the base subscription. Atlassian’s own usage-limits documentation publishes per-product credit allowances that differ by both edition and product. Standard, Premium, and Enterprise plans of Jira, Confluence, and Jira Service Management each include 25, 70, and 150 credits per user per month respectively. Teamwork Collection plans, where Rovo lives natively, include 250, 700, and 1,500 credits per user per month at the same three editions, an order of magnitude higher than the per-product Cloud allowances. Critically, all credits pool at the organizational level rather than being assigned per user. A 100-seat Teamwork Collection Premium organization draws from a shared 70,000-credit monthly pool rather than 700 per individual user. Structurally, the meter behaves more like a concurrent-user metric than a per-user license: a shared resource the organization rations internally. Whether Atlassian provides administrative governance to allocate, throttle, or cap the pool by user, team, or product surface is not detailed in the public materials we reviewed, and procurement teams will need to ask the question explicitly during evaluation rather than assume one answer or the other.

The second lever sets the cost of consumption inside the bundled pool. Rovo Search, summaries, chart insights, and definitions are free. Rovo Chat and standard Rovo Agent interactions consume 10 credits per request. Deep Research consumes 100 credits per request. Unused credits do not bank into the next period.
Atlassian has stated publicly that the plan is to “monetize it with consumption-based pricing over time,” and Atlassian’s own documentation confirms that extra usage above the allowance is not currently billed — the company has committed to providing at least 90 days’ notice and requiring explicit customer opt-in before overage charges activate. Overage is therefore an anticipated future state, not a current one.
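The org-pool mechanics above can be sketched directly. The per-request costs (free Search, 10 credits for Chat/Agent requests, 100 for Deep Research) and the 700-credits-per-user Premium allowance come from the figures cited in this piece; the monthly usage mix is a made-up illustration of how concentrated consumption can outrun a shared pool.

```python
# Rough sketch of the org-pooled Rovo credit math for a 100-seat Teamwork
# Collection Premium organization. Usage volumes are hypothetical.

SEATS = 100
POOL = SEATS * 700            # 70,000 shared monthly credits, pooled org-wide

COST_PER_REQUEST = {
    "search": 0,              # Rovo Search, summaries, definitions: free
    "chat_or_agent": 10,      # Rovo Chat / standard Agent interaction
    "deep_research": 100,     # Deep Research request
}

def monthly_consumption(requests: dict) -> int:
    """Total credits drawn from the shared pool for a month of usage."""
    return sum(COST_PER_REQUEST[kind] * n for kind, n in requests.items())

# Hypothetical month: broad chat use plus one team leaning on Deep Research.
usage = {"search": 20_000, "chat_or_agent": 5_000, "deep_research": 250}
used = monthly_consumption(usage)   # 5,000*10 + 250*100 = 75,000 credits
over = max(0, used - POOL)          # 5,000 credits above the bundled pool
```

A few hundred Deep Research requests from one team are enough to push a 100-seat pool into the (future) overage zone, which is exactly the concentration risk the paragraph describes: the buyer cannot know in advance whether usage will fit the pool.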

What Atlassian shipped is therefore a third packaging pattern, distinct from both pure bundle and pure add-on: forced bundled consumption with anticipated metered overage. The credit allowance is included in the subscription whether or not the customer uses it. A buyer signing a Teamwork Collection Premium contract for 1,000 seats pays for 700,000 monthly credits regardless of whether the team actually consumes that volume. Atlassian has presumably modeled typical usage and sized the allowances to match, but the modeling is opaque to the buyer. A customer has no way to know whether their usage will fit inside the bundled pool, whether usage will be concentrated in a way the pool cannot absorb, or whether most of the bundled allowance will go unused. The bundle is intended to address the AI-as-add-on attach-rate problem that we documented across the application-layer market: sophisticated buyers say no to capabilities they have not yet experienced and cannot estimate the value of. But the hybrid structure does not eliminate the problem cleanly. It transfers the friction. The attach-rate question (will the buyer pay for AI as a standalone line item?) becomes a renewal question (will the buyer accept the credit conversion rate, the future overage exposure, and the metric opacity?). The bundle delivers the AI capability into the customer’s environment whether they asked for it or not, while the meter preserves the consumption-based monetization that pure-bundle packaging gives up. Both the attach-rate problem and the metric-opacity problem now live in the same contract, mediated through credit-cap negotiation rather than line-item negotiation. Atlassian’s billing documentation states explicitly that overage is “in future” and will require 90 days’ notice plus an explicit opt-in before charges activate. That is a policy commitment, not a contract term. The distinction matters at renewal. 
A widely circulated case study from a software-licensing advisory firm describes a 6,000-seat Atlassian Cloud Premium customer that was modeled to face roughly $180,000 of projected future overage exposure over 24 months once Atlassian activates overage billing, and that successfully renegotiated an Enterprise upgrade with a credit-cap and price-protection clause in advance of activation. The customer did not push back on the bundled-Cloud price or the $5 standalone line item. They pushed back on the credit conversion rate, which is the lever the vendor controls unilaterally and the lever the policy-not-contract framing leaves open.

Atlassian already ships three pricing dimensions and two credit currencies

Standard Rovo is not Atlassian’s only credit-based AI product. Rovo Dev (the coding-focused capability inside Atlassian’s developer tooling) operates on a separate consumption-based model: $0.01 per credit, 2,000 credits per developer per month as the standard allocation, and credit consumption that varies by interaction complexity. Atlassian’s Rovo Dev pricing page describes the capability as “powered by pairing Atlassian’s Teamwork Graph with frontier AI models” and notes that “the amount of credits an interaction uses depends on the complexity of that interaction” without publishing per-model rates. So the same Atlassian buyer is already tracking three separate pricing dimensions: a per-seat Teamwork Collection subscription that scales by user count, a standard Rovo credit pool that is org-pooled and metered at flat per-request rates with no published per-credit dollar conversion, and a Rovo Dev credit allocation that is per-developer, complexity-scaled, and priced at $0.01 each. Two of those three dimensions are credit-based, and the two credit “currencies” do not behave the same way. A fourth credit currency is likely to arrive when Atlassian extends Rovo into another product surface.
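Of the two credit currencies, only Rovo Dev converts to dollars directly, because only Rovo Dev publishes a per-credit rate. A minimal sketch of that currency’s unit economics, using the $0.01-per-credit and 2,000-credits-per-developer figures cited above (the consumption scenario is hypothetical):

```python
# Rovo Dev unit economics: the one Atlassian credit currency with a published
# dollar conversion. Standard Rovo credits have no published per-credit rate,
# so the equivalent calculation cannot be done for the other currency.

ROVO_DEV_PRICE_PER_CREDIT = 0.01      # dollars per credit
ROVO_DEV_MONTHLY_ALLOCATION = 2_000   # credits per developer per month

def rovo_dev_overage_cost(credits_used: int, developers: int) -> float:
    """Dollar cost of Rovo Dev consumption above the standard allocation."""
    included = ROVO_DEV_MONTHLY_ALLOCATION * developers
    return max(0, credits_used - included) * ROVO_DEV_PRICE_PER_CREDIT

# Hypothetical: 50 developers burning 120,000 credits in a month.
# Included = 100,000 credits; overage = 20,000 credits -> $200.00
cost = rovo_dev_overage_cost(120_000, 50)
```

The asymmetry is the point: a buyer can budget Rovo Dev in dollars but can only budget standard Rovo in credits, and the complexity-scaled consumption rates make even the credit forecast telemetry-dependent.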

This is the second structural flaw from our credit-based pricing analysis operationalizing in real time. Credits become incomprehensible at portfolio scale, and the portfolio scale is already inside one vendor before procurement has to compare across vendors. The complexity is not theoretical. Peer-reviewed research on AI pricing in B2B settings finds that 71% of decision-makers prefer hybrid pricing models (a predictable base fee paired with a variable component) — but only when the variable component is tied to clinically meaningful, value-aligned units. The same research finds that 76% reject pricing schemes built on technical usage metrics like tokens or API calls as “economically and operationally misaligned” with how their organizations budget. Atlassian’s hybrid implementation puts the variable component on credits, which is the type of technical, opaque metric the same research identifies as the wrong injection point. Software-licensing-advisory coverage describes a “critical source of confusion” in Rovo licensing between AI capabilities included natively in Cloud plans, capabilities that consume from the credit quota, and capabilities that carry usage-based costs once overage activates. That framing is third-party rather than Atlassian’s own positioning, but the structural complexity it describes is the structural complexity buyers actually face. Translation work the vendor leaves to the customer is the lever procurement teams will negotiate against.

The pattern is not new. In a prior SPP engagement with a survey-software vendor, an enterprise customer pushed back on a per-response license that would have required forecasting consumption across roughly 70,000 surveys, internal and external. The customer laughed at the request — not because the metric was complicated, but because the forecasting work the metric required was structurally impossible at that volume. The deal landed as a flat fee. The same dynamic plays out at every credit-cap negotiation we have seen since: when buyers cannot estimate the consumption volume the vendor is asking them to commit to, they push the meter off the contract and onto a fixed structure they can budget against. Enterprise procurement teams will not pattern-match two different credit currencies inside the same vendor as consistent metering. They will read it as opacity that compounds with every product release, and they will negotiate caps on each currency independently at renewal. The selling motion underneath this requires the Atlassian sales force to get enterprise buyers comfortable with a secondary cost dimension they have no way to estimate without telemetry. Expect the bulk of credit-cap conversations at the contract table to come from this layer, not from the standard Rovo bundle.

HubSpot: Breeze from per-conversation to per-resolution

On April 14, HubSpot moved its Breeze Customer Agent from $1.00 per conversation to $0.50 per resolved conversation. The Prospecting Agent moved from a flat monthly charge per contact enrolled to $1.00 per recommended lead. The first move is unambiguous: the metric shifted from a system attempt to a system success. The second move is more subtle. A recommended lead is a system action, not a customer outcome, so Prospecting now sits between consumption and outcome on the metric spectrum rather than at either pole. Two simultaneous metric moves on two related agents in a single announcement is a signal that HubSpot is willing to ship metric experiments at speed — a cadence worth reading separately from the direction of either change.

Why the headline framing breaks

Trade press, pricing newsletters, and most secondary commentary wrapped both moves into “an industry shift away from flat AI fees.” That sentence is technically true and analytically empty. Both vendors changed two of the three decisions in their pricing architecture, but a different two, and the differences matter:

| Decision layer | Atlassian Rovo (Feb 2026) | HubSpot Breeze (April 14, 2026) | GitHub Copilot (June 1, 2026) |
| --- | --- | --- | --- |
| Licensing model (metric, term, allocation) | Changed. Metric: credits at complexity-scaled rates. Allocation: org-pooled monthly allowances. Term: Cloud subscription term unchanged. | Changed. Metric: per-resolved conversation (Customer Agent); per-recommended lead (Prospecting Agent). Term and allocation unchanged. | Changed across all three sub-dimensions. Metric: token-based AI Credits at per-model API rates. Term: annual plans retired (monthly only). Allocation: org-pooled for Business/Enterprise (was per-user). |
| Packaging | Changed. Add-on → forced bundled consumption with metered overage (hybrid pattern, neither pure bundle nor pure add-on) | Changed (modestly). 28-day free trial added on Customer and Prospecting Agents; intra-family pricing-model differentiation across Breeze agents (per-outcome on flagship agents, credits on others) | Changed (substantively). Pro/Pro+/Student sign-ups frozen indefinitely; model availability redrawn across plans (Opus removed from Pro); Pro+ allotment lifted to 5x+ Pro |
| Pricing model (unit price + governance) | Implicit per-credit lever; vendor-controlled conversion rate | Changed. Customer Agent unit price moved $1.00 → $0.50 | Changed. Per-token rate replaces premium-request-unit rate; model multipliers up June 1 for annual holdouts |

Atlassian metered finer-grained attempts and changed how AI sits in the offering structure. HubSpot metered customer-recognizable successes, dropped the unit price, and made smaller packaging adjustments (28-day free trial added, intra-family pricing-model differentiation across Breeze agents). Reporting them as the same trend collapses the decision that pricing teams now have to make about which layers to move and how deeply to move at each one.

GitHub Copilot has since joined this cluster on a June 1, 2026 timeline, becoming the third major AI vendor in 30 days to reprice. Where Atlassian moved at packaging plus licensing-model and HubSpot at licensing-model plus pricing-model, GitHub moved across all three architectural layers in a single announcement. We documented the full structural decomposition in GitHub Copilot’s April 2026 transition to token-based AI Credits. The pattern matters because it forecloses the easier read that any single vendor’s move was idiosyncratic. Three vendors with different starting positions made three different metric bets, but the cluster does not converge on a single direction along the variability axis. Atlassian and GitHub pushed cost variability to the buyer because credits and tokens are consumption metrics that scale with attempts. HubSpot absorbed it, carrying execution risk on every unresolved conversation. The convergence is in the trigger (agentic AI breaking the original pricing model) and in the response (changing the value metric), not in the direction the variability moved.

Are Your Licensing and Packaging Decisions Moving Together?

If your situation resembles Atlassian’s dual announcement, your AI features probably need simultaneous licensing-model and packaging changes. We’ll assess whether your current architecture handles both layers coherently.

The licensing-model decomposition the broader pricing coverage skipped

Consumption and outcome are different bets on where execution risk sits

Consumption-based pricing puts volatility on the buyer. Every interaction the system runs adds to the bill, regardless of whether the system produced the outcome the buyer wanted. The 6,000-seat Atlassian customer flagged $180,000 of projected overage precisely because credit consumption is decoupled from observable success. Outcome-based pricing, and resolution-based pricing as the operational sub-type, puts volatility on the vendor. If Breeze fails to resolve a conversation, HubSpot does not bill for it. The vendor is now exposed to its own product’s resolution rate as a direct revenue lever. Same surface label (“we changed our pricing”), opposite risk transfers underneath.
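The opposite risk transfers can be made concrete with the two Breeze rates cited in this piece ($1.00 per conversation before April 14, $0.50 per resolved conversation after). The volumes and resolution rate below are illustrative, not HubSpot data.

```python
# Where execution risk lands under each metric, at the same conversation volume.

def attempt_metered_bill(conversations: int, rate: float = 1.00) -> float:
    # Consumption metric: every attempt bills, resolved or not.
    # The buyer pays for the system's failures.
    return conversations * rate

def outcome_metered_bill(conversations: int, resolution_rate: float,
                         rate: float = 0.50) -> float:
    # Outcome metric: only successes bill.
    # The vendor eats the cost of every unresolved conversation.
    return conversations * resolution_rate * rate

# Hypothetical month: 10,000 conversations at a 65% resolution rate.
old_bill = attempt_metered_bill(10_000)          # $10,000 (buyer carries risk)
new_bill = outcome_metered_bill(10_000, 0.65)    # $3,250 (vendor carries risk)
```

Note that the gap between the two bills is not just the halved unit price; it is the 3,500 unresolved conversations the vendor now absorbs instead of invoices.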

The metric is a licensing-model decision, not a pricing-model decision

This is where the industry vocabulary keeps tripping over itself. The generic term “pricing model” gets used loosely as a catch-all for any decision that affects the invoice. In SPP’s framework, and increasingly in the field as more vendors run public experiments, the term breaks into three discrete decisions. The metric the price attaches to is a licensing-model decision. The unit price, discount rules, and governance around exceptions are the pricing-model decision (the framework-specific one). Bundling and offering structure is packaging. Atlassian and HubSpot did not change their pricing model in the framework-specific sense. They changed their value metric, which lives in the licensing-model layer upstream. Conflating the two is the mistake we unpacked in Pricing Model vs Value Metric. The industry catch-all label “pricing model” is a wrapper that hides those three decisions inside one undifferentiated noun, and the value metric is the cargo that determines whether the architecture aligns with how the customer derives value.

Where each bet is brittle

Credits inherit six structural flaws we documented in Credit-Based Pricing for AI: exposed cost structure, incomprehensibility at portfolio scale, behavioral economics around prepaid pain, vendor-controlled exchange rates, prepaid cash masking, and a revenue ceiling tethered to infrastructure margins. The Atlassian renegotiation is flaw four operationalizing in public. Resolution-based pricing avoids most of those flaws but inherits a different one: convergence risk. As autonomous resolution rates climb toward the 80 percent benchmark industry analysts now project for 2029, per-resolution and per-conversation pricing collapse on the same monthly bill. The vendor’s per-unit price has to drop in lockstep with rising resolution rate, or the customer’s bill grows faster than their volume, which converts the metric advantage into a renewal headwind eighteen months out.
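The convergence mechanic is simple arithmetic: at a fixed conversation volume, a per-resolution bill grows with the resolution rate alone. The rates below mirror the HubSpot figures cited in this piece; the resolution-rate trajectory is illustrative.

```python
# Convergence risk in numbers: the bill climbs with resolution rate even when
# the customer's volume is flat, which is the renewal headwind described above.

PER_RESOLUTION = 0.50   # dollars per resolved conversation

def monthly_bill(conversations: int, resolution_rate: float) -> float:
    return conversations * resolution_rate * PER_RESOLUTION

VOLUME = 10_000  # flat volume month over month (hypothetical)

bill_today = monthly_bill(VOLUME, 0.65)   # $3,250 at a mid-60s resolution rate
bill_at_80 = monthly_bill(VOLUME, 0.80)   # $4,000 at the projected 80% benchmark
# Same volume, ~23% higher bill: unless the unit price drops in lockstep with
# the resolution rate, the metric advantage becomes a renewal liability.
```

At 100 percent resolution, per-resolution and per-conversation billing count the same events, which is the collapse the paragraph describes.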

There is a meta-signal worth naming separately. Notice how much explanation Atlassian’s pricing has required to make legible: the Teamwork Collection seat price, the bundled credit allowance, the org-pool dynamic, the per-credit dollar conversion that is published in one product surface and not the other, the model-by-model variability inside Rovo Dev, the 90-day overage notice that is policy and not contract. None of those layers are individually unreasonable. The cognitive load is the layering itself. The amount of text it takes to describe a pricing model is a structural signal about where that model sits on the transparency spectrum. Every additional layer of explanation transfers some of the modeling work from the vendor to the buyer, and every layer of modeling work the buyer is asked to do without telemetry is a cap negotiation waiting to happen. HubSpot’s pricing is not simpler because HubSpot is smarter. It is simpler because HubSpot moved fewer layers in this announcement, and the layer they moved (the metric) is one buyers can verify against observable outcomes.

A third brittleness layer compounds the first two and gets less attention across trade press and pricing-newsletter coverage. Software-licensing-advisory firms (which sell negotiation services to enterprise buyers and therefore have a commercial interest in framing buyer-vendor pricing as adversarial) commonly point to Atlassian’s July 31 fiscal year-end as the window when sales reps are most motivated to close, and frame June and July as the period when buyers should push hardest on commercial terms. The framing reflects advisory-firm playbooks rather than Atlassian’s own positioning, but the structural pressure it puts on the architecture is real because the playbook is well-known to procurement teams across the customer base. Treating the fiscal calendar as the negotiation lever is not architectural strength. It is the embrace of wild-west pricing in place of an architecture built on fairness, and peer-reviewed pricing research backs the structural concern directly. Without clear governance policies and authority thresholds, negotiation concessions erode profitability and create inconsistent client treatment.

The buyer-side dynamic compounds the problem in B2B software. Enterprise procurement teams compare notes. Every buyer who closes a discounted deal in a fiscal-year-end push will share the structure with the next buyer, who walks into the next negotiation already armed with the previous buyer’s terms as a reference price. The fiscal-year-end becomes an institutionalized horse-trade that erodes the margin floor across the entire customer base, and the erosion is invisible in any single deal review. The architectural alternative is the discipline of consistent pricing governance: clear authority thresholds, walk-away criteria, and discount structures that hold across customers rather than reset every July. That is the difference between pricing as a discipline and pricing as quarterly chaos.

A fourth brittleness completes the picture and stems from the same advisory-firm playbook. The same negotiation-advisory coverage instructs buyers to prepare a competitive summary for the Atlassian account team (GitHub Copilot pricing, Microsoft 365 Copilot terms, standalone AI tooling) and to share it as part of the commercial conversation. Again, this is advisory-firm framing rather than Atlassian’s own positioning. The framing is procurement 101: signal market awareness, drive vendors to compete on price, secure a better deal. The structural problem is that the advice teaches Atlassian’s customers to do work Atlassian’s value proposition should make unnecessary. When buyers have to surface competitive context during the commercial conversation, the vendor has effectively outsourced its competitive analysis to its customers and is inviting head-to-head price comparison in place of value differentiation. Peer-reviewed game-theoretic modeling of competitive software markets shows formally why this matters. Pricing-scheme differentiation produces positive profits for both firms only when monitoring costs prevent direct head-to-head comparison. Once pricing becomes directly comparable across vendors, the only remaining lever is price itself, and margins compress toward zero on both sides of the market. The competitive-undercut tactic, encoded as standard procurement practice, is the buyer-side mechanism that drives that compression. Every customer who walks into the room with a comparison sheet flattens the differentiation that lets the vendor capture margin, and every vendor who responds with a discount confirms the comparison is the right frame. The architectural alternative is value-based differentiation that gives the buyer a reason to pay something other than the lowest published competitor price. Treating the competitive-undercut conversation as the lever rather than as the symptom is the advisory equivalent of teaching customers to commoditize the vendor.

What pricing teams should do this quarter

Stop calling it “AI pricing.” Name the layer that’s moving.

Most decisions framed as “AI pricing” decisions are actually licensing-model decisions: the metric the price attaches to. A few are packaging decisions: bundle Rovo into Premium versus sell it standalone. Fewer still are pricing-model decisions: the discount approval matrix, floor pricing on resolution rates, the credit-cap governance that the Atlassian customer just forced into their renewal. The label collapse — calling all three “pricing” — is what produces bad decisions, because teams optimize the wrong layer. Naming the layer first is the cheapest precision a pricing team can buy.

If you are shipping AI features, pick the metric before you pick the unit price

The unit price is the easiest number to argue about and the least important number to set first. We made the longer version of this argument in AI Monetization Strategy vs AI Tool Adoption. The short version: credit-based pricing is rarely the right injection point in the application layer. It is a structure that defers the metric decision rather than answering it, and the deferral becomes visible at renewal, when the customer realizes the conversion rate is the actual price and asks for a cap.

Watch iteration cadence as a signal — and whether it is intentional

HubSpot moved the metric on two related agents in a single announcement, without rebidding the entire Breeze offer. The natural read is that this is continuous pricing in the public market. The honest read is that we cannot tell yet whether the cadence is intentional (architecture and instrumentation built for repeated iteration) or accidental (the vendor iterating because the market is forcing the issue, paying full engineering, sales retraining, and contract-amendment cost on every move). Most B2B SaaS vendors will end up in the continuous regime within twelve months whether they chose to or not, because AI agent pricing forces the cadence on them. The decision that separates the winners from the rest is whether they built the architecture to make iteration cheap. The diagnostic is whether the next move costs less than this one. Vendors stuck in accidental continuous pricing pay full freight every time, and the iteration becomes a tax. Vendors operating intentionally see the cost per iteration drop as the architecture matures.

Model Which Layer Actually Drives Your Revenue Impact

LevelSetter separates licensing-model effects from packaging effects across your portfolio. See whether changing your metric or changing your feature bundling creates more revenue upside.

What we will be watching next

Whether HubSpot’s resolution rate climbs into the convergence zone

Public coverage suggests Breeze is currently in the mid-60s percent on resolution rate. As that number climbs into the 80s, per-resolution and per-conversation pricing converge on the same bill. The next move, when it comes, will be either a per-unit price reduction or a metric refinement like resolved-and-stayed-resolved. The first is a pricing-model change: the unit price is what attaches to the metric. The second is a licensing-model change, sitting upstream of pricing where the metric itself is defined.

Whether Atlassian layers credits before it touches the unit price

The current Rovo structure prices credits flat: a per-credit licensing model with a single fixed unit price. The natural next move is not the pricing-model move (volume breaks or committed-use discounts on the unit price). It is the licensing-model move: layers inside the credit itself, where different actions consume different credit counts. Some actions cost two credits, others cost five. Once that menu exists, a credit is not really a credit anymore. The metric has mutated, and the customer is negotiating against a moving conversion table rather than a unit they can audit.

Volume discounts on the unit price are the cleaner pricing-model option, and they are what most pricing frameworks recommend. Most vendors avoid them. The standard argument is that credit costs scale roughly linearly with consumption, so a flat per-credit price already reflects cost-to-serve. The sharper reason is that the opacity of layered credit consumption creates room to offer volume incentives implicitly. If the conversion table is opaque, the price curve is opaque too, and the vendor never has to publish a discount schedule for procurement to benchmark.

Vendors like Atlassian rarely go backwards on a model they have already shipped. They sculpt around it and try to improve it. A flat-price credit system that absorbs new consumption rates is exactly that kind of sculpt. The result is more complexity, not less, and the complexity accretes inside the licensing model, where the buyer can see it least.

Watch the fall product cycle.

Whether a fourth vendor joins within 90 days

Three of the most-watched B2B SaaS vendors repriced AI in 30 days. The structural pressure underneath the cluster, agentic workloads breaking the cost structure of the original pricing model, applies symmetrically to Anthropic’s Claude Code, Google’s Gemini Code Assist, Microsoft 365 Copilot, Salesforce Agentforce, and any other vendor selling agentic capability against a per-seat or flat-rate license. Whether one of those vendors moves before July 31 will tell us whether the 30-day cluster was a coincidence or the beginning of an industry-wide repricing wave. The architectural alternative, a metric that pulls cost variability back from the buyer, remains structurally open. Whoever takes that path differentiates against the rest.

Whether multi-model coexistence stabilizes or simplifies

Salesforce Agentforce reported approximately $540M ARR in Q3 FY26 and currently operates four concurrent pricing structures: conversation-based ($2 per conversation), Flex Credits ($500 per 100,000 credits, equivalent to $0.10 per standard 20-credit action), per-user licensing across multiple editions (Agentforce User License at $5 per user per month with required Flex Credits, Agentforce add-ons at $125–$150 per user per month, Agentforce 1 Editions starting at $550 per user per month), and a free Foundations edition that includes 200,000 Flex Credits, 250,000 Data Cloud credits, Agent Builder, and Prompt Builder for any customer on Enterprise Edition or above. The free Foundations edition is structurally interesting because it is not just a marketing motion. It is a “we do not yet know what to charge for” signal embedded in the pricing page itself. Salesforce gives away credits to drive activation, then sells up into paid Flex Credits or per-user licenses depending on which structure the customer’s usage pattern actually fits.

Multi-model coexistence is not a stable resting point, and it is not a durable one either. Vendors who live there for years are not occupying a comfortable equilibrium. They are absorbing the cumulative cost of chasing two goals at once that the underlying architecture has not yet reconciled: accuracy in matching price to the value the customer derives, and cost control on the vendor’s own margin. The chase itself breeds complexity across all three architectural layers. Each new pricing structure added to capture a usage pattern more accurately or to protect margin in a specific scenario adds another decision the licensing model has to support, another option the packaging has to express, and another rule the pricing model has to govern. The complexity compounds layer by layer until the vendor either converges on a single canonical metric, formalizes the multi-model structure as packaging (different metrics for different customer groups), or watches the cumulative complexity become its own renewal-table problem. That decision is upstream of price, and downstream of how each vendor reads the metric experiments now playing out in public, including GitHub Copilot’s April 2026 transition to token-based AI Credits.
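The Salesforce figures above can be cross-checked in a few lines. The break-even comparison between the conversation meter and the Flex Credits meter is our illustration, not Salesforce’s positioning; all input numbers are the ones cited in the paragraph.

```python
# Unit-economics check on the cited Agentforce figures: $500 per 100,000
# Flex Credits, 20 credits per standard action, $2 per conversation.

FLEX_PACK_PRICE = 500.00     # dollars per credit pack
FLEX_PACK_CREDITS = 100_000  # credits per pack
CREDITS_PER_ACTION = 20      # standard action cost in credits
PER_CONVERSATION = 2.00      # dollars under the conversation-based structure

price_per_credit = FLEX_PACK_PRICE / FLEX_PACK_CREDITS      # $0.005/credit
price_per_action = price_per_credit * CREDITS_PER_ACTION    # $0.10/action

# The two meters price a conversation identically when it takes this many
# standard actions to handle; below it, credits are cheaper, above it,
# the flat per-conversation rate wins.
breakeven_actions = PER_CONVERSATION / price_per_action     # 20 actions
```

The fact that a single customer conversation can be priced under at least two of the four structures, with the cheaper one depending on an unobservable action count, is the multi-model complexity the paragraph describes.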


The decisions a pricing team faces with AI features are not new decisions. They are the same three decisions every B2B software company has always faced: licensing model, packaging, pricing model, in that order, separated cleanly so the architecture stays observable. If your team is reading the Atlassian and HubSpot moves and feeling pressure to “do something about AI pricing,” the first thing to do is name which layer is actually under pressure. We help pricing teams work through that decomposition every week. If you are sitting on an AI metric decision and the conversation in your organization keeps collapsing into “should we charge per use or per outcome,” see our approach or book a working session.

Ready for profitable growth?

Hit the ground running and learn how to fix your pricing.