Pricing Experiments Without Breaking Trust
Growth Systems
Pricing changes are some of the highest-leverage experiments a SaaS can run, but the wrong methodology destroys customer trust faster than any feature mistake. Grandfather, isolate, and wrap qualitative around the quant.
By Arjun Raghavan, Security & Systems Lead, BIPI · August 22, 2024 · 7 min read
A SaaS we work with raised list price 60% on a Tuesday morning, applied it to all customers including existing accounts on month-to-month plans, and lost 14% of MRR to churn within six weeks. The price increase was correct directionally. The execution torched two years of trust.
Pricing is the most powerful lever in B2B SaaS and the easiest one to break things with. Treat it like a load-bearing change, not a marketing decision.
Grandfathering isn't optional; it's the deal
When customers signed up at a price, that's the deal you made. Changing it without grandfathering existing accounts is a unilateral renegotiation, and customers respond accordingly. The grandfathering period varies by contract terms, but the principle doesn't: existing customers keep their existing price for at least one full renewal cycle, with a clear notice window before any change.
- Annual contract customers: price holds through renewal, change communicated 60 days out
- Month-to-month: minimum 90 days notice with old price honored through that window
- Founders/early adopters often get permanent grandfathering as part of the original deal
- Communication is direct from leadership, not a transactional email
- Provide an alternative for customers who can't accept the new price
Test new pricing on new sign-ups only
The cleanest pricing experiment is one where existing customers see nothing change. Run new prices on new sign-ups (segment isolation by cohort), measure conversion rate, average contract value, and 90-day retention. Compare to the prior cohort baseline. Don't run pricing A/B tests where two simultaneous prospects see different prices for the same product. That gets discovered, screen-shotted, and posted.
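Cohort isolation in practice means price is a pure function of sign-up date, never of a per-visitor coin flip. A minimal sketch, with hypothetical prices and cutover date:

```python
from datetime import date

# Illustrative cutover: new pricing applies to sign-ups on or after
# this date. Existing customers are never touched by this function.
NEW_PRICE_STARTS = date(2024, 9, 1)

def list_price(signup_date: date) -> int:
    """Monthly list price (USD, hypothetical) for a brand-new sign-up."""
    return 79 if signup_date >= NEW_PRICE_STARTS else 49

# Every prospect on a given day sees the same price, so there is
# nothing for two simultaneous buyers to screenshot and compare.
print(list_price(date(2024, 8, 31)))  # 49 -- prior-cohort baseline
print(list_price(date(2024, 9, 15)))  # 79 -- new-price cohort
```

The trade-off versus a randomized A/B test is that the comparison is cohort-to-cohort rather than contemporaneous, so seasonality and traffic-mix shifts need to be checked before attributing any difference to price.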
Qualitative wrap-around prevents misreading the data
A pricing test that shows lower conversion at the higher price isn't necessarily a 'higher price doesn't work' result. It might be 'we lost the price-anchor segment but the customers who did buy are more qualified and retain better'. You only know which one is true if you're talking to the prospects who bounced and the ones who converted, ideally within two weeks of the decision.
Run 10-15 qualitative interviews with each side of every pricing test. Quantitative tells you what happened. Qualitative tells you why, and which mechanism is driving the result.
When raising prices works (and when it backfires)
Raising prices tends to work when you've genuinely added enterprise-grade capability, when your competitors have moved their list prices upward, when you're shifting from self-serve to sales-assisted motion, or when usage data shows you're under-monetizing power users. It backfires when it's purely opportunistic, when nothing about the product has changed, or when the increase is steep enough to feel punitive.
- Run a value-perception interview round before any change
- Test new pricing on a tightly scoped new-signup cohort
- Compare 90-day cohort revenue, not conversion rate alone
- Communicate any change to existing customers with executive sponsorship
- Honor previous pricing for at least one renewal cycle
Packaging changes have the same blast radius
Splitting features into a higher tier, or moving a feature out of the plan a customer is on, is a pricing change with extra steps. Treat it the same way: grandfather, communicate, provide alternatives. We've seen 'this used to be in your plan but is now in our Pro tier' ambush existing customers more often than is comfortable.
What to actually measure
Headline conversion rate is the easiest metric and the wrong one. The metric that matters is 90-day cohort revenue at the new price minus 90-day cohort revenue at the old price, normalized for traffic. Often the conversion rate drops 15% but the surviving cohort has 40% higher ACV and equivalent retention. Net: you made money. The dashboard everyone watches will look red for a quarter while reality looks green.
Build the right metric before you start, communicate it to the team, and don't panic when the leading indicator flashes. Pricing tests need 90 days minimum to read clean.
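The arithmetic behind "net: you made money" is worth making explicit. In this sketch the baseline conversion rate, ACV, and retention figures are hypothetical; only the 15% conversion drop and 40% ACV lift come from the scenario above.

```python
def cohort_revenue_per_visitor(conversion: float, acv: float,
                               retention_90d: float) -> float:
    """Traffic-normalized 90-day cohort revenue: the metric that matters."""
    return conversion * acv * retention_90d

# Hypothetical baseline: 4.0% conversion, $1,000 ACV, 90% 90-day retention.
old = cohort_revenue_per_visitor(conversion=0.040, acv=1000.0, retention_90d=0.90)
# New price: conversion down 15%, ACV up 40%, retention equivalent.
new = cohort_revenue_per_visitor(conversion=0.034, acv=1400.0, retention_90d=0.90)

print(round(new / old - 1, 3))  # 0.19 -- 19% more revenue per visitor
```

With equivalent retention the deltas multiply: 0.85 × 1.40 = 1.19, so the "red" conversion dashboard coexists with a 19% gain in revenue per visitor.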
Read more field notes, explore our services, or get in touch at info@bipi.in.