
Frameworks, core principles, and top case studies for SaaS pricing, learned and refined over 28+ years of SaaS monetization experience.
Join companies like Zoom, DocuSign, and Twilio using our systematic pricing approach to increase revenue by 12-40% year-over-year.
Figuring out what to charge for—and what to keep free—is one of the highest-stakes decisions open core companies face. Get the free-to-paid boundary wrong, and you either leave significant revenue on the table or alienate the community that drives your growth. The solution isn't guesswork or copying competitors. It's systematic open core experiments that validate your monetization assumptions before you commit.
Quick Answer: Open core companies should prioritize three foundational pricing experiments first: (1) feature gating tests to identify which capabilities users will pay for beyond core OSS, (2) usage threshold experiments to find optimal conversion triggers, and (3) support/SLA tier validation to test willingness-to-pay for enterprise-grade service levels.
This guide gives you the tactical playbook to run these tests—starting Monday if you're ready.
Open core pricing isn't like traditional SaaS pricing. You're navigating a unique balancing act: monetize too aggressively, and you fracture community trust. Monetize too conservatively, and you can't sustain the business that funds development.
The cost of getting the free/paid split wrong compounds over time. Once you've trained users to expect a feature for free, moving it behind a paywall triggers backlash—as Elastic learned when their licensing changes sparked community controversy and competitor forks. Testing paid vs free features before you commit reduces this irreversibility risk.
Unlike closed-source SaaS where you control all distribution, your open source users have alternatives. They can fork, switch to competitors, or simply stay on free tiers indefinitely. This means your experiments need to identify genuine willingness-to-pay, not just theoretical interest.
Feature gating tests answer the fundamental question: which capabilities will users actually pay for? The methodology is straightforward but requires discipline.
How to run it: Expose a subset of new or beta features to different user cohorts with different access levels. Track not just who clicks "upgrade" but who actively uses the gated feature and whether that usage correlates with conversion.
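As a concrete starting point, here is a minimal Python sketch of deterministic, cohort-based gating with separate event logging. The experiment name, cohort labels, and log_event sink are illustrative placeholders for whatever feature-flag and analytics tooling you already use.

```python
import hashlib

# Illustrative experiment and cohort names; swap in your own flagging/analytics stack.
EXPERIMENT = "advanced-analytics-gate"
COHORTS = ["control_free", "gated_upgrade_prompt", "gated_trial"]

def assign_cohort(user_id: str) -> str:
    """Deterministically bucket a user so they always see the same variant."""
    digest = hashlib.sha256(f"{EXPERIMENT}:{user_id}".encode()).hexdigest()
    return COHORTS[int(digest, 16) % len(COHORTS)]

def log_event(user_id: str, event: str) -> None:
    """Stand-in for your analytics sink (warehouse table, event pipeline, etc.)."""
    print({"experiment": EXPERIMENT, "cohort": assign_cohort(user_id),
           "user_id": user_id, "event": event})

# Log exposure, active feature usage, and upgrade clicks as separate events so you
# can later correlate genuine usage with conversion instead of counting clicks alone.
log_event("user-123", "feature_exposed")
log_event("user-123", "feature_used")
log_event("user-123", "upgrade_clicked")
```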
GitLab's approach offers a useful model. They've systematically tested which features belong in Free, Premium, and Ultimate tiers by analyzing usage patterns against conversion data. Features like advanced CI/CD capabilities and security scanning ended up in paid tiers after validation showed enterprise buyers specifically sought them.
The key is measuring conversion impact without alienating your community. Run experiments on genuinely new capabilities rather than restricting previously-free features. This preserves trust while gathering clean data.
Two feature categories consistently show the highest conversion potential for open source monetization:
Enterprise features: SSO/SAML, audit logging, role-based access control, compliance certifications. These solve procurement and IT requirements, not developer problems—making them natural paid differentiators that don't frustrate individual contributors.
Developer productivity multipliers: Advanced analytics, performance optimization tools, team collaboration features. These create value at scale, meaning individual developers may not need them, but teams managing production systems will pay.
Test both categories, but expect enterprise features to convert faster with higher willingness-to-pay.
Beyond feature gating, usage-based triggers offer powerful conversion opportunities. The question isn't what to gate, but when users should hit a paywall.
What to test:
MongoDB's Atlas model demonstrates effective threshold testing. Their free tier includes generous limits for development and small projects, with clear usage-based triggers for production workloads. This ensures hobbyists and learners stay happy while production users naturally convert.
Run A/B tests with different threshold levels across cohorts. A common finding: thresholds set 20-30% below where "serious" usage begins tend to optimize conversion without frustrating legitimate free users.
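For the readout, a sketch like the one below (assuming statsmodels is installed, and using made-up counts) compares conversion between two threshold variants with a two-proportion z-test:

```python
from statsmodels.stats.proportion import proportions_ztest

# Hypothetical results from two usage-threshold variants (counts are illustrative).
conversions = [118, 164]    # upgrades in variant A (higher threshold) vs. variant B (lower)
exposed = [5000, 5000]      # users who hit the paywall prompt in each variant

z_stat, p_value = proportions_ztest(conversions, exposed)
rate_a, rate_b = conversions[0] / exposed[0], conversions[1] / exposed[1]

print(f"Variant A: {rate_a:.2%}  Variant B: {rate_b:.2%}")
print(f"z = {z_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Difference is statistically significant at the 5% level.")
else:
    print("Not significant yet; extend the test rather than acting on noise.")
```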
Open core experiments shouldn't focus exclusively on features. Support and SLA commitments often represent the cleanest monetization path because they don't restrict product functionality.
What to test:
This approach sidesteps community concerns entirely. No one expects enterprise-grade SLAs for free. Test different support tier configurations with pricing attached, measuring both conversion rates and willingness-to-pay at each level.
You don't need sophisticated tooling to start. A minimal viable experimentation setup includes:
Tracking requirements:
Sample size guidance: For statistically significant results, aim for at least 1,000 users per cohort for conversion rate experiments. If your user base is smaller, extend test duration rather than accepting noisy data.
Test duration: Run each experiment for a minimum of 4-6 weeks. Open source users often have longer consideration cycles than typical SaaS buyers, especially for infrastructure software.
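The 1,000-users-per-cohort figure is a floor, not a formula: the sample you actually need depends on your baseline conversion rate and the smallest lift you care to detect. Here is a sketch using statsmodels power calculations with illustrative assumptions:

```python
from statsmodels.stats.proportion import proportion_effectsize
from statsmodels.stats.power import NormalIndPower

# Illustrative assumptions: a 4% baseline free-to-paid conversion rate, and we want
# to reliably detect an absolute lift to 5% (a 25% relative improvement).
baseline, target = 0.04, 0.05
effect = proportion_effectsize(target, baseline)

n_per_cohort = NormalIndPower().solve_power(
    effect_size=effect, alpha=0.05, power=0.8, alternative="two-sided"
)
print(f"Users needed per cohort: {n_per_cohort:.0f}")

# If you only onboard ~150 eligible users per week (split across both cohorts),
# derive the test duration instead of accepting an underpowered readout.
weekly_signups = 150
print(f"Approximate weeks to run: {2 * n_per_cohort / weekly_signups:.1f}")
```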
For each open core experiment, measure:
1. Conversion rate by cohort: Segment by company size, usage level, and acquisition channel. Enterprise users from your website may behave entirely differently than developers who found you on GitHub (see the segmentation sketch after this list).
2. Community sentiment and contribution impact: Monitor GitHub issues, Discord/Slack sentiment, and—critically—contribution rates. A 10% conversion lift means nothing if it triggers a 40% drop in community contributions.
3. Expansion revenue potential: Track not just initial conversion but subsequent upgrade behavior. Some features may convert fewer users initially but those users expand faster.
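To make metric #1 concrete, here is a small pandas sketch that breaks conversion down by acquisition segment and experiment cohort; the column names and rows are placeholders for your own experiment export.

```python
import pandas as pd

# Hypothetical experiment export; the column names and rows are placeholders.
events = pd.DataFrame({
    "user_id":   [1, 2, 3, 4, 5, 6],
    "segment":   ["enterprise", "enterprise", "smb", "smb", "oss_github", "oss_github"],
    "cohort":    ["gated", "control", "gated", "control", "gated", "control"],
    "converted": [1, 0, 1, 0, 0, 0],
})

# Conversion broken down by acquisition segment and experiment cohort, since
# enterprise visitors and GitHub-sourced developers rarely behave the same way.
summary = (
    events.groupby(["segment", "cohort"])["converted"]
          .agg(users="count", conversions="sum", rate="mean")
          .reset_index()
)
print(summary)
```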
Testing too many variables: Run one experiment at a time. If you simultaneously test feature gating and usage thresholds and support tiers, you won't know what caused conversion changes.
Ignoring community feedback loops: Quantitative data tells you what happened. Community discussions tell you why—and whether backlash is brewing. Monitor both.
Premature optimization: Don't A/B test button colors when you haven't validated your core free/paid split. Sequence experiments from highest-uncertainty to lowest.
The "what if our community revolts?" concern is legitimate. Two strategies mitigate this risk:
Transparency: Consider announcing that you're experimenting with pricing. Frame it as "we're figuring out how to build a sustainable business that funds ongoing development." Many communities respect this honesty more than discovering experiments through observation.
Reversibility planning: Before launching any experiment, define your rollback criteria. If community sentiment drops below a threshold or contribution rates decline by X%, commit to reversing course and communicating what you learned.
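One way to honor that commitment is to encode the criteria before launch. The sketch below is illustrative only: the thresholds are placeholder assumptions rather than recommendations, and the inputs would come from your own community dashboards.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class RollbackCriteria:
    # Placeholder thresholds: commit to them before launch, not after the data arrives.
    max_sentiment_drop: float = 0.15        # e.g. increase in share of negative community posts
    max_contribution_decline: float = 0.20  # relative drop in merged PRs or active contributors

def should_roll_back(sentiment_drop: float, contribution_decline: float,
                     criteria: RollbackCriteria = RollbackCriteria()) -> bool:
    """Return True when the pre-committed rollback conditions are met."""
    return (sentiment_drop > criteria.max_sentiment_drop
            or contribution_decline > criteria.max_contribution_decline)

# Example weekly check, fed by whatever community analytics you already collect.
if should_roll_back(sentiment_drop=0.18, contribution_decline=0.05):
    print("Rollback criteria met: revert the change and publish what you learned.")
```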
Ready to prioritize your first experiments? Download our Open Core Pricing Experiment Scorecard—a decision framework helping you prioritize which tests to run based on your stage, product maturity, and community size.
