
The Economics of AI-First B2B SaaS in 2026

Oct 21, 2025

Introduction:
The rise of AI-first software-as-a-service (SaaS) companies has upended many of the assumptions that defined the economics of earlier SaaS businesses. Traditional B2B SaaS companies have long enjoyed sky-high gross margins – often 80% to 90% – because once the software was built, serving each additional customer was very cheap. In classic SaaS, the cost of servicing an extra user (mainly cloud hosting and support) is minimal, so each new subscription’s revenue is largely pure profit. AI-first SaaS, however, is rewriting this story. These new products embed powerful AI/ML models at their core – for example, offering code generation, content creation, or predictive analytics – and that comes with significant ongoing costs for model development and especially for running (inferencing) the AI models on each use. The result is that AI-native SaaS companies often have dramatically lower gross margins than their predecessors, at least in their early years. In this post, we’ll explore how the economics and gross margins of AI-first B2B SaaS differ from traditional SaaS models, using real examples (OpenAI, GitHub Copilot, Jasper, Notion AI, Salesforce Einstein, etc.), and discuss what this means for unit economics, pricing strategy, and long-term business models.

From Traditional SaaS to AI-First SaaS: A Margin Comparison

Traditional SaaS Economics: Classic B2B SaaS businesses turned software into a subscription service with very high gross margins. Once the software was developed, the cost of delivering it to one more customer was trivial – basically some server time and customer support. Many top SaaS companies report gross margins in the 70–90% range. In other words, $0.70-$0.90 of every $1 of revenue is gross profit, making SaaS one of the most profitable business models. For example, Salesforce, a quintessential B2B SaaS company, has gross margins around 77%. The key reason is that SaaS has near-zero marginal costs per user – hosting and bandwidth costs scale sublinearly, and one support team can handle many customers. This means as a traditional SaaS scales, margins often improve or stay high because revenue far outpaces any incremental costs. High gross margin gave earlier SaaS companies plenty of room to invest in sales, marketing, and R&D while still eventually becoming very profitable.
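The arithmetic behind these figures is simple enough to sketch. A minimal illustration in Python, using hypothetical revenue and COGS numbers in the ranges cited above (not any specific company's actuals):

```python
def gross_margin(revenue: float, cogs: float) -> float:
    """Gross margin as a fraction of revenue."""
    return (revenue - cogs) / revenue

# Traditional SaaS: $100 of subscription revenue, ~$20 of hosting/support COGS
traditional = gross_margin(100, 20)   # 0.80, i.e. 80%

# AI-first SaaS: the same $100 of revenue, but ~$45 consumed by inference/infra
ai_first = gross_margin(100, 45)      # 0.55, i.e. 55%
```

The difference compounds downstream: at 80% gross margin, $0.80 of every revenue dollar is available to fund sales, R&D, and profit; at 55%, nearly half of each dollar is gone before those line items are even touched.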

AI-First SaaS Economics: AI-driven SaaS flips some of these assumptions. In an AI-first product, each user action (like generating text, running a query, or calling an AI feature) may trigger a computationally intensive AI model. That translates to a direct, variable cost for the company every time the product is used. Serving additional customers does not drop nearly to zero marginal cost – in fact, expenses scale roughly in proportion to usage. As one VC firm put it, “every new customer who actively uses your [AI] product increases your infrastructure costs proportionally,” a very different dynamic from traditional SaaS. This means AI-centric SaaS startups often have gross margins far below the SaaS norm. Recent benchmarks show many AI software companies averaging only about 50-60% gross margin, versus 80-90% for traditional SaaS. Hyper-growth “AI supernova” startups in particular have been seen with gross margins as low as ~25% early on (some even negative gross margin at times – essentially selling below cost to fuel growth), whereas more mature “AI shooting stars” stabilize closer to ~60%. Seeing negative gross margins in software is extremely rare historically, yet it has been observed among AI-first applications that rely heavily on costly model API calls.

To put this in perspective, OpenAI, a leading AI platform (not a traditional SaaS but selling AI via API and services), has been estimated to run around 50% gross margin on its operations. Anthropic (another AI model provider) is in a similar ballpark, at roughly 60% gross margin by some reports. Those figures are far below those of a pure software product, and importantly they exclude the huge up-front training costs, which are usually treated as R&D expense rather than cost of goods sold. At the application layer, the picture can be even more extreme. An analysis by Bessemer Venture Partners found that a cohort of fast-scaling AI SaaS startups had only ~25% gross margin on average in early stages, while even steadier-growth AI companies managed around 60% gross margin – both well under typical SaaS. In fact, several AI startups have gross margins so low it looks more like an infrastructure or hardware business than software. One popular meme in the industry described an AI coding assistant startup hitting “$100M ARR with a $120M [model provider] bill,” highlighting how usage costs can exceed revenue if pricing isn’t carefully aligned.

Real-World Examples: The challenges aren’t just theoretical. Consider GitHub Copilot, an AI pair-programmer sold to developers. Priced at about $10 per user per month, Copilot initially offered essentially unlimited AI code completions. But those completions call large GPT models under the hood, which aren’t cheap. Reports emerged that Copilot was costing Microsoft (GitHub’s parent) up to $80 per user per month in compute/model fees for heavy users, averaging a ~$20 loss per user in early 2023. In other words, for each $10 subscriber, Microsoft was eating perhaps $30 of cost on average, and much more for power users. 

This obviously drags gross margins deeply negative – an untenable situation long-term. Microsoft has since started adjusting Copilot’s model (introducing a 2-tier “Pro” plan and limits) and even charging for usage above certain caps. By mid-2025, GitHub announced that the formerly “unlimited” Copilot would include a generous allowance of AI requests, but beyond that, customers would pay usage fees (e.g. $0.04 per extra request). This shift from all-you-can-eat to usage-based pricing is a direct response to the economic reality: the old SaaS notion of a flat per-seat fee doesn’t work when some users might consume 100x more AI compute than others.
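Copilot's shift illustrates the general shape of hybrid pricing: a flat fee covers an included allowance, and usage beyond it is metered. A rough sketch, where the $10 fee and $0.04 overage rate come from the figures above but the size of the included allowance is an invented placeholder:

```python
def monthly_bill(requests: int,
                 flat_fee: float = 10.0,     # per-seat subscription fee
                 included: int = 300,        # hypothetical included allowance
                 overage_rate: float = 0.04  # fee per request beyond it
                 ) -> float:
    """Hybrid plan: the flat fee covers `included` requests; extras are metered."""
    overage = max(0, requests - included)
    return flat_fee + overage * overage_rate

light_user = monthly_bill(120)    # stays inside the allowance → $10.00
heavy_user = monthly_bill(1300)   # 1,000 extra requests → $50.00
```

Under this structure the light user's economics look like classic SaaS, while the heavy user's bill scales with the compute they actually consume, instead of being subsidized by everyone else.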

Similarly, the AI writing assistant Jasper, initially built on OpenAI’s models, has said its “gross margins are fine” for now using OpenAI’s API, but it sees major cost-savings potential in running its own models in the future to improve margins and control performance[1].

Replit, a developer platform that launched an AI coding bot, saw its revenue rocket from ~$2M ARR to $144M ARR in a year by 2025 – but only by moving to usage-based plans (for heavy-COGS products, at Monetizely we recommend some form of usage-based pricing) could it lift gross margin from single digits into the ~20-30% range. (At one point in 2024, Replit’s gross margin was reportedly under 10%, even dipping negative during a usage surge, before pricing changes brought it back into the 20-30% range[2].) These examples underscore how AI features come with significant ongoing costs that simply didn’t exist for yesterday’s SaaS products.

Cursor IDE – a compelling new case study in AI coding economics: Cursor made significant pricing changes in mid-2025, moving from request-based limits to a compute credit pool system. Its Pro plan ($20/month) now includes $20 of frontier model usage at API pricing, with unlimited access to Tab completions and Auto mode. The pricing change was controversial enough that Cursor issued a public apology and offered refunds for unexpected charges, highlighting how difficult it is for AI-first companies to communicate complex cost structures to users.

Why AI Squeezes Gross Margins: Models, Inference, and Infrastructure

Several factors drive the margin pressure in AI-first SaaS, all related to the added cost structure of developing and running AI models:

  • Expensive Model Development (R&D) – Training advanced AI models is enormously costly. While traditional SaaS R&D mostly involves paying engineers to write code, AI startups must also budget millions for computing to train models (or fine-tune existing ones) on massive datasets. Training a state-of-the-art language model can run into the millions of dollars in cloud GPU time. These training costs usually hit the income statement as R&D (or occasionally capitalized), not directly as cost of goods, so they don’t always show up in “gross margin.” Nonetheless, they affect the overall economics and require AI companies to raise more capital up front. For example, OpenAI reportedly spent over $500M training GPT-4. Even if those training costs aren’t in COGS, they raise the bar for how much revenue is needed to achieve profitability. AI-first SaaS firms also employ more AI researchers, data scientists, and MLOps engineers, which drives R&D spend higher as a percentage of revenue than in a typical SaaS. In short, building the “smart” product is just more expensive than building a standard cloud app. While hefty R&D doesn’t reduce gross margin per se, it does mean operating margins will lag unless gross profit can eventually catch up.
  • Inference Costs (Serving the Model) – Unlike traditional software, an AI SaaS product incurs a direct expense each time it’s used. Inference is the process of running the trained model to get a result, and it can be very compute-intensive. Large language models (LLMs) like GPT-4 may require powerful GPUs or specialized hardware to produce an answer, consuming electricity and cloud compute resources. If you integrate such an AI into your app, every user prompt or action might cost a few cents in cloud compute. Multiply that by thousands of prompts, and you have a substantial cost line item tied strictly to usage. This is fundamentally different from classic SaaS, where one extra user might only cost a few fractions of a penny in server time. An industry analysis noted that AI startups cannot treat inference costs as fixed overhead – they rise with each user and each action, so they belong in COGS (cost of goods sold) as a true variable cost. Some startups initially mis-modeled this, treating AI compute as an “operational expense,” and were caught off guard when usage scaled and expenses scaled right alongside it. For instance, one fintech AI chatbot found that each enterprise client was burning $400 of compute costs per day to serve. If that client was paying a fixed monthly fee, the math would break quickly. In coding assistants, quality often demands using the most advanced model on every request, meaning the app’s COGS effectively follow someone else’s pricing. As one commentator wryly noted, an AI company’s “COGS rides someone else’s price card” (the AI model provider). That’s a precarious spot: if OpenAI or Anthropic raise API prices or if your users simply use more tokens, your gross margin suffers immediately.
  • Infrastructure and Cloud Expenses – AI-first companies rely on cutting-edge infrastructure that’s pricier than the typical cloud setup. Hosting a custom model might require renting GPU instances at high hourly rates, using specialized databases, vector search indices, and handling much larger volumes of data. Even if using third-party models via API, those API fees essentially include the provider’s hefty infra costs (and profit margin). For companies that host models themselves, running GPU servers at scale (plus the electricity and cooling) can significantly increase cost of revenue. Notably, big cloud providers themselves have cited AI workloads as a gross margin headwind. Microsoft, for example, noted that its Azure cloud gross margin was pressured by the ramp-up of AI infrastructure (Microsoft’s cloud gross margin fell as AI usage grew, landing around 69%). Smaller SaaS startups don’t own datacenters and must pay retail cloud prices. Startups often begin by using free or discounted credits (AWS, Azure, etc. offer compute credits to attract AI projects), but those eventually run out. When they do, the true infrastructure costs hit P&L hard, sometimes revealing that serving customers at market rates would be far less profitable than the rosy, credit-subsidized early figures suggested. This heavy reliance on cloud vendors also means AI companies are at the mercy of those vendors’ pricing and efficiency. The Jevons Paradox concept has been invoked here: as AI models or hardware get more efficient, usage tends to increase so much that total spend keeps rising, devouring those efficiency gains. In other words, even if the cost per AI inference drops 80% year-over-year (and indeed some model API prices fell dramatically), users then make far more queries, often leaving the AI provider spending just as much or more overall. This dynamic can trap AI-first SaaS in a cycle of high costs unless they continually optimize and control usage.
  • Third-Party Model Costs & Dependence – Many B2B SaaS startups bootstrap their AI features by calling an external AI service (like OpenAI’s API). While this is great for quick go-to-market, it means each API call has a direct hard cost. If your product, say Notion AI (which assists document writing in Notion), is built on OpenAI’s GPT-4, then a portion of your revenue goes straight to OpenAI. Gross margin in this scenario might hover in the 50-60% range even with decent pricing. The risk is that you don’t control those costs – the API provider could change pricing or usage limits. This happened in early coding assistant products: several companies found themselves essentially reselling OpenAI or Anthropic’s model output with only a thin markup, hence their margins were very low[3]. Over time, many realize they need to either negotiate better rates, fine-tune cheaper models, or build proprietary models to improve margins. Jasper’s team, for example, indicated they might train or host models themselves eventually to cut out some of the API middleman costs[1]. Owning more of the stack (like developing a smaller custom model tailored to your use case) can raise gross margin – but it requires upfront investment and expertise. Meanwhile, big players like Salesforce have the scale to train and run their own models (Einstein GPT), but even Salesforce often charges extra for AI features to offset those costs. In Salesforce’s case, Einstein AI capabilities are sold as add-ons (e.g. $50-$150 per user/month for Sales or Service Cloud AI addons)[4], rather than simply included “for free,” which hints at the real costs behind the scenes and the need to monetize them.

The net effect of these factors is that AI-first SaaS businesses operate with a cost of goods profile closer to an “infrastructure” business or cloud service than a pure software business. Instead of 10-20% of revenue going to COGS (as in a typical SaaS), an AI SaaS might see 40-50% (or more) of revenue eaten by COGS in the form of model hosting, inference compute, and data costs. This fundamentally shifts the unit economics and how such businesses must be managed.
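That cost profile can be made concrete with a per-user model: multiply actions per month by tokens per action and a per-token rate, then compare against the subscription price. Every number below is illustrative, not any vendor's actual rate:

```python
def per_user_margin(price: float, actions: int,
                    tokens_per_action: int, cost_per_1k_tokens: float) -> float:
    """Gross margin for one subscriber when every action hits a paid model."""
    cogs = actions * tokens_per_action / 1000 * cost_per_1k_tokens
    return (price - cogs) / price

# $20/mo seat, 500 AI actions/mo, ~2,000 tokens per action, $0.01 per 1K tokens
typical = per_user_margin(20, 500, 2000, 0.01)    # 0.5, a 50% margin

# The same seat used 3x as heavily flips to negative gross margin
power = per_user_margin(20, 1500, 2000, 0.01)     # -0.5
```

The key takeaway is the sensitivity: with a flat price, gross margin is a linear function of usage, so a single power user can erase the profit of several typical ones.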

How AI Usage Changes Unit Economics (Beyond Gross Margin)

Beyond gross margins alone, the infusion of AI into SaaS alters other components of unit economics and the P&L structure:

  • Customer Support and Service – AI-first products can both reduce and increase support costs in different ways. On one hand, AI automation can handle many routine tier-1 support queries or in-product guidance, potentially allowing startups to serve customers with a leaner support team (a cost savings compared to traditional SaaS). For example, an AI-powered onboarding or Q&A bot in a B2B software tool might deflect many questions that would otherwise require a human representative. This improves scalability of support without linear cost growth. However, if the AI feature is mission-critical, customers might demand more hand-holding initially (AI can be unpredictable, which could generate additional questions or concerns). Also, monitoring AI outputs for quality (accuracy, bias, etc.) becomes a new kind of “support” cost – sometimes companies even have humans in the loop checking AI outputs for high-stakes use cases (one healthcare AI reportedly had to spend $1.20 per interaction on human verification of the AI’s answers!). On balance, many AI SaaS startups try to leverage AI internally to keep support and customer success costs low relative to their user base, which can partially offset the high COGS. If successful, some AI companies operate with lower Sales & Marketing or support expense as a % of revenue, leaning on viral adoption and product-led growth. This was observed in the “supernova” AI startups that, despite low gross margins, had extremely high revenue per employee – indicating product virality and low incremental sales/support costs. In other words, they traded gross margin for faster growth and lighter sales expense, which can be a viable trade-off if it leads to market dominance.
  • R&D and Continuous Model Improvement – Traditional SaaS certainly incurs R&D costs for new features, but AI-first SaaS often treat model improvement as an ongoing core expense. The AI models might need frequent updating (retraining on new data, fine-tuning to customer-specific data, improving prompts), which is a bit like continuously refining your “engine”. The business may also need to invest in ML Ops, data labeling, and experimentation to keep the model’s performance high. For example, OpenAI’s and Anthropic’s teams are constantly tuning models and releasing updates – an expense that never really ends and must be recouped. For an AI SaaS offering (say, a marketing copy generator like Jasper or an AI CRM assistant like Salesforce Einstein), part of the value proposition is the AI gets smarter over time. That implies ongoing R&D spend not just for new features, but for core algorithm quality. We might compare this to the early days of SaaS where the biggest R&D investment was upfront building the product; in AI SaaS, significant R&D continues post-launch to maintain a competitive model. This ongoing R&D burden means AI companies might run lower operating margins or need to charge more to fund it. The flip side is that an AI feature can sometimes replace or augment human-intensive processes – e.g. instead of hiring more analysts, a customer might pay for your AI insight tool – which is where the value justification for higher pricing comes in.
  • Scalability and Unit Cost Behavior – In traditional software, unit economics improve with scale: fixed costs are spread out, and variable costs per user often decline (thanks to cloud economies of scale and multi-tenancy). However, for AI-centric businesses, scaling usage doesn’t inherently improve unit costs. In fact, more usage can worsen unit economics if pricing isn’t aligned, because each action incurs cost. This is why we see situations like Anthropic’s early pricing of an unlimited plan leading to huge losses on power users. Many AI SaaS firms have had to rethink their scaling model: some introduced usage caps or “fair use” policies to prevent a small minority of heavy users from gobbling disproportionate resources. Others simply moved to metered billing, ensuring revenue scales with usage. Over time, as AI infrastructure matures, there is hope that cost per inference will decline (via better algorithms, cheaper hardware, larger scale operations). Indeed, certain model inference costs have reportedly fallen by 80-90% per year for equivalent tasks. But real-world unit economics haven’t improved as dramatically, because companies often respond by using larger models or serving more complex tasks as expectations rise. One crucial lever for improving unit economics is model optimization: for example, routing 80% of user requests to a cheaper smaller model and only using the expensive top-tier model for the hardest 20% of cases. Vendors that can implement this smart routing can drastically improve gross margin while still meeting user needs. Another approach is fine-tuning or training smaller models that are just good enough for the domain – thereby cutting per-call cost. Over time, if an AI SaaS achieves scale, it might even build out its own infrastructure (as OpenAI has via Azure supercomputer deals, or as mid-size players do with GPU leases) to lower unit costs by not paying a third-party’s margins. So, there is a path to better margins as an AI business matures, but it requires conscious effort: economies of scale are not automatic as they were in classic SaaS; they must be earned through technical and strategic choices.
  • Impact on Pricing Power and Value Delivered – The economic model of AI SaaS also influences how customers perceive value and how much you can charge. If your AI product genuinely replaces significant labor or drives major revenue lift for clients, you can potentially charge a premium or usage-based fees that cover your costs and then some. For instance, Microsoft is pricing its new Copilot for Microsoft 365 (an AI assistant across Office apps) at $30/user/month – far higher than a normal Office license – because the value of AI-generated productivity is perceived to be worth that. This indicates that AI features can be monetized separately at a premium, helping preserve or even boost overall gross margin if done right. Customers will pay more if they see clear ROI (e.g., an AI sales email generator that helps close deals might justify high usage-based fees). On the other hand, if the AI output is seen as a commodity or easily replicated, prices (and margins) could face downward pressure over time, especially as open-source models proliferate. B2B buyers also demand predictability – enterprise clients often prefer fixed pricing or committed spend, which clashes with the variable-cost nature of AI. AI-first providers thus have to carefully design pricing to balance predictable billing for customers with cost recovery for themselves. Some have introduced credit systems or tiered usage bundles to give customers a sense of certainty (e.g., “this plan includes X AI credits per month”) while still charging more if those credits are exceeded. All these factors mean AI SaaS companies must be more nimble in pricing and packaging to ensure each customer is profitable.
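The 80/20 routing idea above reduces to a simple expected-cost calculation. A sketch with hypothetical per-request prices for a small model and a frontier model (real prices vary by provider and token count):

```python
def blended_cost(cheap: float, premium: float, premium_share: float) -> float:
    """Expected cost per request when only the hardest fraction of
    traffic is routed to the premium model."""
    return (1 - premium_share) * cheap + premium_share * premium

# Hypothetical prices: $0.002/request (small model), $0.03/request (frontier)
all_premium = blended_cost(0.002, 0.03, 1.0)   # every request → $0.0300 each
routed = blended_cost(0.002, 0.03, 0.2)        # 80/20 routing → $0.0076 each
```

At these invented prices, routing cuts inference COGS by roughly 75% while the hardest requests still get the frontier model, which is exactly the gross-margin lever the bullet describes.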

Evolving Gross Margins: Can AI SaaS Improve Profitability Over Time?

A critical question for executives and investors is whether AI-first SaaS businesses can eventually approach the healthy margins of traditional software, or if they’ll permanently run “hotter” (with lower margins). The answer is evolving, but several trends suggest gross margins can improve over time – albeit with effort:

  • Optimizing Infrastructure and Model Efficiency: As AI startups grow, they gain opportunities to optimize costs. This might mean moving from paying retail cloud rates to negotiating better volume discounts or building custom GPU clusters. Owning more of the inference stack can provide leverage. We’ve seen companies start on OpenAI’s API (for speed) but later switch some workloads to fine-tuned open-source models they host themselves, cutting per-unit costs. There’s evidence that inference costs for a given level of model performance drop significantly each year due to algorithmic improvements and hardware advances. Leading AI SaaS firms will eagerly adopt these advances – for example, using a more efficient model that gives the same quality at half the cost. If handled well, these efficiency gains can flow through to better gross margins (provided you don’t simultaneously double the context size or model usage everywhere, the Jevons paradox caveat). Startups that survived the initial low-margin phase often report improving margins as they tweak their infrastructure. For instance, some coding AI tools implemented caching (not re-generating identical answers repeatedly) and model distillation (using cheaper models where possible), which helped push their gross margins upward over time towards more sustainable levels.
  • Product Mix and Value-Add Services: Many AI-first companies discover that they can add higher-margin services or features around the core AI to improve overall economics. Remember, earlier SaaS achieved 80%+ margins despite relying on third-party cloud and database infrastructure by building substantial unique app functionality and “workflow depth” on top. AI startups are doing similar things: layering in features like collaboration tools, integrations, analytics dashboards, or human-in-the-loop validation. These features don’t incur heavy AI costs but can justify higher pricing tiers, thereby boosting blended margins. As one investor noted, the key is making customers see your product as more than “just a thin wrapper over OpenAI”. If the AI is deeply integrated into a workflow with domain-specific features, the customer pays for the solution as a whole, and the AI usage cost might only be, say, 20% of that overall value. Notion AI, for example, is not sold as a standalone GPT-3 text generator; it’s an add-on to Notion’s workspace with seamless integration. The price (~$8-$10/user) is set such that typical usage is covered with ample margin. Over time, as the novelty of AI fades, successful products will be those that combine AI with proprietary data or workflow lock-in – enabling them to charge for outcomes, not just per token. That naturally improves gross margin because revenue grows beyond just reselling API calls. In addition, companies are finding ancillary revenue streams that are high-margin: for example, Replit not only charges for AI assistance but also monetizes cloud compute, hosting, and a developer marketplace – revenues that are not tied to AI inference costs. These diversify the model and can raise overall gross profit as a percentage of sales.
  • Pricing Adjustments and Tiering: Gross margin tends to improve when pricing better reflects underlying costs. Early on, many AI SaaS underpriced or offered flat rates that didn’t scale well. Over the past year, there’s been a broad shift to more nuanced pricing which should, in theory, lift gross margins. A 2025 industry report found 92% of AI software companies now use mixed pricing models – combining subscriptions with usage fees, or offering different tiers for heavy usage – precisely to tackle the margin issue. Hybrid pricing allows a base fee to cover fixed costs and small usage, while heavy users pay extra. We’ve already discussed how GitHub, Replit, Anthropic and others instituted usage-based components after seeing margins dive. These changes often result in an initial hiccup (some users complain or churn), but ultimately they set the business on track for healthier unit economics. For example, when Anthropic realized a single $200/month customer was racking up tens of thousands in compute cost, they wisely ended the unlimited plan. Those who needed massive usage would have to pay accordingly or throttle back. Going forward, we’ll likely see AI-first SaaS gross margins improve as more companies “get pricing right” – e.g. charging per document analyzed, per 1,000 tokens generated, or per outcome achieved, rather than an unlimited buffet. One striking metric: AI product companies that stuck rigidly to old-school per-seat pricing saw gross margins about 40% lower on average than those that adopted usage or outcome-based pricing. Clearly, pricing strategy is intertwined with margin evolution.
  • Scaling Volume and Negotiating Power: If an AI SaaS achieves true scale (say, tens or hundreds of millions in revenue), it gains some bargaining power and scale efficiencies. This might include bulk discounts on cloud GPUs, or the ability to amortize the fixed cost of building an optimized model across a large user base. For instance, OpenAI’s API prices have come down significantly (GPT-4’s price per 1K tokens dropped by 83% from launch to late 2023), partly due to scale and optimizations – those savings can be passed to companies using the API, or simply improve their margin if pricing to customers stays the same. Additionally, big enterprise clients might be willing to commit to large contracts for an AI solution if you prove value, allowing better capacity planning and cost control on the provider side. Over time, AI-first providers may also invest in purpose-built hardware or more efficient model architectures that sharply reduce the cost per query. We are in early innings of optimization; it’s reasonable to expect that serving a given AI task in 2028 will cost, say, one-tenth of what it does in 2025. Therefore, a company that can survive on 50% gross margin today could see margin expand toward 70%+ in a few years as cost per unit falls – if they keep their pricing power.
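The caching tactic mentioned above can be sketched in a few lines: key responses by (model, prompt) and pay for inference only on a cache miss. This is a toy version; production caches add TTLs, eviction, and often semantic matching of near-duplicate prompts:

```python
import hashlib

class InferenceCache:
    """Memoize model output for identical (model, prompt) pairs."""

    def __init__(self, model_fn):
        self.model_fn = model_fn   # the expensive inference call
        self.store = {}
        self.paid_calls = 0        # how many requests actually hit the model

    def generate(self, model: str, prompt: str) -> str:
        key = hashlib.sha256(f"{model}\x00{prompt}".encode()).hexdigest()
        if key not in self.store:
            self.paid_calls += 1
            self.store[key] = self.model_fn(model, prompt)
        return self.store[key]

# With a stand-in model function, two identical requests cost one paid call:
cache = InferenceCache(lambda m, p: f"[{m}] answer to: {p}")
cache.generate("small-model", "What is ARR?")
cache.generate("small-model", "What is ARR?")   # served from cache
```

Tracking `paid_calls` against total requests gives the cache hit rate, which translates directly into COGS saved per period.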

All told, gross margins for AI-first businesses likely start lower and improve gradually. Many startups in 2023-2024 accepted low or even negative gross margins to acquire users and train their models (much like an “investment phase”). But investors and founders are laser-focused now on closing that gap. We’re already seeing a course-correction: by 2025, even fast-growing AI SaaS firms are targeting moving from, say, 30% to 60% gross margin by employing the tactics above. They may never routinely hit 85-90% like the leanest traditional SaaS did, but settling in the 60-70% range at scale is a reasonable goal for many – essentially, closer to a cloud services company than to an old software company, but with much higher growth potential. Executives should plan for a margin profile that’s a hybrid: not as low as pure infrastructure (e.g. public cloud gross margins in the 50%-ish range), but not as high as pure software.

Strategic Implications: Rethinking Pricing, Bundling, and Monetization

The economic differences of AI-first SaaS have profound strategic implications for how these products are priced and sold in the B2B realm:

1. Embrace Usage-Based and Value-Based Pricing: The traditional SaaS playbook of per-seat (per-user) pricing is being upended in AI products. The reason is twofold: costs are usage-driven, and value delivered is often usage-driven too. Charging a fixed $X/user/month when usage (and cost) can vary wildly is a recipe for margin erosion and possibly customer frustration (if you have to impose hidden usage limits). Instead, many AI SaaS are moving to usage-based models or hybrid pricing. According to industry data, the share of companies using pure seat-based pricing is rapidly shrinking, while hybrid pricing (base fee + usage) jumped from 27% to 41% of companies within a year. We see real examples: Snowflake popularized consumption pricing in the data cloud space, and now AI startups follow suit with token-based billing, credit systems, or output-based pricing. For instance, an AI email generator might charge by emails composed or by the number of prospects contacted. The key is to tie pricing to the actual value or workload. A rule of thumb: if your AI product lets one user do the work of five, charging per-seat fails to capture that value. Instead, charge per amount of work done (documents created, tickets resolved, code generated, etc.). This not only aligns revenue with costs better, but customers find it fairer since they pay for what they use. Of course, pure pay-as-you-go can make revenue lumpy, so many offer blended models (e.g. a monthly platform fee that includes some usage, then pay-as-you-go for excess). The data strongly suggests that getting pricing right improves both margins and retention: one study found AI providers using modern usage-based pricing had 40% higher gross margins and significantly lower churn than those sticking to old models.
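The per-seat failure mode is easy to quantify. A toy comparison for a single heavy account, with every number invented for illustration:

```python
def margin(revenue: float, cogs: float) -> float:
    return (revenue - cogs) / revenue

docs, cost_per_doc = 5000, 0.05       # one heavy account's monthly AI usage
cogs = docs * cost_per_doc            # $250 of inference cost either way

per_seat = margin(10 * 30.0, cogs)    # 10 seats at a flat $30 → ~17% margin
per_usage = margin(docs * 0.15, cogs) # $0.15 per document → ~67% margin
```

Same account, same $250 of cost; only the pricing model determines whether the margin lands on the sustainable side, because usage-based pricing lets revenue track the workload.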

2. Introduce Tiered Plans and Bundles Thoughtfully: Given the variability in user consumption, it’s wise to create tiers or bundles that segment users by their usage needs. Many B2B AI SaaS now offer something like: Standard Plan (for typical usage, with a fair-use cap) vs Enterprise Plan (higher or unlimited usage but at a premium price or with overage charges). This helps prevent heavy users from diluting margins on lower plans. Bundling AI features as premium add-ons can also be effective. For example, Notion offers its AI features as an add-on subscription per user, rather than giving it to all users by default – meaning only those who value it (and presumably will use it a lot) pay for it, covering the cost. Salesforce Einstein is another classic case: Salesforce didn’t just roll all AI predictions into its base product for free. It offers Einstein capabilities as separate packages or included only in high-end editions (like the Unlimited tier) which are far more expensive[4][5]. This ensures that the substantial compute cost of, say, running AI to score leads or answer service queries is funded by the extra fees customers pay for Einstein. Bundling can also involve mixing AI features with non-AI ones to hide some of the cost – e.g., bundle an AI feature with premium support or other tools in a higher tier. The additional revenue from that tier isn’t solely paying for AI usage; it also covers intangible value like priority service, making the margin economics work out better. The overarching strategy is to monetize AI separately where possible: treat it as a value-add that customers opt into, so its costs (and some profit) are directly recouped, rather than a freebie that secretly eats into margins.

3. Monitor and Communicate Value to Justify Pricing: With AI features commanding higher prices or usage fees, B2B vendors must be prepared to demonstrate ROI to customers. This is classic in enterprise sales but takes on a new flavor with AI. For instance, if you charge per AI conversation or per 1,000 tokens, business buyers will ask “what am I getting from those tokens?” Successful AI SaaS companies focus on business outcomes – e.g., “Our AI coding assistant saves your developers 30% of coding time” or “Our AI customer service bot resolves 50% of tickets without human intervention.” These outcomes justify the bill. Strategically, companies might offer dashboards or reports that quantify the AI’s impact (e.g., hours saved, leads generated) to defend their monetization. It’s also wise to provide cost transparency tools to customers under consumption pricing. No IT manager wants a surprise six-figure bill because usage spiked. So features like usage meters, alerts, and predictable spend caps become part of the product offering. By giving enterprise customers control, or at least visibility (“here’s how many AI credits you used this month”), you build trust in the pricing model and reduce pushback. This transparency, combined with a clear value narrative, allows for price increases or upsells over time with less resistance.
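
The usage meters and alerts described above can start as simple threshold checks against a monthly credit cap. A minimal sketch (thresholds and caps are illustrative, not a real product’s defaults):

```python
# Hypothetical usage-alert check: warn customers before they hit a spend cap,
# so consumption pricing never produces a surprise bill.

def usage_alerts(credits_used, monthly_cap, thresholds=(0.5, 0.8, 1.0)):
    """Return the alert thresholds the customer has crossed this month."""
    frac = credits_used / monthly_cap
    return [t for t in thresholds if frac >= t]

# A customer at 850 of 1,000 credits has crossed the 50% and 80% alerts:
print(usage_alerts(850, 1000))  # -> [0.5, 0.8]
```

In practice these checks would feed an email or in-app notification pipeline, but the trust-building mechanic is the same: the customer sees the overage coming.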

4. Invest in Cost Management and Efficiency as Strategic Priorities: In traditional SaaS, gross margin management was often an afterthought – when margins are 85%, it’s not the top worry. In AI SaaS, cost management is a strategic imperative from day one. Smart AI startups now build cost modeling into product design. Before launching, they simulate how much each user action will cost in terms of tokens, memory, etc., and price accordingly. Executives should prioritize engineering work on cost optimizations: e.g., implementing request batching, caching, model distillation, choosing the cheapest model that meets quality, and so on. This is akin to how cloud infrastructure companies operate – very cost-aware at the architecture level. Some are even exposing cost controls to users (for example, letting a customer choose a cheaper/slower model vs a pricey/accurate one for certain tasks). A culture of cost-consciousness helps ensure gross margins improve. As noted in one analysis, lack of cost visibility can torpedo margins, so top performers implement real-time cost dashboards, monitoring of per-user compute consumption, and alerts for anomalous usage. This data can inform not just engineering tweaks but also when to approach a customer about moving to a higher plan if their usage is skyrocketing. In short, treat your AI costs like COGS and manage them actively, just as a manufacturing business would manage input costs. This might be new to SaaS execs, but it’s crucial in AI.
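
The pre-launch cost modeling described above might look like this sketch: estimate the compute cost of one user action from token counts and per-token rates, then check the implied gross margin at a candidate price point (the token rates and figures below are placeholders, not any vendor’s actual pricing):

```python
# Hypothetical pre-launch cost model: dollar cost per AI action from token usage,
# then the implied gross margin at a candidate monthly price. Rates are placeholders.

def action_cost(prompt_tokens, completion_tokens,
                in_rate_per_1k=0.003, out_rate_per_1k=0.006):
    """Dollar cost of one AI action, given token counts and per-1k-token rates."""
    return (prompt_tokens / 1000 * in_rate_per_1k
            + completion_tokens / 1000 * out_rate_per_1k)

def monthly_margin(price, actions, prompt_tokens, completion_tokens):
    """Gross margin for a user at this price and monthly usage profile."""
    cost = actions * action_cost(prompt_tokens, completion_tokens)
    return (price - cost) / price

# 2,000-token prompts, 500-token completions, 800 actions at $25/month:
print(f"{monthly_margin(25, 800, 2000, 500):.1%}")  # -> 71.2%
```

Running this kind of model across realistic usage distributions before launch is what lets a team price plans (and choose models) with margin targets in view, rather than discovering the COGS problem after the fact.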

5. Leverage AI to Reduce Other Costs: It’s worth noting that AI-first companies can themselves use AI to streamline operations. Many are using their own tech (or others’) to automate repetitive tasks in marketing, sales outreach, coding, and support. For example, an AI SaaS company with a small support team might deploy an AI chatbot fine-tuned on its docs to handle 24/7 customer queries, reducing the need for a large support staff. Likewise, AI can generate marketing content or help qualify leads, making Sales & Marketing spend more efficient. While these savings don’t directly raise gross margin (they improve operating margin), they offset the lower gross margin to some extent. The end goal for any business is healthy net margins. So a company might accept a gross margin of 60% (vs 80% in old SaaS) if it can operate with leaner overhead – say, 15% of revenue on S&M instead of 40% – thanks to product-led growth and AI-driven efficiency in go-to-market. This was observed in some of the super-fast-growing AI startups, which had 4-5x higher revenue per employee than typical SaaS companies. They spent far less on sales, since the product’s AI wow factor drove viral adoption, and that partially compensated for the high cost of revenue. Strategically, AI-first CEOs might pitch investors on a different model: lower gross margin but also lower customer acquisition cost (CAC) and lower support costs, yielding solid profits at scale. The mix of expenses across the P&L will differ from legacy SaaS, but the business can still be attractive if managed holistically.
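
The P&L arithmetic in that trade-off is easy to sketch: with illustrative percentages, a 60%-gross-margin AI business running a lean go-to-market can land at the same net margin as a classic 80%-gross-margin SaaS with heavy S&M spend:

```python
# Illustrative P&L comparison: lower gross margin offset by leaner operating
# expenses. All percentages are hypothetical.

def net_margin(gross_margin, sm_pct, rnd_pct, ga_pct):
    """Net operating margin = gross margin minus operating expense percentages."""
    return gross_margin - sm_pct - rnd_pct - ga_pct

legacy = net_margin(0.80, 0.40, 0.20, 0.10)    # classic SaaS: 80% GM, heavy S&M
ai_first = net_margin(0.60, 0.15, 0.25, 0.10)  # AI-first: 60% GM, lean PLG motion
print(f"legacy: {legacy:.0%}, ai_first: {ai_first:.0%}")  # both land at 10%
```

The point is not that these numbers are typical, but that the whole P&L, not gross margin alone, determines whether the model works.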

6. Plan for Continuous Monetization Evolution: Finally, software execs should recognize that we’re in an unusually dynamic period for pricing norms. AI capabilities and costs are changing rapidly, so pricing and packaging will likely require continuous iteration. Many AI B2B companies have already changed their pricing model 2-3 times in their first year or two. Flexibility is key: be ready to introduce new tiers, adjust limits, or even explore alternative revenue streams (like advertising or marketplaces in some cases) to bolster margins. For instance, OpenAI has tested things like plugin commissions and sponsored content in ChatGPT to augment its revenue beyond just API fees. An AI SaaS serving enterprises might consider add-on professional services (at high margin) or data network access fees, etc. The strategic point is not to be stuck in a legacy SaaS mindset. AI businesses may end up looking more like a blend of SaaS, usage-based cloud service, and consulting at times. The ones who thrive will experiment to find what monetization mix yields both customer value and sustainable profits.

What Is Foreseen for 2026 and Beyond

Per a great Substack article by Jenny Xiao and Jay Zhao: https://leonisnewsletter.substack.com/p/the-state-of-ai-in-2025

We’re living through a period where AI is effectively subsidized. Even as inference becomes 50–100× cheaper every few years, prices remain below true economic cost, propped up by Big Tech, leading labs, and their backers. That won’t last forever.

AI’s unit economics diverge sharply from traditional SaaS. Classic SaaS achieves stellar margins because the incremental cost to serve another customer trends toward zero. AI, by contrast, carries ongoing variable costs: every inference, token, and task incurs spend. As providers move away from subsidy-driven pricing toward cost-reflective rates, this gap will become starker. Founders already report API bills compressing margins despite today’s discounts. If those inputs normalize to real costs, many apps may need to charge $200 or even $2,000 per month instead of $20 just to be sustainable.
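
The sustainability math behind that claim is straightforward: the minimum viable price is the per-user cost divided by (1 − target gross margin). A sketch with hypothetical cost figures:

```python
# Sketch of the sustainability math: if subsidized inference costs normalize
# upward, what monthly price preserves a target gross margin? Hypothetical numbers.

def sustainable_price(monthly_cost_per_user, target_margin):
    """Minimum price so that (price - cost) / price >= target_margin."""
    return monthly_cost_per_user / (1 - target_margin)

# $6/user/month in compute at today's subsidized rates vs. $60 at
# cost-reflective rates, holding a 70% target gross margin:
print(round(sustainable_price(6, 0.70)))   # -> 20
print(round(sustainable_price(60, 0.70)))  # -> 200
```

A 10x jump in input costs forces a 10x jump in the sustainable price, which is exactly the $20-to-$200 repricing the passage anticipates.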

Why the standard SaaS model strains under AI: shifting cost structures will force new pricing mechanics. Xiao and Zhao outline three phases of this evolution. In 2024’s “Premium AI” phase, advanced capabilities command premium paywalls (e.g., $20 ChatGPT-style tiers). Over the next two years, the “Enterprise Scale” phase should take hold, with pricing anchored to concrete actions and outcomes—better aligning fees with delivered value and underlying costs. By 2027–2029, the “AI Economy” phase emerges: autonomous agents operate as economic units, not merely tools. Systems will negotiate, trade, and generate value with minimal human oversight. In that world, the boundary between technology and economic activity dissolves. AI won’t just power the economy; in many contexts, it will be the economy.

Conclusion: Navigating the New Normal of AI-Driven SaaS Economics

For B2B software executives, the rise of AI-first SaaS demands a shift in thinking. Gross margins in this new generation of software are not automatically in the 80-90% comfort zone of yesteryear. Instead, leaders must proactively design their business models to manage much higher variable costs and prove that those costs are justified by equally high value to customers. The economics of AI SaaS often start out looking worse than traditional software – sometimes shockingly so, with gross margins in the 20-40% range early on. But with smart strategy, these can improve over time through infrastructure efficiency gains, better pricing alignment, and building more non-AI value around the core product. Gross margins will likely remain a bit lower than classic SaaS even at maturity (perhaps settling around 60-70% for healthy AI-first businesses), meaning such companies need to either run leaner in other areas or command premium pricing to make the bottom line work.

The strategic implications are clear: AI features should usually be monetized (not given away freely with cheap plans) because they carry a real cost. Pricing models must evolve toward usage-based or outcome-based schemes that capture the value created and cover the resources consumed. Bundling AI into premium packages and clearly articulating ROI will be key to persuading customers to pay more. Additionally, AI-first SaaS firms need to borrow disciplines from cloud and hardware businesses, tracking cost of goods and optimizing every percentage point through technology and scale.

In summary, the AI revolution is changing SaaS from a pure “set it and forget it” subscription model into something more complex – a blend of software, service, and ongoing compute-intensive capability. B2B SaaS executives should adjust their playbooks: success will come from those who can harness AI to deliver game-changing outcomes for customers and do so with unit economics that work in the long run. The companies that strike this balance will not only have happier customers, but also more defensible margins and profitable growth in the age of AI-powered SaaS.

Sources: The analysis above draws on industry research and examples, including venture reports and news: AI gross margin benchmarks, commentary on rising inference costs, and real-world cases like Microsoft’s GitHub Copilot economics and Salesforce’s AI pricing strategy[4]. These illustrate the broader trends reshaping SaaS economics in the AI era.

[1] How Jasper found product-market fit: pivoting to AI-native SaaS

https://www.unusual.vc/post/how-jasper-found-product-market-fit-pivoting-to-ai-native-saas

[2] People complaining about VCs subsidizing AI startups ... - Threads

https://www.threads.com/@ociubotaru/post/DNL7NOqy0md/people-complaining-about-vcs-subsidizing-ai-startups-here-are-the-gross-margins-?hl=en

[3] The State of AI Gross Margins in 2025 - by Tanay Jaipuria

https://www.tanayj.com/p/the-gross-margin-debate-in-ai

[4] How Much Does Salesforce Einstein Agent Cost? A Complete ...

https://www.getmonetizely.com/articles/how-much-does-salesforce-einstein-agent-cost-a-complete-pricing-breakdown

[5] Salesforce Sales Cloud Einstein AI: Features, Benefits & Pricing

https://cloudconsultings.com/salesforce-sales-cloud-einstein/

