
The Subprime AI Crisis: How Ed Zitron's Warning Reveals the Path to Sustainable AI SaaS Pricing

Oct 23, 2025

Executive Summary

The AI industry stands at a critical juncture in October 2025. While generative AI has captured global attention and billions in investment, a stark reality lurks beneath the surface: OpenAI generated $4.3 billion in revenue in the first half of 2025 but posted an operating loss of $7.8 billion over the same period. Only 5% of ChatGPT's 800 million users actually pay for the service. Tech critic Ed Zitron dubbed this phenomenon the "subprime AI crisis" in September 2024, drawing parallels to the subprime mortgage collapse that shook the global economy starting in 2007.

This article examines Zitron's thesis one year later with updated 2025 data and provides a comprehensive framework for building sustainable AI SaaS businesses in an era of inflated valuations and unrealistic pricing models. The path forward requires fundamental rethinking of how AI services are priced, delivered, and scaled.

Part I: Understanding the Subprime AI Crisis

The Core Problem: Subsidized Pricing at Scale

Ed Zitron's central argument, first articulated in September 2024, is that we're experiencing a subprime AI crisis where thousands of companies have integrated generative AI at prices that are far from stable and even further from profitable. The comparison to subprime mortgages is apt: just as banks once offered loans at unsustainable rates that eventually collapsed, AI providers are offering computational services at prices that don't reflect their true costs.

The numbers have only gotten worse since Zitron's initial warning. In 2024, Anthropic was on course to lose $2.7 billion. By 2025, OpenAI's first-half results showed $4.3 billion in revenue against an operating loss of $7.8 billion. As of October 2025, OpenAI is projecting full-year revenue of $13 billion but continues to burn cash at an unprecedented rate, with CEO Sam Altman openly stating the company is "willing to run the loss for quite a while."

These companies operate on what Zitron called "faith-based economics" – generative AI doesn't run on money or cloud credits so much as it does on faith, and faith, like investor capital, is actually a finite resource.

The Hidden Cost Structure

The economics of AI are fundamentally different from traditional SaaS. As of 2025, high-performance GPUs and TPUs cost anywhere from $10,000 to $40,000 per unit, and that's just the beginning. Building a small-scale AI data center with limited processing capacity can cost between $10 million and $50 million, while large-scale facilities designed for extensive artificial intelligence workloads may exceed $500 million.

Cloud GPU pricing has shown some improvement – H100 GPUs have dropped from highs of $8/hour to a more competitive $2.85–$3.50/hour range as of October 2025 – but for enterprises running large-scale AI projects, these costs multiply rapidly. Training alone can be astronomical: OpenAI's compute bill for 2025 could hit $14 billion according to industry estimates.

But hardware is only part of the equation. Training a single AI model consumes enormous amounts of electricity, and AI is expected to drive a 160% increase in data center power demand by 2030. One widely cited study, reported by Harvard Business Review, found that training a single AI model can produce as much carbon emissions as five cars over their lifetimes. Data centers are projected to require $6.7 trillion in worldwide investment by 2030 to keep pace with demand for compute power.

The Cascade Effect

When AI providers eventually face reality and raise prices to sustainable levels, the ripple effects will be devastating. Companies relying on AI infrastructure are already feeling the squeeze in 2025.

The cascade began in earnest in mid-2024. Anysphere, which uses Anthropic's models for its Cursor AI coding tool, imposed massive rate hikes on users in June 2024 despite having raised $900 million in funding, with users flooding Reddit with posts titled "Cursor's New Pricing Model Is Absolute Garbage."

By 2025, the situation has intensified:

  • Anthropic introduced weekly rate limits for Claude Code users in July 2025, affecting even premium subscribers
  • The company launched $100 and $200/month Max tiers in April 2025, a 5-10x increase from the $20 Pro tier
  • Salesforce's "Agentforce" product charges an egregious $2-a-conversation rate
  • Claude's API pricing shows massive premiums: Claude Opus 4 costs up to 50 times more for output than OpenAI's GPT-5 mini

This is what Zitron predicted in his September 2024 warning – companies releasing new products and features with wildly onerous rates that make even stalwart enterprise customers with budget to burn unable to justify the expense.

Part II: The Reality Check - Why Current AI Pricing Models Are Broken

The 2025 Pricing Wars

As of October 2025, the AI industry has entered a full-scale pricing war that validates Zitron's predictions. The competitive dynamics reveal the unsustainability of current models:

OpenAI vs. Anthropic:

  • GPT-5 (launched August 2025): $1.25/million input tokens, $10/million output tokens
  • Claude Opus 4: $15-20/million input tokens, $75-80/million output tokens
  • Result: Claude costs up to 50 times more for output than GPT-5's most affordable tier

This dramatic pricing disparity forces companies to choose between performance (Claude's superior coding capabilities) and cost (OpenAI's aggressive pricing). Neither company is profitable at these prices, creating a race to the bottom that benefits no one long-term.

The Fundamental Mismatch

Traditional SaaS operates on predictable economics: once software is built, the marginal cost of serving additional users is minimal. AI obliterates this model. Unlike traditional SaaS products, where adding users has minimal impact on operational costs, AI tools incur significant expenses with every single query.

This creates what one might call the "success paradox" – the more popular your AI service becomes, the more money you lose if pricing isn't properly calibrated.
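One way to see the paradox is a back-of-the-envelope margin model, with purely illustrative numbers (a $20 flat subscription and an assumed $0.01 inference cost per query; neither figure is a real vendor's):

```python
# Illustrative unit economics in cents: flat-rate revenue, usage-driven cost.
SUBSCRIPTION_CENTS = 2_000    # $20.00 per user per month, fixed
COST_PER_QUERY_CENTS = 1      # $0.01 inference cost per query (assumed)

def monthly_margin_cents(queries_per_user: int) -> int:
    """Gross margin per user: revenue is fixed, cost scales with usage."""
    return SUBSCRIPTION_CENTS - COST_PER_QUERY_CENTS * queries_per_user

for q in (500, 2_000, 5_000):
    print(f"{q:>5} queries/month -> margin ${monthly_margin_cents(q) / 100:+.2f}")
```

At these assumed rates the margin flips negative once a user crosses 2,000 queries per month: the heavier the usage, the larger the loss per subscriber.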

The OpenAI Case Study

OpenAI's pricing strategy illuminates the problem. The ChatGPT Pro subscription, initially priced at $200 per month when launched in late 2024, generated unexpectedly high usage and turned what should have been a premium tier into a loss-making service. Even at $200 per month – a price point that would be considered premium for most SaaS products – the company couldn't cover the actual costs of heavy users.

The conversion rates tell the story: As of October 2025, only 5% of ChatGPT's 800 million monthly active users pay for any tier of the service. That's approximately 40 million paying users generating the bulk of OpenAI's consumer revenue, while 760 million free users continue to burn through computational resources.

OpenAI's financial trajectory from 2024 to 2025 shows the escalating crisis:

  • 2024: $3.7 billion revenue, $5 billion loss
  • H1 2025: $4.3 billion revenue, $7.8 billion operating loss
  • Projected 2025: $13 billion revenue, but potentially $14+ billion in losses if burn rates continue

The Infrastructure Reality

As noted earlier, the cloud price for H100 GPUs has eased from highs of $8/hour in early 2024 to a more competitive $2.85–$3.50/hour range as of October 2025, reflecting increased supply, datacenter competition, and improved availability. For enterprises running large-scale AI projects, however, these costs still multiply rapidly.

According to "The State of AI Infrastructure at Scale 2024" report (which remains the most comprehensive study available), over 75% of organizations report GPU utilization below 70% at peak load, meaning vast amounts of expensive infrastructure sit idle. This inefficiency persists into 2025, with companies struggling to optimize their AI infrastructure investments.

Part III: Building Sustainable AI SaaS Pricing Models

1. Embrace True Cost Transparency

The first step toward sustainability is radical transparency about actual costs. Companies must:

  • Calculate Total Cost of Ownership: Include not just compute but energy, cooling, maintenance, and talent costs
  • Model Non-Linear Scaling: A model that costs $50 per day to serve 1,000 predictions may not simply cost $5,000 to serve 100,000; it could cost far more, due to bottlenecks in compute, memory, and I/O
  • Account for Hidden Costs: In production, hidden costs like storage sprawl, cross-region data transfers, idle compute, and continuous retraining often make up 60% to 80% of total spend
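The hidden-cost point implies a useful rule of thumb: if hidden costs run 60-80% of total spend, measured compute is only 20-40% of the real bill, so dividing compute spend by the remaining share gives a rough TCO estimate. A minimal sketch, with placeholder rates rather than vendor quotes:

```python
# Rough total-cost-of-ownership sketch for a served model.
# Every rate here is a placeholder; substitute your own measured numbers.

def monthly_tco(gpu_hours: float, gpu_rate: float,
                hidden_cost_share: float = 0.7) -> float:
    """Compute spend plus hidden costs (storage sprawl, egress, idle
    compute, continuous retraining), modeled as a share of *total* spend.
    If hidden costs are 70% of total, compute is only 30%,
    so total = compute / (1 - hidden_cost_share)."""
    compute = gpu_hours * gpu_rate
    return compute / (1 - hidden_cost_share)

# Example: 2,000 GPU-hours/month at $3.00/hour is $6,000 of compute,
# but roughly $20,000 of total spend if hidden costs are 70% of the bill.
print(f"${monthly_tco(2_000, 3.00):,.0f}")
```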

2. Implement Value-Aligned Pricing Structures

Usage-Based Pricing with Guardrails

By maintaining a subscription model for the legacy core portfolio but monetizing differentiating AI features on a consumption basis, vendors can achieve a more scalable model that is better aligned to incremental value. This hybrid approach offers:

  • Predictable baseline revenue from subscriptions
  • Variable pricing that scales with actual usage
  • Protection against runaway costs through usage caps and rate limiting
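A minimal sketch of these guardrails, assuming a hypothetical plan (a $99 base fee, 10,000 included units, $0.005 per overage unit, and a $500 hard cap; all parameters are invented for illustration):

```python
def monthly_bill(units_used: int,
                 base_fee: float = 99.0,
                 included_units: int = 10_000,
                 overage_rate: float = 0.005,
                 spend_cap: float = 500.0) -> float:
    """Subscription base plus metered overage, capped so neither the
    customer nor the vendor is surprised by a runaway bill."""
    overage = max(0, units_used - included_units) * overage_rate
    return round(min(base_fee + overage, spend_cap), 2)

print(monthly_bill(8_000))      # inside the included quota: 99.0
print(monthly_bill(30_000))     # 20,000 overage units at $0.005: 199.0
print(monthly_bill(1_000_000))  # hard spend cap applies: 500.0
```

The cap is the guardrail: without it, the consumption component alone reproduces the runaway-cost problem the hybrid model is meant to avoid.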

Token-Based Models

Token-based pricing charges based on input and output tokens, providing volume discounts and transparent calculators. This approach offers:

  • Direct alignment between usage and cost
  • Transparency for customers
  • Ability to offer volume discounts while maintaining margins

2025 Real-World Example: Anthropic's Claude 4.1 pricing structure (as of September 2025):

  • Claude 4.1 Sonnet: $5/million input tokens, $25/million output tokens, $10/million thinking tokens
  • Claude 4.1 Opus: $20/million input tokens, $80/million output tokens, $40/million thinking tokens

The introduction of "thinking tokens" in 2025 represents a new dimension in AI pricing, charging separately for complex reasoning tasks.
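As a sketch, a per-request cost calculator under this three-rate scheme, using the Sonnet figures quoted above ($5 input, $25 output, $10 thinking per million tokens):

```python
# Per-request cost under a three-rate token scheme, using the
# per-million-token rates quoted above for "Claude 4.1 Sonnet".
RATES_PER_MILLION = {"input": 5.00, "output": 25.00, "thinking": 10.00}

def request_cost(input_toks: int, output_toks: int,
                 thinking_toks: int = 0) -> float:
    """Sum each token class's usage times its per-million-token rate."""
    usage = {"input": input_toks, "output": output_toks,
             "thinking": thinking_toks}
    return sum(usage[k] * RATES_PER_MILLION[k] / 1_000_000 for k in usage)

# Example: 2,000 input + 800 output + 500 thinking tokens
# comes to about 3.5 cents at these rates.
print(f"${request_cost(2_000, 800, 500):.4f}")
```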

Outcome-Based Pricing

Outcome-based pricing ties price to measurable results such as successful predictions, time saved, or business metrics improved. This model:

  • Aligns vendor and customer incentives
  • Reduces customer risk
  • Allows for premium pricing when delivering exceptional results
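A minimal sketch of the idea, assuming a hypothetical 15% share of verified customer savings with a $500 monthly floor so serving costs are covered even in a bad month (both parameters are invented):

```python
def outcome_fee(measured_savings: float,
                share: float = 0.15,
                floor_fee: float = 500.0) -> float:
    """Charge a share of verified customer savings, with a floor
    so the vendor's serving costs are covered in low-value months."""
    return round(max(share * measured_savings, floor_fee), 2)

print(outcome_fee(40_000.0))  # 15% of $40k in verified savings: 6000.0
print(outcome_fee(1_000.0))   # savings too small; floor applies: 500.0
```

The hard part in practice is not the arithmetic but agreeing with the customer on how `measured_savings` is verified.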

3. Optimize Infrastructure Economics

Hybrid Cloud Strategies

Many organizations are shifting to hybrid cloud models that balance on-premises infrastructure with cloud resources. This enables cost flexibility: companies can use cloud services for high-demand AI workloads while relying on local infrastructure for steady-state cost efficiency.

Intelligent Resource Management

Companies should implement:

  • Dynamic Scaling: Use spot instances and preemptible VMs for non-critical workloads
  • Off-Peak Scheduling: Schedule retraining jobs during off-peak hours to take advantage of lower cloud rates
  • GPU Optimization: Improve utilization rates through better workload management and scheduling

4. Build Customer Education Into Pricing

One CFO of a Fortune 500 company described the problem: "It is frustrating that I have no idea what we're going to spend on AI this quarter. My business units have no forecast of what they are going to use."

Companies must:

  • Provide usage forecasting tools
  • Offer cost calculators and budgeting assistance
  • Create clear documentation about what drives costs
  • Implement proactive alerts before usage spikes
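The proactive-alert point can be sketched as a simple threshold check against a customer-set monthly budget (the thresholds and message format here are illustrative):

```python
def spend_alerts(month_to_date: float, budget: float,
                 thresholds=(0.5, 0.8, 1.0)) -> list[str]:
    """Return an alert message for each budget threshold crossed,
    so customers hear about a spike before the invoice arrives."""
    frac = month_to_date / budget
    return [f"AI spend at {int(t * 100)}% of ${budget:,.0f} budget"
            for t in thresholds if frac >= t]

# A customer at $4,200 of a $5,000 budget has crossed 50% and 80%.
for msg in spend_alerts(4_200.0, 5_000.0):
    print(msg)
```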

Part IV: The Future of AI Pricing

Market Evolution

The market is currently evolving through several phases:

Phase 1: Reality Check (2024-2025) - NOW

  • Price increases from major providers (Anthropic's tiered pricing, OpenAI's $200 Pro tier)
  • Customer pushback and churn (only 5% of users willing to pay)
  • First wave of AI startup failures and pivots

Phase 2: Consolidation (2025-2027)

  • Weaker players exit or get acquired
  • Pricing models stabilize around true costs
  • Clear winners and losers emerge

Phase 3: Maturation (2027+)

  • Sustainable business models established
  • Infrastructure costs decline through innovation
  • AI becomes economically viable at scale

Conclusion: Building for the Post-Crisis Era

Ed Zitron's warning about the subprime AI crisis from September 2024 has proven prescient. One year later, in October 2025, we're witnessing the early stages of the reckoning he predicted. OpenAI is burning through $7.8 billion per half-year while only 5% of its users pay. Anthropic has been forced to implement rate limits and dramatic price increases. The parallels to 2007 are striking: easy money, unrealistic valuations, and business models built on unsustainable economics.

But unlike the mortgage crisis, we have the opportunity to course-correct before systemic collapse. The warning signs are clear, and companies that act now can still build sustainable businesses.

The path forward requires:

  1. Honest Economics: Stop pretending AI can be delivered at traditional SaaS margins
  2. Value-Based Pricing: Charge based on actual value delivered, not arbitrary metrics
  3. Infrastructure Innovation: Invest in efficiency improvements that reduce underlying costs
  4. Customer Education: Help customers understand and optimize their AI usage
  5. Long-term Thinking: Build for sustainability, not just growth at any cost

As one analysis noted, there is room for less singularity-focused companies, Midjourney among them, to build healthy business models. The companies that survive and thrive will be those that face reality now, price accordingly, and build genuine value for customers.

The subprime AI crisis isn't just coming – it's here. In October 2025, we're living through Phase 1 of the market evolution. The question isn't whether the crisis will deepen, but whether your company will be among the survivors who built sustainable businesses or the casualties who believed the hype. The choice – and the opportunity – is yours.

Get Started with Pricing Strategy Consulting

Join companies like Zoom, DocuSign, and Twilio using our systematic pricing approach to increase revenue by 12-40% year-over-year.
