Leaked Documents Reveal How Much OpenAI Pays Microsoft and the Soaring Cost of Running AI Models

After a busy year filled with major partnerships, constant investor interest, and talks of a possible IPO, OpenAI’s finances are being examined more closely than ever. Newly leaked documents now give insight into one of the biggest mysteries in the AI sector: how much the company actually pays Microsoft, its key infrastructure partner and largest investor.

Tech blogger Ed Zitron obtained financial details that reveal the revenue-sharing arrangement between the two companies. The figures offer a rare look into OpenAI’s revenue growth and the heavy compute costs required to operate and scale its AI systems.

The documents tell a story of rapid growth, complicated financial arrangements, and a troubling question at the center of the AI boom: Are the leading AI firms making enough money to cover the huge costs of running their models?

Massive Revenue-Share Payments to Microsoft

The leaked documents show that Microsoft received $493.8 million in revenue-share payments from OpenAI in 2024. That figure surged to $865.8 million in the first three quarters of 2025 alone.

These payments stem from an agreement under which OpenAI shares 20% of its revenue with Microsoft. That figure has circulated for months, though neither company has publicly confirmed it.

The revenue sharing traces back to Microsoft’s investment of more than $13 billion in OpenAI, a mix of cash and Azure compute credits that cemented the companies’ deep technical and commercial partnership.

A Two-Way Revenue Relationship Complicates the Picture

The leaked numbers only reveal part of the story. The financial relationship between the companies operates in both directions.

While OpenAI shares revenue with Microsoft, Microsoft reportedly also shares revenue with OpenAI from two major products:

– Bing, which uses OpenAI models to enhance search and create answers
– Azure OpenAI Service, which offers API access to OpenAI models for developers

A source familiar with the companies’ agreement says Microsoft gives back around 20% of those revenues to OpenAI.

Importantly, the leaked revenue-share payments seem to reflect Microsoft’s net revenue share, meaning the amounts after subtracting what Microsoft owes OpenAI for Bing and Azure usage. In other words, the actual revenue exchanges between the two companies are larger than the leaked figures indicate.
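As a rough illustration, this netting can be sketched in a few lines of Python. The 20% rates come from the article’s sourcing; the dollar inputs below are entirely hypothetical, since neither company discloses them:

```python
# Hypothetical sketch of the two-way revenue share described above.
# Rates follow the reported 20% figure; revenue inputs are made up.
def net_payment_to_microsoft(openai_revenue, msft_ai_revenue, rate=0.20):
    """Gross share OpenAI owes Microsoft, minus the share Microsoft
    owes back on Bing and Azure OpenAI Service revenue."""
    gross_to_msft = rate * openai_revenue
    back_to_openai = rate * msft_ai_revenue
    return gross_to_msft - back_to_openai

# e.g. $5.0B OpenAI revenue vs. $1.5B Microsoft Bing/Azure-OpenAI revenue
net = net_payment_to_microsoft(5.0e9, 1.5e9)
print(f"net payment ≈ ${net / 1e6:.0f}M")  # ≈ $700M
```

If the leaked figures are net amounts of this kind, the gross flows in both directions would be larger than the payments reported.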

Since Microsoft does not report revenue from Bing or Azure OpenAI separately, it remains challenging to fully understand the financial relationship.

OpenAI Revenue: At Least $6-7 Billion in 21 Months – Likely More

With the widely used 20% revenue-share assumption, analysts can estimate OpenAI’s likely revenue.

From the leaked payments:

– 2024 revenue must have been at least about $2.47 billion
– Revenue for the first three quarters of 2025 must have been at least about $4.33 billion
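The arithmetic behind these floors is easy to verify: divide each leaked payment by the assumed 20% share. A quick sketch:

```python
# Back out OpenAI's implied minimum revenue from the leaked
# revenue-share payments, assuming the widely reported 20% rate.
REV_SHARE_RATE = 0.20  # unconfirmed by either company

payments = {
    "2024": 493.8e6,          # leaked payment to Microsoft, full year
    "2025 (Q1-Q3)": 865.8e6,  # leaked payment, first nine months
}

for period, paid in payments.items():
    implied_revenue = paid / REV_SHARE_RATE
    print(f"{period}: revenue floor ≈ ${implied_revenue / 1e9:.2f}B")
```

These are floors, not estimates: if the payments are net of what Microsoft owes back, actual revenue would be higher still.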

This shows significant growth, but other reports suggest even higher figures.

Industry reports previously estimated that:

– OpenAI made around $4 billion in 2024
– The company earned $4.3 billion in the first half of 2025

CEO Sam Altman has said revenue is “well more” than the reported $13 billion per year and that the company is on track to reach a $20 billion annual run rate by the end of 2025.

Altman even predicted that OpenAI could hit $100 billion in annual revenue by 2027, a bold forecast that highlights how rapidly the demand for generative AI is growing.

The Real Shock: Soaring Inference Costs

While revenue is increasing quickly, so are compute costs.

Zitron’s analysis, based on internal data, suggests:

– OpenAI spent roughly $3.8 billion on inference in 2024
– Inference costs jumped to $8.65 billion in the first nine months of 2025

Inference refers to the compute needed every time an AI model generates output—whether answering a question, writing code, or powering a chatbot session. Unlike training, which occurs occasionally, inference runs continuously and at large scales.

Every message sent to ChatGPT costs money. When billions of messages are processed each month, those costs escalate quickly.

Training Costs Are Mostly Non-Cash – Inference Is Mostly Cash

OpenAI’s deal with Microsoft includes billions in Azure compute credits as part of Microsoft’s investment. These credits cover much of the model training costs—making training primarily non-cash.

But inference is different. Inference compute usage is not covered by credits and must be paid directly in cash.

This creates a financial tension:

– Training = costly, but mainly covered by credits
– Inference = very expensive, and paid in actual money

If Zitron’s numbers are accurate, OpenAI may be spending more on inference alone than it earns in revenue, even as that revenue climbs.
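Putting the leaked figures for the same nine-month window side by side shows the scale of the gap, using only the revenue floor implied by the 20% assumption (actual revenue is reportedly higher):

```python
# First nine months of 2025, per the leaked documents and Zitron's analysis.
inference_spend = 8.65e9        # cash inference cost
revenue_floor = 865.8e6 / 0.20  # minimum revenue implied by the payment

shortfall = inference_spend - revenue_floor
print(f"Inference exceeds the revenue floor by ${shortfall / 1e9:.2f}B")
```

Even if actual revenue ran well above the floor, the comparison illustrates why inference economics, not training, is the pressing question.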

OpenAI’s Expanding Compute Partnerships

For years, OpenAI relied almost entirely on Microsoft Azure, but as usage soared, it began bringing on more cloud partners:

– CoreWeave (GPU-rich cloud infrastructure)
– Oracle Cloud
– AWS
– Google Cloud (recently added)

This strategy seems focused on:

– Reducing dependence on one provider
– Taking advantage of specialized GPU availability
– Encouraging competitive pricing
– Ensuring supply as global demand for AI compute increases

The diversification underscores the intense pressure modern AI firms face to secure enough compute power to satisfy user demand.

Industry Estimates Suggest Even Higher Total Compute Spend

Earlier analyses estimated:

– $5.6 billion total compute spending for 2024
– $2.5 billion “cost of revenue” in the first half of 2025

The leaked numbers suggest that the figure for 2025 is tracking significantly higher.

Between training and inference, OpenAI’s total compute costs may now be reaching unprecedented levels in the software industry.

Is OpenAI Operating at a Loss – Even With Massive Revenue?

The leaked documents raise one of the biggest financial questions in AI today:

Is OpenAI spending more to operate its models than it earns in revenue?

If the answer is yes, it would mean that even the most successful AI company, with the largest enterprise deals and the strongest brand in generative AI, is still unable to operate profitably at scale.

This situation has major implications for:

– AI startups raising billions at high valuations
– Cloud providers rushing to build GPU clusters
– Venture capital enthusiasm for generative AI
– Governments assessing energy and compute demands
– The long-term economics of AI services

If the leader in the field is still operating at a loss, other AI companies with less revenue—or none at all—may face even greater challenges.

The AI Bubble Question Intensifies

For months, economists and tech experts have debated whether the AI industry is in a bubble. The leaked OpenAI financials do not confirm a bubble, but they do intensify the discussion.

Key concerns include:

– GPU and AI infrastructure costs are rising faster than inference efficiency is improving
– User demand for free or low-cost AI tools conflicts with the high cost of running them
– Enterprises are experimenting with AI, but widespread monetization remains unclear
– The industry depends heavily on large cloud provider subsidies
– Competition is growing, including well-funded challengers like Anthropic, xAI, and Google DeepMind

If the most advanced AI company still relies heavily on cloud credits and spends billions more than it earns in cash, the rest of the industry may be even further from sustainable profitability.

A Turning Point for the AI Industry

The leaked documents provide only a partial look at OpenAI’s finances, but they come at a crucial time. The company is:

– Racing to launch more powerful models
– Exploring new revenue options
– Preparing for a potential IPO or further equity sales
– Scaling ChatGPT for business customers
– Building custom hardware and chips
– Expanding internationally
– Facing competition on multiple fronts

Whether OpenAI can shift from rapid growth to lasting profitability will shape the entire future of generative AI.

As the financial picture becomes clearer, investors, businesses, and regulators will pay close attention. The next two years could determine whether AI companies can truly stand on their own or must continue relying on trillion-dollar backers to survive.

Source: techcrunch.com