OpenAI’s $125B Claim—Can It Really Happen?

Dan Schwarz, CEO of Futuresearch, recently shared insights from his company’s ongoing analysis of OpenAI and the broader generative AI market. Futuresearch has lately focused on dissecting OpenAI’s revenue composition to forecast its growth prospects, publishing several analytic reports on the topic. What follows is a heavily edited excerpt from that conversation, covering both recent findings and previously unpublished projections from Futuresearch’s research.

What is your headline takeaway from your analysis of OpenAI’s revenue projections?

Futuresearch was the first to reverse-engineer OpenAI’s revenue streams before they were publicly disclosed, and we’ve been tracking them for over a year. OpenAI’s projection of $125 billion by 2029 is plausible in theory but highly implausible in practice. This relates to a recent report called AI 2027, which describes a scenario in which a frontier lab experiences a runaway AI takeoff built on revenue projections of this kind. When we calibrate OpenAI’s projections against our expert forecasts, we find that hitting these numbers would require unprecedented exponential growth that doesn’t align with observed data or competitive realities.

What is your alternative revenue projection for OpenAI, and why is the range so wide?

Our 90% confidence interval for OpenAI’s 2027 revenue spans roughly $10 billion to $90 billion. This is an extraordinarily wide range because OpenAI is perhaps the most uncertain business possible to forecast. On one hand, they could monopolize multiple industries through rapid exponential growth. On the other hand, they could stumble due to ongoing litigation, talent exodus, and competition from other labs that already have better models in some categories. I personally lean toward the more bearish end of our internal forecasts. The uncertainty grows substantially when extending forecasts beyond 2027, making the 2029 projection even more speculative.
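
To see how a 90% interval that wide can arise, here is a minimal Monte Carlo sketch. It is not Futuresearch’s methodology; the baseline figure and the growth-rate distribution are illustrative assumptions chosen only to show how a few years of uncertain compounding produces a range spanning roughly an order of magnitude.

```python
import numpy as np

# Illustrative Monte Carlo sketch (not Futuresearch's actual model):
# start from a ~$3.5B annualized 2024 baseline and compound three years
# of highly uncertain growth to get a distribution over 2027 revenue.
rng = np.random.default_rng(0)

base_revenue_2024 = 3.5      # $B, annualized
n_sims, years = 100_000, 3   # simulate 2025-2027

# Assumed growth multipliers: lognormal with median ~2x per year and a
# wide spread to cover both "stumble" and "monopolize" scenarios.
growth = rng.lognormal(mean=np.log(2.0), sigma=0.4, size=(n_sims, years))
revenue_2027 = base_revenue_2024 * growth.prod(axis=1)

lo, med, hi = np.percentile(revenue_2027, [5, 50, 95])
print(f"90% interval: ${lo:.0f}B to ${hi:.0f}B (median ${med:.0f}B)")
```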

[Note: This forecast was later updated, reflecting new considerations; the revised figures ($11B–$70B, median $41B) are presented in the graphic below.]

Source: futuresearch.ai
How is OpenAI’s revenue currently split between different sources?

Contrary to early assumptions, API calls account for no more than 15 percent of OpenAI’s revenue as of mid-2024. The bulk comes from ChatGPT’s consumer tier and increasingly from ChatGPT Enterprise. When we published our first analysis, many people thought API was the dominant source, but we demonstrated that ChatGPT was driving most of their revenue, and this pattern continues today.

What data sources and methods underpin your forecasts?

A good forecast blends data extrapolation with judgmental forecasting, adjusting for factors not captured in historical data. We start by extrapolating OpenAI’s initial revenue ramp (from $1B to $3.5B annually), but this provides only a crude baseline given the limited data points.
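
As a rough illustration of why that baseline is crude (and not Futuresearch’s actual model), assume purely for illustration that the roughly $1B-to-$3.5B ramp took about a year and that the multiplier holds; compounding it forward quickly produces implausible numbers.

```python
# A crude constant-growth extrapolation of OpenAI's early ramp.
# Assumption (illustrative only): ~$1B annualized grew to ~$3.5B over
# roughly one year, i.e. a 3.5x year-over-year multiplier.
start_revenue = 3.5   # $B, annualized at the end of the observed ramp
growth_factor = 3.5 / 1.0

revenue = start_revenue
for year in range(2025, 2030):
    revenue *= growth_factor
    print(f"{year}: ${revenue:,.0f}B")
# The projection blows past $1T by 2029, which is exactly why a pure
# extrapolation from so few data points is only a crude starting point.
```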

More importantly, we track competition closely by evaluating how good OpenAI’s models are compared to alternatives (Claude, Gemini, Llama, DeepSeek, etc.). This is challenging because benchmarks change constantly and don’t necessarily reflect actual user experience or specific use cases.

We also use judgmental forecasting techniques similar to prediction markets or Tetlock’s work for questions with limited direct data, such as lawsuit outcomes. This approach naturally yields wide intervals rather than spurious precision.

How do OpenAI’s growth projections compare to other tech giants historically?

Looking at historical data: Microsoft took 28 years to reach $100 billion in revenue, Amazon took 18 years, Google took 14, Facebook took 11, and ByteDance just 6. If you model the time needed to reach $100 billion as shrinking by roughly 40% each decade, it implies that a frontier lab could theoretically get there within about four years—consistent with the AI 2027 scenario—but that would require outpacing even ByteDance by a wide margin. Achieving that level of growth would require monopolistic dominance, which is far from guaranteed given the competitive landscape.
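
One minimal way to make that extrapolation concrete (a sketch, not Futuresearch’s exact model) is to fit an exponential trend to the years-to-$100B sequence above and read off the next point in the series.

```python
import numpy as np

# Years each company took to reach ~$100B in annual revenue, per the
# figures quoted above.
companies = ["Microsoft", "Amazon", "Google", "Facebook", "ByteDance"]
years_to_100b = np.array([28, 18, 14, 11, 6])

# Fit an exponential trend: log(years) against position in the sequence.
idx = np.arange(len(years_to_100b))
slope, intercept = np.polyfit(idx, np.log(years_to_100b), 1)

ratio = np.exp(slope)
print(f"Each successive company took ~{ratio:.2f}x the time of the previous one")

# Extrapolate one step to the next entrant on this trend line.
next_years = np.exp(intercept + slope * len(years_to_100b))
print(f"Trend-line projection for the next entrant: ~{next_years:.1f} years to $100B")
# The fit lands in the four-to-five-year range, in the same ballpark as
# the figure discussed above.
```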

Why are you skeptical that ChatGPT can continue as the primary revenue driver?

I don’t believe in ChatGPT as a long-term driver for massive revenue. It faces the most competitive pressure. Free tiers from Google (Gemini), Meta (Meta.ai), Anthropic, and others are already excellent, sometimes better for specific use cases, and often multimodal. Meta, in particular, is aiming squarely at ChatGPT, potentially making it an open-source commodity.

It’s hard to imagine tens of millions of people paying $20/month long-term when comparable or better free alternatives exist. Paid subscribers were estimated at around 23 million as of April 2024, but churn is reportedly high. Every time you use ChatGPT, they’re likely losing money due to inference costs. Both Google and Meta have massive war chests compared to OpenAI, which has had to raise stupendous amounts of venture capital just to get this far.
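
For a sense of scale, a back-of-envelope calculation using only the figures above (ignoring Enterprise and Team tiers, churn, annual discounts, and regional pricing) looks like this:

```python
# Back-of-envelope consumer subscription revenue, using only the
# figures cited above (illustrative; ignores Enterprise/Team tiers,
# churn, annual discounts, and regional pricing).
paying_users = 23_000_000
monthly_price = 20  # USD, ChatGPT Plus

annual_revenue = paying_users * monthly_price * 12
print(f"Implied consumer run rate: ${annual_revenue / 1e9:.1f}B per year")
# ~$5.5B/year: a large number, but one that has to keep growing against
# high-quality free alternatives to justify the 2029 projection.
```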

What about API revenue? Can that become a significant growth driver?

I don’t believe in the API as a source of massive, exponential growth either. The API market is intensely competitive – it’s a race to the bottom on price. Google appears to be winning on the frontier of quality-for-cost right now. Models from Google, Anthropic, and open-weight options (Llama, DeepSeek) are excellent and often cheaper or better for specific needs.

Customers can switch providers “on a dime” – it’s easy to stop spending with OpenAI and move to Google or Anthropic, as my own company has done. The idea of API revenue growing into a “ten billion behemoth” seems very implausible. This dynamic makes it hard for OpenAI to preserve margins on either API or consumer tiers.

If neither ChatGPT nor API will drive massive growth, what could?

Agents represent the only credible path to scaled revenue that I find plausible, though still uncertain. If OpenAI reaches the higher end of our revenue projection ($60-90 billion by 2027), at least one-third would likely come from agent-based revenue – and they barely generate any agent revenue right now.

Success would hinge on OpenAI launching products specifically for software automation and ramping them to billions in revenue within 1-3 years. This could involve automating complex white-collar work through systems like their “operator” concept that can control a computer to perform tasks. Examples include financial analysis, software automation, or general computer operation.

The recent release of o4, despite delays, represents a step-function improvement in agentic flows, particularly web research. It shot to the top of our leaderboards for complex tasks requiring reasoning, tool use, and overcoming gullibility, surpassing even Gemini 2.5 Pro and Claude models on those specific tasks at that time.

Source: futuresearch.ai
Does OpenAI have a sustained technical advantage over competitors?

No – there’s no single “head and shoulders” leader. The advantages appear extremely fleeting right now. An edge gained one month (like GPT-4o in web research) could be lost the next. DeepSeek in China, Meta, Google – everyone is iterating rapidly.

The definition of “foundational model” gets tricky. OpenAI would need a decisive advantage in the capability that drives revenue. If agents are the key, they need the best agentic capabilities. o4 looks like reinforcement learning applied over a base model to perform tasks (tool use, search, code execution). Is the underlying base model the best, or is the agentic layer on top the key differentiator? It’s not entirely clear.
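
To make that distinction concrete, here is a toy sketch of what an agentic layer over a base model amounts to: a plain loop that asks the model for its next step, executes a tool, and feeds the observation back. All names here (call_base_model, the tool registry, the step format) are hypothetical placeholders, not OpenAI’s implementation.

```python
# Toy sketch of an agentic loop layered on top of a base model.
# call_base_model() is a hypothetical stand-in for any chat/completions
# API; the base model's quality and the loop's design (tools, stopping
# rules, error handling) are separate sources of capability.

def call_base_model(transcript: list[str]) -> str:
    """Hypothetical stand-in: returns the model's next proposed step."""
    return "FINISH: example answer"  # a real call would go to an LLM API


TOOLS = {
    "search": lambda query: f"(search results for {query!r})",
    "run_code": lambda code: "(code execution output)",
}


def run_agent(task: str, max_steps: int = 10) -> str:
    transcript = [f"Task: {task}"]
    for _ in range(max_steps):
        step = call_base_model(transcript)
        if step.startswith("FINISH:"):
            return step.removeprefix("FINISH:").strip()
        # Convention of this sketch: steps look like "TOOL_NAME: argument".
        tool_name, _, argument = step.partition(":")
        tool = TOOLS.get(tool_name.strip().lower())
        observation = tool(argument.strip()) if tool else "(unknown tool)"
        transcript.append(f"{step}\n-> {observation}")
    return "No answer within the step budget."


print(run_agent("Summarize OpenAI's 2027 revenue outlook"))
```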

Unless a lab achieves a true, defensible breakthrough, the current state feels more like a continuous leapfrogging race where leadership changes frequently. This makes long-term revenue projections based on current leads very fragile.

What’s the path to a potential “winner takes all” dynamic in AI?

The most plausible path to that kind of scenario involves a positive feedback loop in research automation. If any frontier lab (not necessarily OpenAI – could be DeepMind, Meta, Anthropic, X.ai) can significantly automate its own software engineering and research processes (coding, running experiments, analyzing results), its researchers become vastly more productive.

If they can make research 3x or 10x faster, they could gain an insurmountable advantage. They use that advantage to further automate their research, getting faster and faster. This positive feedback loop is where a winner-takes-all dynamic could emerge, leading one company to pull years ahead of competitors who were previously only months behind. This seems to be the path towards the kind of monopolistic advantage OpenAI would need, and labs are likely working on this explicitly.
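
As a toy illustration of how such a loop compounds (the numbers are purely hypothetical, not a forecast): if each research cycle also speeds up the next one, the gap over a competitor working at constant speed widens very quickly.

```python
# Toy model of a research-automation feedback loop (illustrative only).
# Each "cycle" of research both produces output and improves the lab's
# own tooling, multiplying the speed of the next cycle.

speedup_per_cycle = 1.5   # hypothetical: each cycle makes research 1.5x faster
cycles = 8

speed = 1.0
cumulative_progress = 0.0
for cycle in range(1, cycles + 1):
    cumulative_progress += speed
    print(f"cycle {cycle}: speed {speed:.2f}x, total progress {cumulative_progress:.1f}")
    speed *= speedup_per_cycle

# A competitor without the loop accrues 1.0 unit of progress per cycle,
# so after 8 cycles it has 8.0 versus ~49.3 here: a gap measured in
# "years" of ordinary research rather than months.
```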

How significant is the talent exodus from OpenAI?

The talent exodus from OpenAI is an underrated problem. The number of great researchers who have left to directly compete with OpenAI is probably unprecedented for a leading tech company. We’ve tracked key departures – most have gone to competitors like X.AI, Anthropic, and Ilya Sutskever’s Safe Superintelligence.

In AI, having the right brilliant people in the right place might be the deciding factor. Companies like Anthropic are attracting top talent from both Google and OpenAI, and I don’t see movement in the opposite direction. When brilliant graduates from top CS PhD programs choose between offers from frontier labs, there’s a good chance they might choose Anthropic over OpenAI, which could prove to be a decisive advantage in the long run.

How important are multimodality, coding, and robotics for future revenue?

Multimodality: For web-research agents, multimodal abilities (reading screenshots, extracting tables, parsing infographics) are crucial. However, Google appears to be leading in this area currently. Multimodal capabilities are critical for agents working with knowledge workers, including customer service agents who need to handle calls, talk, listen, and potentially join video calls.

Coding: This represents a multi-trillion-dollar opportunity that doesn’t strictly require multimodality. If OpenAI could create armies of superhuman coders, that’s a path to revenue. However, they aren’t clearly the best now – engineers have flocked to Claude for coding, while Gemini and GPT models remain competitive.

Robotics: Despite decades of high expectations, warehouse and manufacturing automation remain largely manual. Amazon’s robotics is limited, and Boston Dynamics has yet to deliver widely adopted commercial robots. Modern generative-AI techniques might ignite a new robotics wave, but history suggests we should brace for potential disappointment over the next decade. The analogy to self-driving cars is relevant – Tesla’s Full Self-Driving promises haven’t materialized as advertised, while Waymo’s more incremental approach led to slower but real deployment.

Futuresearch’s May 2025 Deep Research Bench: ChatGPT-o3+search (default o3, not OpenAI Deep Research) leads in agentic web research.
How does Anthropic compare in this landscape?

Anthropic faces similar challenges to OpenAI but with a more concentrated risk profile, being heavily dependent on API revenue, much of which comes via AWS. As API becomes increasingly competitive with pricing pressure, this creates significant vulnerability.

However, Anthropic has potential advantages. They focus heavily on interpretability – understanding and tweaking the internals of their models – possibly more than any other lab. If they can make breakthroughs in understanding why these models are so capable and how to better align them, they could gain a decisive technical edge.

Anthropic also seems to be winning in talent acquisition. Many brilliant researchers who left Google and OpenAI have gone to Anthropic. If safety and reliability become critical differentiators – if other models start engaging in problematic behaviors – Anthropic’s focus on alignment could become a major competitive advantage.

What’s the key takeaway for teams building AI applications?

Don’t assume OpenAI’s current or projected dominance is guaranteed. The market is fiercely competitive, and advantages are temporary. Design your systems to be model-agnostic; avoid locking yourself into a single provider.

Experiment with models from Google, Anthropic, Meta (Llama), DeepSeek, and others – you might find better performance or cost-effectiveness for your specific use case. Be prepared for rapid shifts in capabilities and pricing.
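
One way to act on that advice is to put a thin, provider-agnostic interface between your application and any vendor SDK. The sketch below is minimal and the names are hypothetical; real adapters would wrap each vendor’s SDK behind the same interface so that switching providers is a configuration change rather than a rewrite.

```python
from dataclasses import dataclass
from typing import Protocol


@dataclass
class ChatResult:
    text: str
    model: str


class ChatProvider(Protocol):
    """Minimal interface every provider adapter implements."""

    def complete(self, prompt: str, *, max_tokens: int = 512) -> ChatResult: ...


class EchoProvider:
    """Stand-in adapter so this sketch runs without any SDK.

    A real adapter (for OpenAI, Anthropic, Gemini, or a local
    Llama/DeepSeek deployment) would make the vendor API call inside
    complete() and map the response into a ChatResult.
    """

    def __init__(self, model: str = "echo-1"):
        self.model = model

    def complete(self, prompt: str, *, max_tokens: int = 512) -> ChatResult:
        return ChatResult(text=prompt[:max_tokens], model=self.model)


def summarize(provider: ChatProvider, document: str) -> str:
    # Application code depends only on the ChatProvider interface,
    # so swapping vendors never touches this function.
    result = provider.complete(f"Summarize in one sentence:\n{document}")
    return result.text


if __name__ == "__main__":
    provider = EchoProvider()  # swap for a real adapter via configuration
    print(summarize(provider, "OpenAI's projections assume rapid growth."))
```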

While OpenAI could achieve massive success through agents or a research breakthrough, their path is far more uncertain than their projections suggest. Focus on the practical utility and cost of different models for your application today, while keeping an eye on the potential for disruptive agentic capabilities in the near future from any of the major labs.
