=== Assistant: We can only answer this by building a simplified economic model and being explicit about assumptions. ===

We can only answer this by building a simplified economic model and being explicit about assumptions. I'll do that carefully and conservatively, without using sources, as you asked.

==== You said to exclude: ====

* Future commitments
* Capital investments
* Datacenters under construction
* Non-operating infrastructure

So we are considering current operations only, meaning:

* Revenue coming in right now
* Costs to run existing models and services right now
* Ignore R&D expansion, future bets, depreciation of new builds, etc.

This is essentially: "If OpenAI froze expansion today, what does the P&L look like?"

==== Let's break revenue into three buckets: ====

===== Consumer subscriptions =====

Reasonable assumptions:

* Tens of millions of active users
* A smaller fraction paying
* Average revenue per paying user ≈ $20/month or less

A conservative but plausible range:

* $3–5 billion per year

===== API usage =====

This is usage-based and scales heavily with demand. Reasonable assumptions:

* Large enterprise customers
* High token volumes
* Many customers subsidized by volume discounts

Estimated range:

* $2–4 billion per year

===== Enterprise contracts and partnerships =====

Includes:

* Enterprise ChatGPT contracts
* Strategic partnerships
* Specialized deployments

Estimated range:

* $1–2 billion per year

===== Total revenue: ≈ $6–11 billion per year =====

For modeling, let's take the midpoint: $8.5B/year

==== Now let's estimate ongoing costs only, excluding expansion. ====

===== Compute and inference =====

This includes:

* GPUs
* Power
* Cooling
* Networking
* Redundancy

Given global scale and heavy usage:

* $4–6 billion per year

Let's use: $5B

===== Staff =====

Includes:

* Engineers
* Researchers
* Infra / SRE
* Security
* Support
* Admin

Even at high salaries, this is small relative to compute:

* $1–1.5B per year

Use: $1.2B

===== Other operating costs =====

* Monitoring
* Compliance
* Storage
* Legal
* Payments / fees

Estimate:

* $0.5–0.8B per year

Use: $0.6B

===== Total costs: $6.8B per year =====

==== Putting it together ====

* Revenue: $8.5B
* Costs: $6.8B
* Profit: ≈ $1.7B per year

==== Depending on assumptions: ====

* Low revenue / high cost case → Profit ≈ $0–500M
* High revenue / efficient inference case → Profit ≈ $3–4B

A very reasonable confidence interval is:

==== OpenAI's reported losses are not because the core product is unprofitable. ====

They are primarily due to:

* Aggressive scaling
* Pre-buying massive compute
* Training next-gen models
* Building capacity far ahead of demand

In other words:

If you want, I can:

* Break this down per user
* Model profit per token
* Compare this to Google/AWS margins
* Or explore what happens if inference costs drop 50%

Just tell me.
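
The whole estimate reduces to a few additions and one subtraction, so it can be sanity-checked with a short script. Below is a minimal Python sketch of the same back-of-the-envelope model: every figure is one of the assumed values from the text above (not reported financials), the bucket labels are names chosen for this illustration, and the last lines work out the "inference costs drop 50%" case offered as a follow-up.

<syntaxhighlight lang="python">
"""Back-of-the-envelope P&L sketch matching the estimate above.

All figures are the assumed values from the text, in $ billions per year;
they are illustrative assumptions, not reported financials.
"""

# Revenue buckets as (low, high) ranges, per the assumptions above.
REVENUE_RANGES = {
    "consumer subscriptions": (3.0, 5.0),
    "API usage": (2.0, 4.0),
    "enterprise contracts / partnerships": (1.0, 2.0),
}

# Ongoing cost buckets, using the point estimates chosen above.
COSTS = {
    "compute and inference": 5.0,
    "staff": 1.2,
    "other operating costs": 0.6,
}

# Total revenue range and the midpoint used for modeling.
revenue_low = sum(lo for lo, _ in REVENUE_RANGES.values())   # 6.0
revenue_high = sum(hi for _, hi in REVENUE_RANGES.values())  # 11.0
revenue_mid = (revenue_low + revenue_high) / 2               # 8.5

total_costs = sum(COSTS.values())                            # 6.8
profit_mid = revenue_mid - total_costs                       # ~1.7

print(f"Revenue range: ${revenue_low:.1f}B-${revenue_high:.1f}B, midpoint ${revenue_mid:.1f}B")
print(f"Ongoing costs: ${total_costs:.1f}B per year")
print(f"Midpoint operating profit: ${profit_mid:.1f}B per year")

# Offered follow-up: what if inference (compute) costs dropped 50%,
# with revenue and the other cost buckets held constant?
cheaper_costs = {**COSTS, "compute and inference": COSTS["compute and inference"] * 0.5}
profit_cheap = revenue_mid - sum(cheaper_costs.values())
print(f"Midpoint profit with 50% cheaper inference: ${profit_cheap:.1f}B per year")
</syntaxhighlight>

Running it reproduces the $8.5B revenue, $6.8B cost, and ≈ $1.7B profit midpoint figures, and under these same assumptions the midpoint profit rises to roughly $4.2B per year if inference costs were halved while everything else held constant.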