=== Assistant ===
Short answer: no, it probably won't "eat through" the world's RAM supply nonstop until 2027. HBM (AI-grade memory) will stay extremely tight until ~2026–2027, while regular DDR RAM pressure should ease earlier. Let's break this down cleanly.

==== First, the timeline ====
This is the key correction:
* OpenAI was founded in 2015
* ChatGPT launched in Nov 2022
* The infrastructure shock started in 2023, not with OpenAI itself
So the "5-year compressed buildout" people talk about is roughly 2023 → 2027/28, not 2022 → 2027. You're about ~2.5 years in, not 3+.

==== "RAM shortage" mixes two very different things ====
When headlines say "RAM shortage", they usually conflate two very different products.

===== Regular DDR RAM =====
* Used in PCs, laptops, servers
* Supply is already improving
* Prices are cyclical
* AI is not the dominant consumer
👉 This demand should normalize by 2025–2026

===== HBM (High Bandwidth Memory) =====
* Sits on AI GPUs (H100, H200, Blackwell)
* Extremely hard to manufacture
* Few suppliers (SK Hynix, Samsung, Micron)
* This is the real choke point
👉 This stays constrained until ~2026–2027

So no, OpenAI isn't starving all RAM; it's pressuring HBM specifically.

==== AI infrastructure growth is S-curve, not linear ====
The buildout moves through three phases (see the sketch at the end of this reply).

===== Phase 1: Build-out race =====
* Everyone races to build training clusters
* Over-ordering happens
* Massive upfront memory demand
✅ We are here

===== Phase 2: Saturation =====
* Core capacity mostly in place
* New orders = incremental, not exponential
* Optimization starts
⚠️ Coming soon

===== Phase 3: Efficiency =====
* Same hardware reused more efficiently
* Better models, better memory usage
* Inference > training
📉 Demand growth slows

So demand peaks before the infrastructure "finishes".

==== Will OpenAI keep consuming memory aggressively until 2027? ====
More precise answer:
* 2024–2025: yes, very aggressive consumption
* 2026: still heavy, but stabilizing
* 2027: no longer the dominant growth driver
Why?
* Training runs don't scale forever
* Inference scales differently (often less HBM per user)
* Model efficiency improves faster than people expect
Also: Google, Meta, Amazon, and Microsoft catch up, so OpenAI's relative share drops.

==== Much of the ordering is defensive ====
A lot of these wafer / HBM deals are:
* reservations
* hedges
* "just in case" capacity locks
Historically, this leads to:
* temporary shortages
* followed by sudden gluts
Semiconductor cycles always overshoot.

==== Bottom line ====
* HBM tight: until ~2026–27
* Regular RAM tight: mostly eases by ~2025
* AI memory panic: peaks in 2025
* Post-2027: demand still grows, but sanely
So no, OpenAI won't be "eating the world's RAM" nonstop till 2027. It's more like 2023–2025 is the chaos window.

If you want, next we can talk about what would actually cause demand to crash early (there are a few real risks).
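To make the S-curve point concrete, here is a minimal, purely illustrative Python sketch. The numbers are assumptions invented for the example (a 100-unit capacity ceiling, midpoint year 2025.5, steepness 1.8 per year); they are not forecasts from the conversation. It shows how yearly additions, the part that drives new memory orders, peak around the midpoint and then shrink even while cumulative capacity keeps growing.

<syntaxhighlight lang="python">
# Illustrative only: a logistic (S-curve) buildout with made-up parameters.
# "Capacity" is cumulative installed AI capacity in arbitrary units.
import math

def cumulative_capacity(year, ceiling=100.0, midpoint=2025.5, steepness=1.8):
    """Logistic curve: growth looks exponential early, then flattens."""
    return ceiling / (1.0 + math.exp(-steepness * (year - midpoint)))

prev = cumulative_capacity(2022)
for year in range(2023, 2029):
    total = cumulative_capacity(year)
    print(f"{year}: cumulative={total:5.1f}  added this year={total - prev:5.1f}")
    prev = total
</syntaxhighlight>

The exact parameters don't matter; the shape is the point: on a logistic curve, incremental demand (new orders) peaks well before the buildout "finishes".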