===== (1) TPUs only directly affect the circle of customers willing to lock their workloads into the Google ecosystem =====
A TPU is not a white-label card you can buy anywhere; it is essentially something different:
* Customers who want to build their own data centers, run multi-cloud, or are already heavily invested in AWS / Azure will not suddenly say "fine, I'm moving everything to TPUs" just because "Gemini is strong".
* Instead, they will either:
** keep buying NVIDIA in volume,
** wait for their own cloud's ASICs to mature (Trainium / Maia), or
** adopt ASIC solutions from newer startups.
→ Which means: Google reaching SOTA on TPUs does not directly translate into "global GPU demand declines"; it only means "the GPU ceiling for Google's slice of demand drops".

===== (2) Share loss vs. a growing pie: which moves faster? =====
Even if you assume:
* NVIDIA's AI-accelerator market share falls from 90% to 70–75%, with the difference taken by TPU / Trainium / Maia and the like,<ref>{{cite web|title=Barron's|url=https://www.barrons.com/articles/nvidia-stock-ai-chip-revenue-98dfccf1|publisher=barrons.com|access-date=2025-11-26}}</ref>
* while at the same time:
** worldwide AI CapEx keeps growing explosively, and
** companies like xAI are publicly targeting compute on the order of 50 million H100-equivalents within five years, which is not a trivial scale,<ref>{{cite web|title=Tom's Hardware|url=https://www.tomshardware.com/tech-industry/artificial-intelligence/elon-musk-says-xai-is-targeting-50-million-h100-equivalent-ai-gpus-in-five-years-230k-gpus-including-30k-gb200s-already-reportedly-operational-for-training-grok|publisher=Tom's Hardware|access-date=2025-11-26}}</ref>
then the likely effect is:
: For the stock, the story goes from "absolutely unbeatable" to "still good, but not as magical as before". That is the logic of a valuation-multiple derating, not the logic of a demand collapse.

===== (3) Strong TPUs do not mean "GPUs don't matter", only that "GPUs are not the only option" =====
Multiple reports and analyses note that:
* On certain workloads (especially Google's own well-tuned TensorFlow / JAX / Pathways stack plus large-scale LLMs), TPUs genuinely beat general-purpose GPUs on perf/watt and cost/token.<ref>{{cite web|title=CloudOptimo|url=https://www.cloudoptimo.com/blog/tpu-vs-gpu-what-is-the-difference-in-2025/|publisher=cloudoptimo.com|access-date=2025-11-26}}</ref>
* But GPUs still have:
** generality (not just LLMs, but also vision, scientific computing, and all sorts of miscellaneous workloads),
** an ecosystem (CUDA, the major frameworks, tooling, third-party libraries), and
** the option for customers to buy them and rack them in their own machine rooms, without having to commit to any one cloud.
So the future looks more like:
:
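The share-vs-pie argument in (2) can be sanity-checked with back-of-envelope arithmetic. The figures below (a 90% → 72% share, a market that triples) are purely hypothetical illustrations consistent with the ranges quoted above, not forecasts:

```python
# Back-of-envelope check: can absolute revenue grow while share shrinks?
# All numbers are hypothetical illustrations, not forecasts.

market_now = 100.0      # index today's AI-accelerator market at 100
share_now = 0.90        # ~90% share today

market_future = 300.0   # assume total AI CapEx-driven market triples
share_future = 0.72     # assume share falls into the 70-75% band

rev_now = market_now * share_now          # revenue index today (~90)
rev_future = market_future * share_future # revenue index later (~216)
growth = rev_future / rev_now             # ~2.4x absolute growth

print(f"revenue index: {rev_now:.0f} -> {rev_future:.0f} ({growth:.1f}x)")
```

Under these toy numbers, absolute revenue still grows ~2.4×, which is why share erosion reads as multiple compression rather than a demand collapse.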