==== 2. CUDA + NVIDIA: "anyone can buy it, and it runs anywhere" ====

Now look at it from the other side, GPU + CUDA:

* You can:
- Rent NVIDIA GPUs on AWS / Azure / GCP / OCI
- Have a contract manufacturer build out whole racks and install H100 / B200 yourself
- Buy a single workstation, put a 4090/5090 in it, and do research in-house
* The same CUDA / PyTorch / TensorRT stack can be used:
- in the cloud
- on-prem
- across multiple clouds
- for customer demos
(see the device-agnostic sketch at the end of this section)

This ability to "cover every deployment scenario" is something the TPU simply does not have.

So for the market, the question is not simply:
: "How much cheaper is one token on a TPU than on a GPU?"
but the more practical one:
: "At what cost can most of the world's developers, companies, and clouds run on the same platform?"

On that question, NVIDIA + CUDA currently wins, and by a wide margin.
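To make the "same stack everywhere" point concrete, here is a minimal, purely illustrative PyTorch sketch (the model and tensor sizes are made up for the example). The same file runs unchanged on a rented cloud H100, an on-prem rack, or a workstation 4090/5090, because the code only asks whether a CUDA device is present:

<syntaxhighlight lang="python">
import torch

# Pick whichever NVIDIA GPU happens to be present; fall back to CPU otherwise.
# The script does not know or care whether it is on AWS, an on-prem rack,
# or a single workstation.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
if device.type == "cuda":
    print("Running on:", torch.cuda.get_device_name(0))
else:
    print("Running on CPU")

# A toy model; the training/inference code is identical across environments.
model = torch.nn.Linear(1024, 1024).to(device)
x = torch.randn(64, 1024, device=device)
y = model(x)
print(y.shape)  # torch.Size([64, 1024])
</syntaxhighlight>

Nothing in this sketch names a specific cloud or machine; that neutrality is the portability the section is describing.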