Major hyperscalers (Amazon, Meta, Google, Microsoft) plan massive CapEx (~$600B combined), much of which is multi-year setup (turbine deposits, power purchase agreements, data-center construction) rather than immediate server purchases. 🏗️⚡
The semiconductor supply chain (logic, memory, EUV tooling) is the real long-lead bottleneck for AI compute scale — not power or land in the near term. 🧩🔧
Timeline & where CapEx comes online
CapEx is staged: upfront deposits and construction today fund compute years out; not all $600B becomes running capacity this year. 🗓️🏦
Expect ~20 GW incremental US AI capacity this year; hyperscalers + labs drive most of it. ⚡
Labs (OpenAI, Anthropic) — money vs. compute needs 💸🖥️
OpenAI & Anthropic are each running ~2–2.5 GW today; to sustain high revenue growth each needs to scale to multiple GW (Anthropic may need ~5–6 GW by year-end). 📈
Recent fundraises (OpenAI $110B, Anthropic $30B) can cover yearly rental compute costs (estimated ~$10–13B per GW/year), easing concerns about ability to pay. 🪙
Labs that precommit long-term get better pricing/margins; late buyers pay steep spot/short-term premiums and may have to use “neoclouds” or lower-quality capacity. 🤝📉
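The rental-cost figures above can be sanity-checked with quick arithmetic. A minimal sketch, using only the rough estimates from these notes (~$10–13B per GW/year, ~2.5 GW today, a $30B raise):

```python
# Back-of-envelope check (rough estimates from the notes, not exact figures):
# can a recent fundraise cover a year of rental compute at ~$10-13B per GW/year?

RENTAL_COST_PER_GW_YEAR = (10e9, 13e9)  # low/high estimate, USD per GW per year

def yearly_rental_cost(gw: float) -> tuple[float, float]:
    """Estimated annual rental-cost range (USD) for a given compute footprint."""
    lo, hi = RENTAL_COST_PER_GW_YEAR
    return gw * lo, gw * hi

# At today's ~2.5 GW, a $30B raise roughly covers one year of rental:
lo, hi = yearly_rental_cost(2.5)
print(f"2.5 GW rental: ${lo/1e9:.0f}B-${hi/1e9:.0f}B/yr vs. $30B raised")
```

Scaling to ~5–6 GW would push annual rental well past a single raise, which is one reason precommitted long-term pricing (next bullet) matters so much.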
Contracts, pricing & depreciation
Long-term GPU contracts (5-year) lock in cheaper supply; short-term deals (1–3 years or spot) can spike above $2/hr for H100-equivalent capacity. ⛓️💵
Depreciation debate: 5-year amortization is common, but value per GPU may actually rise if model utility (value per token) increases, the opposite of the naive tech-depreciation argument. 📊🔁
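To make the contract-vs-spot gap concrete: a hedged sketch of the effective hardware cost per hour under straight-line 5-year amortization. The ~$30k purchase price is an illustrative assumption, not a figure from these notes:

```python
# Hedged sketch: hardware-only $/hr under straight-line 5-year amortization,
# for comparison with the >$2/hr short-term rates above.
# The ~$30k GPU purchase price is an illustrative assumption.

HOURS_PER_YEAR = 8760

def amortized_rate_per_hour(purchase_price: float, years: int = 5,
                            utilization: float = 1.0) -> float:
    """Hardware-only cost per GPU-hour (excludes power, networking, opex)."""
    return purchase_price / (years * HOURS_PER_YEAR * utilization)

# Roughly $0.68/hr at full utilization; the gap to >$2/hr spot is
# margin plus power, networking, and everything else.
print(f"${amortized_rate_per_hour(30_000):.2f}/hr")
```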
Semiconductors: who owns supply & why Nvidia pulled ahead 🧠🔩
Nvidia secured early, large long-term wafer/memory allocations and orchestrated supply-chain coordination (PCB, memory vendors), leading to dominant access to leading-node capacity (N3 etc.). 🥇
TSMC/ASML and memory vendors hold critical leverage; Nvidia locked many upstream commitments early. 🔗
Key bottlenecks across timeline
Near-term (this year–2027): fab/clean-room space, fab tool installation, and memory-fab lead times; data centers & power are solvable faster. 🏭⏳
Later (~2028–2030): EUV tool production (ASML capacity, optics by Zeiss, source by Cymer) becomes limiting. ASML tool cadence constrains wafer output. 🛠️🔬
EUV tooling & math to gigawatts
Rough estimate: ~2 million EUV wafer passes needed per GW of advanced AI chips → ~3.5 EUV tools per GW. A single EUV tool costs ~$300–400M, so per-tool throughput matters. ➗🔭
Even with an aggressive ASML ramp (~70 tools today → ~80–100 by decade's end), the total tool count puts an upper bound on possible GW/year from fabs.
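The tools-to-gigawatts math above can be sketched directly from the figures in these notes:

```python
# Back-of-envelope: the installed EUV fleet caps how many GW of advanced AI
# silicon fabs can supply per year. Ignores non-AI demand on the same tools,
# which makes the real AI-available bound lower.

TOOLS_PER_GW = 3.5  # from ~2M EUV wafer passes per GW of AI chips (per the notes)

def max_gw_per_year(euv_fleet: float) -> float:
    """Upper bound on GW/year of AI compute given a total EUV tool count."""
    return euv_fleet / TOOLS_PER_GW

for fleet in (70, 80, 100):
    print(f"{fleet} tools -> ~{max_gw_per_year(fleet):.0f} GW/yr")
```

Even the high end of the ramp (~100 tools) bounds output to roughly 29 GW/yr of advanced AI silicon before any non-AI wafer demand is subtracted.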
Memory (DRAM/HBM) crunch — impact & mechanisms 🧠🔋
HBM bandwidth >> DDR, but HBM is area-expensive (fewer bits/wafer). AI demand drives huge HBM/HBM4 orders; DRAM/HBM shortages and price hikes cascade to consumer devices (phones/PCs). 📱💻
SemiAnalysis: ~30% of Big Tech 2026 CapEx directed toward memory. Memory fabs take ~2 years to build — so supply lags demand. 🏭⏱️
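Quick arithmetic on the two figures above. Treating the ~$600B CapEx plan as the base for the ~30% memory share is a simplification (the 30% figure is specifically for 2026 CapEx):

```python
# Rough check using figures from the notes: memory's share of Big Tech CapEx,
# and when fabs funded now actually add supply.

TOTAL_CAPEX = 600e9       # ~$600B planned hyperscaler CapEx (from the notes)
MEMORY_SHARE = 0.30       # SemiAnalysis estimate for 2026 (from the notes)
FAB_LEAD_TIME_YEARS = 2   # memory fabs take ~2 years to build (from the notes)

memory_capex = TOTAL_CAPEX * MEMORY_SHARE
print(f"~${memory_capex/1e9:.0f}B toward memory; fabs funded in 2026 "
      f"come online around {2026 + FAB_LEAD_TIME_YEARS}")
```

The ~2-year lead time is why price pressure hits consumer DRAM now even as new capacity is being funded.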
Alternatives & system trade-offs
Using DDR instead of HBM yields far more capacity per wafer but dramatically lower bandwidth (orders of magnitude), shifting system design and latency/throughput trade-offs: it could enable cheaper "slow" inference modes, but loses high-value low-latency use cases. ⚖️🧩
Packaging (CoWoS, multi-die) and advanced interconnects help reduce cross-chip penalties; big gains come from co-design of HW/SW/models. 📦🔗
Google: very large TPU pods with torus topology (thousands of chips, neighbor-linked).
Amazon: hybrid approaches.
Topology, bandwidth, and memory capacity shape feasible model sizes and RL/training speed. 🕸️⚙️
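The DDR-vs-HBM trade-off above can be sketched with a simple bandwidth-bound model: autoregressive decode is roughly memory-bound, so per-replica tokens/s is capped by bandwidth divided by bytes streamed per token. All bandwidth and model-size numbers below are illustrative assumptions, not figures from these notes:

```python
# Hedged sketch: why memory bandwidth dominates low-latency inference.
# Assumes decode is fully memory-bound and every token streams the full
# weights once; real systems use batching, caching, and parallelism.

def decode_tokens_per_sec(bandwidth_bytes_s: float, model_bytes: float) -> float:
    """Rough per-replica upper bound on decode speed."""
    return bandwidth_bytes_s / model_bytes

MODEL_BYTES = 70e9  # e.g. a 70B-parameter model at 1 byte/weight (assumption)

hbm = decode_tokens_per_sec(3.0e12, MODEL_BYTES)  # ~3 TB/s, HBM-class part (assumption)
ddr = decode_tokens_per_sec(0.3e12, MODEL_BYTES)  # ~300 GB/s, DDR-class system (assumption)
print(f"HBM ~{hbm:.0f} tok/s vs DDR ~{ddr:.1f} tok/s per replica")
```

Under these assumptions the DDR system is ~10x slower per replica, which is the "slow but cheap inference" regime the notes describe.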
Power & data centers — more flexible than chips 🔌🏙️
Power can be scaled with many routes: combined-cycle, aeroderivatives, ship engines, reciprocating engines, Bloom fuel cells, solar+storage, behind-the-meter solutions. Many suppliers exist beyond the 3 big turbine makers. 🔥🔋🌞
Unlocking peaker / behind-the-meter capacity or modular factory-built data-center blocks can rapidly increase build rate despite permitting/labor constraints. Construction modularization reduces skilled-labor bottleneck. 🏗️🚚
Space data centers — skeptical for this decade 🌌❌
Energy in space