Is 2026 finally the year for data center storage to crossover to SSDs?
By Roger Corell, Sr. Director, AI and Leadership Marketing
November 24, 2025
If the 2010s were about spinning every last bit out of hard disk drives, the mid-2020s are about something more existential: whether the relentless rise of AI, and the heat, power and density that accompany it, forces a profound rethink of storage architectures. Hard drives still win the headline $/TB metric that got them into every hyperscaler, but multiple vectors are now converging that demand a revisit of storage systems: diminishing returns from HDD innovation, AI raising the value of what was once considered “cold” data, and a step-function change in the performance and energy constraints of AI buildouts. Put together, 2026 starts to look less like a gentle inflection and more like a pivot: not the “death of disk,” but the year SSDs expand from the fast tier to include the capacity tier for a much larger slice of data-center deployments.
AI changed the math on power, space and latency
Let’s start with the most unforgiving constraint: electricity. The latest update from Lawrence Berkeley National Laboratory estimates U.S. data-center energy use climbed from ~76 TWh in 2018 to ~176 TWh in 2023 (about 4.4% of U.S. electricity)—and could reach 325–580 TWh by 2028, depending largely on AI server growth and cooling choices.1 That is a sober acknowledgement that accelerated servers (GPUs) and the critical IT around them are reshaping power profiles and pushing sites toward lower PUEs (a measure of how efficiently data centers use energy for IT infrastructure) but much higher absolute loads.
Power scarcity is no longer theoretical. Operators are seeking creative workarounds like co-locating with generation companies or tapping high-voltage interconnects to keep scaling. In fact, some AI data-center developers are pursuing dedicated power plants to bypass grid interconnections altogether. New or restarted nuclear facilities for the sole purpose of datacenter builds are even in play. Energy procurement is now on the critical path for AI expansion.2
Storage is part of that reality. When every watt must justify itself against GPU throughput, utilization and scale, the performance-per-watt and capacity-per-rack calculus favors SSDs. Independent analyses show high-capacity QLC SSDs delivering materially better TB/W and higher effective rack utilization than nearline HDDs, enabling denser AI infrastructure within the same power and floor-space envelope.3
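To make the TB/W and rack-density argument concrete, here is a minimal sketch of the arithmetic for a hypothetical 100 PB warm tier. Every drive spec below (capacities, wattages, drives per rack) is an illustrative assumption for the comparison, not a vendor-published figure.

```python
import math

TARGET_TB = 100_000  # a hypothetical 100 PB warm tier

# name: (capacity in TB, active watts per drive) -- assumed values
drives = {
    "nearline HDD": (30, 9.5),
    "QLC SSD":      (122, 25.0),
}

DRIVES_PER_RACK = 1000  # assumed dense-enclosure count per rack

for name, (cap_tb, watts) in drives.items():
    count = math.ceil(TARGET_TB / cap_tb)          # drives to hit the tier
    racks = math.ceil(count / DRIVES_PER_RACK)     # racks those drives occupy
    print(f"{name}: {count} drives, {racks} racks, "
          f"{cap_tb / watts:.2f} TB/W, {count * watts / 1000:.1f} kW")
```

Under these assumptions the SSD tier needs roughly a quarter of the drives and draws noticeably less power for the same capacity, which is the shape of the result the cited analyses report.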
HDD innovation is real — but the returns are getting narrower
The hard-drive industry has not stood still. Seagate’s HAMR achieved 28–30 TB capacities in 2025, and the company announced a path to even loftier capacities.4 Western Digital and Toshiba continue to advance, and as of mid-October 2025, Toshiba even verified 12-platter stacking in a 3.5-inch form factor, projecting 40 TB drives by 2027.5 These are impressive feats of materials science, thermal control and servo wizardry.
But the nature of the gains has shifted. Higher areal density, extra platters and new recording assistance (HAMR/MAMR) chiefly expand capacity; they do not improve random performance or reduce latency, because mechanical limits are stubborn. As capacities rise, IOPS/TB drops, rebuild windows stretch and background processes eat more of the limited head time. The capacity curve has decoupled from the performance curve. That’s why SSD vendors emphasize metrics like IOPS/TB, IOPS/W and TB/W to highlight how solid-state meets performance requirements while remaining extremely power- and space-efficient.
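The IOPS/TB decline follows directly from the mechanics: a spindle's random IOPS are bounded by seek and rotational latency, so they stay roughly flat while capacity grows. A quick sketch, assuming an illustrative ~170 random IOPS per 7200-rpm nearline spindle (not a specific vendor spec):

```python
# Random IOPS per spindle are roughly constant regardless of capacity,
# so IOPS/TB falls as drives get bigger.
SPINDLE_IOPS = 170  # assumed figure for a 7200-rpm nearline drive

for capacity_tb in (10, 20, 30, 40):
    print(f"{capacity_tb} TB drive: {SPINDLE_IOPS / capacity_tb:.1f} IOPS/TB")
```

Doubling capacity halves IOPS/TB, which is exactly why larger HDDs defend cold tiers while losing ground on anything latency-sensitive.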
Meanwhile, SSDs have been quietly leapfrogging on solution innovations
The old story of “SSDs are fast but small and expensive” has evolved into a new one: “SSDs are fast and huge, and improving on $/TB.” By late 2024, five vendors had >60 TB SSDs in market or sampling; by 2025 several vendors were touting 122 TB devices, with one vendor, Solidigm, shipping 122 TB, and public roadmaps point to ~245 TB drives in 2026.6 Underlying those leaps is a NAND roadmap that keeps compounding: SK Hynix confirmed mass production of 321-layer QLC in 2025, and Samsung has telegraphed even more aggressive layer counts.7 Of course, as Solidigm, the current high-capacity leader, points out, layer count isn’t the only game in capacity scaling: bits per cell, cell size, cells per layer, component-size optimization and packaging all come into play.
And SSD performance isn’t plateauing. PCIe Gen 5.0 is mainstream in xPU servers, and the role of storage is expanding beyond xPU utilization into solution-level cooling efficiency. This is most visible in all-liquid-cooled NVIDIA GB300 servers, where enterprise-grade liquid-cooled SSDs pioneered by Solidigm helped achieve a fully fanless design, improving GPU density and cooling cost.
The cost gap has narrowed
Let’s tackle $/TB head-on. Hard disk still leads on raw capacity economics. Western Digital’s modeling puts HDDs multiple-X cheaper per TB than SSDs through the middle of the decade.8 That headline matters for cold archives and deep-capacity object stores.
But three dynamics are compressing the real gap:
- NAND scale and mix. After a brutal downcycle in early 2024, suppliers cut output and re-optimized for higher-margin nodes. Prices began rising in early 2025, but the structural trend is toward higher density, which improves cost/TB over time. Several trackers show supply tightness continuing into 2026, with density gains somewhat cushioning unit economics.9
- Performance-based TCO. When power, cooling and time enter the model (job completion time, GPU power density scaling, GPU utilization), SSD-first architectures frequently reduce total cost despite higher $/TB. The industry is increasingly stressing time-aware TCO because faster storage shortens execution windows and frees compute cycles — gold in the AI economy.10
- Footprint and energy. For equivalent capacity and throughput, all-flash can shrink rack count and power draw by up to 90 percent. Third-party analyses benchmarking high-capacity QLC SSDs against 30+ TB nearline HDDs show meaningful gains in TB/W and data-center density, translating into fewer racks, fewer PDUs, fewer network ports and improved PUE.11
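A minimal sketch of how time-aware TCO can flip the comparison even when SSD storage costs more up front. Every figure here is an illustrative assumption (storage prices, job durations, cluster size, GPU rate), chosen only to show the mechanism:

```python
def run_cost(storage_usd, job_hours, gpu_count, gpu_usd_per_hour):
    """Storage spend plus the GPU-hours consumed while the job runs."""
    return storage_usd + job_hours * gpu_count * gpu_usd_per_hour

# Assumed scenario: the SSD tier costs $100k more but cuts a 300-hour
# training job to 200 hours on a 256-GPU cluster at $4/GPU-hour.
hdd_total = run_cost(storage_usd=200_000, job_hours=300,
                     gpu_count=256, gpu_usd_per_hour=4.0)
ssd_total = run_cost(storage_usd=300_000, job_hours=200,
                     gpu_count=256, gpu_usd_per_hour=4.0)

print(f"HDD tier total: ${hdd_total:,.0f}")  # $507,200
print(f"SSD tier total: ${ssd_total:,.0f}")  # $504,800
```

The cheaper-per-TB tier loses once idle-GPU time enters the model; the larger the cluster or the bigger the speedup, the more lopsided the result becomes.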
In practice, operators report a widening set of workloads — databases, analytics, hot object tiers, AI feature stores, vector search and fast restore tiers for cyber-resilience — where “SSD wins on TCO” is no longer controversial. Deep-cold archives and immutable backups remain HDD strongholds for now, but the boundary is moving.
But isn’t HDD capacity growth accelerating again?
Yes and no. The short answer: expect bigger HDDs through 2027–2030 — but not proportionally faster ones. Seagate’s HAMR shipments at 28–30 TB and channel availability confirm the tech is real, and roadmaps to 100 TB this decade keep the medium relevant for exabyte-scale archives.12
The strategic rub is that these capacity advances defend HDDs in cold storage but work against the responsiveness and efficiency needs of latency-sensitive, power-constrained AI and other warming data. Stated differently: HAMR helps HDDs stand their ground where they already shine; it doesn’t expand into SSD space where warm data economics are decided.
If 2026 is a crossover, it won’t be a single graph where SSD $/TB intersects HDD $/TB. It’ll be driven by a blend of workload and infrastructure efficiency requirements:
- Hot object and metadata tiers in object stores go flash-first for agility, efficiency, and ransomware-resilience (fast restores).
- AI data pipelines standardize on PCIe SSDs end to end to save power and space and to maximize GPU utilization and scalability.
- Primary databases and analytics continue their march to all-flash, now at multi-hundred-terabyte-per-node densities that once demanded an excessive number of HDDs.
- Local and edge deployments choose SSDs because space, power and other locality constraints are bounded long before budget.
- Greenfield deployments are rapidly becoming the domain of all-SSD architectures, designed in from the start for power and space optimization and future-proofed for rising performance, capacity and efficiency requirements.
- Archival/cold tiers will remain dominated by HDD (and tape), but with a growing flash “landing zone” that schedules migration during grid-friendly windows.
The 2026 call: a critical pivot, not an epitaph
In 2026, the ingredients line up at the same time:
- AI power squeeze becomes a governor. A growing share of new capacity is sited where power is scarce, expensive or delayed—making density per rack and TB/W decisive. This is a world where every watt matters.13
- SSD capacities normalize at “many-petabytes-per-rack.” 122 TB is here; ~245 TB drives are projected in 2026, collapsing drive counts for PB-scale tiers.14
- HDD supply looks tight just when demand is peaking. Long lead times and constrained SKUs push buyers to qualify flash alternatives for projects they must deliver now.15
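The “collapsing drive counts” point above is simple arithmetic. A sketch for a hypothetical 10 PB tier, ignoring redundancy overhead for simplicity and using the per-drive capacities discussed in this piece (30 TB nearline HDD, 122 TB shipping SSD, ~245 TB projected SSD):

```python
import math

TIER_TB = 10 * 1000  # a hypothetical 10 PB tier

for cap_tb in (30, 122, 245):
    drives = math.ceil(TIER_TB / cap_tb)  # drives needed, rounded up
    print(f"{cap_tb:>3} TB drives: {drives:>4} needed")
```

Going from 30 TB spindles to 245 TB SSDs cuts the drive count by roughly 8x, with corresponding reductions in slots, cabling and failure domains to manage.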
HDDs aren’t going away. The exabyte tide guarantees a massive role for spinning disk in archives and colder object tiers through this decade and beyond. But the center of gravity in new data-center deployments is shifting. AI’s performance hunger and the grid’s constraints change the optimization target from “cheapest bytes” to “fastest, densest bytes within a power envelope.” SSDs happen to be the right answer for a lot more of those problems than they were even two years ago. That’s why 2026 looks like a crossover year in spirit, if not in a single metric. This is the year architects stop confining SSDs to the performance tier and start treating them as the default for workloads demanding an optimal mix of performance, capacity and efficiency.
The content is paid for and supplied by advertiser. The Washington Post was not involved in the creation of this content.
Sources
1 2024 United States Data Center Energy Usage Report
2 Can US infrastructure keep up with the AI economy?
3 QLC vs HDDs: The Battle for High-Capacity Storage
4 Seagate launches 28 and 30 TB HAMR hard drives for edge AI and NAS
6 The case for high-cap SSDs overtaking HDDs as datacenter standard
7 Samsung reportedly plans to leapfrog to 430-layer NAND in 2025
8 Data Growth Is Inevitable. So Is the 6x Cost Advantage of HDDs
10 Total Cost of Ownership (TCO) Model for Storage
11 QLC vs HDDs: The Battle for High-Capacity Storage
12 Seagate Ships 30TB Drives to Meet Global Surge in Data Center AI Storage Demand
13 AI Data Centers, Desperate for Electricity, Are Building Their Own Power Plants