The next decade in disk drives: Increases in density lead to decreases in performance

June 29 2020
by James Sanders


Despite the burgeoning popularity of SSDs, traditional hard drives remain more affordable on a direct dollars-per-TB comparison. While hard drive manufacturers are touting advances to meet the moment, per-disk performance is declining precipitously as disk densities increase. Understanding how drives are changing is key to developing plans to maintain performance.

The 451 Take

Fundamental low-level changes in the way spinning disks are manufactured, a response to enterprise demands for higher-capacity drives, are more noticeable than in previous generations. While drive manufacturers are taking steps to mitigate performance impacts on a per-disk basis, deliberate planning is required to maintain minimum acceptable performance for enterprise workloads.

Drive manufacturers are pursuing increased capacity at the partial expense of performance. This is reflected in the parallel existence of explicitly conventional and shingled magnetic recording (CMR/SMR) product lines. That said, the primary driver of storage is, and will remain, cost for capacity. Costlier products, across a spectrum of CMR HDDs, MLC/TLC SSDs or SCM, are favored on a per-application or per-circumstance basis.

Storage complexity may push trade winds toward the cloud – the on-demand nature of cloud storage and the general cloudward trend of compute resources are likely to put a checkbox in the cloud column when capacity reviews are held. While the long-tail implications of COVID-19 have yet to play out, any perceived benefit of on-premises storage is likely mooted by the sudden shift to remote working.

A decade in densifying disk drives

The previous decade saw crucial improvements to drive density, starting with breaking the 2TB master boot record barrier and ending with the introduction of 16TB drives. This eightfold increase in density was principally achieved through three key advances:

  • Helium-filled drives, debuting in 2012, allow manufacturers to increase the number of individual platters in a single 3.5-inch disk drive – the first generation of such drives integrated seven platters, compared with traditional five-platter drives. The use of helium reduces air shear, which in turn cuts motor power draw and heat generation. With the advent of thinner platters, the current state of the art is a nine-platter helium-filled drive.

  • Shingled magnetic recording, commercialized in 2013, uses partially overlapping data tracks in a pattern reminiscent of roof shingles to increase density (by roughly 25%), with a noticeable degradation in write performance. The use of SMR in consumer products has been a source of controversy; despite this, manufacturers are planning for enterprise storage leveraging SMR.

  • Two-dimensional magnetic recording (TDMR) combines signals from multiple data tracks using an array of read heads, requiring more powerful drive controllers to reassemble the signals into the correct data. This can be used in conjunction with SMR or conventional perpendicular magnetic recording to increase density by approximately 10%.

These approaches, while important, have limits. The prospect of adding several more platters is stymied by physical space constraints and physics limitations inherent to the high-speed rotation of thin discs while maintaining structural integrity. Increasing density to keep pace with the storage demands of enterprises requires a more aggressive method of magnetic recording.
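As a rough illustration, the per-technique gains cited above compound multiplicatively. The sketch below uses only the approximate figures from this report (they are not product specifications), and shows why these mechanisms alone cannot account for the decade's eightfold capacity increase – the remainder came from areal-density improvements in the recording medium itself.

```python
# Rough illustration of how the cited per-technique gains compound.
# All figures are approximations from the text, not product specs.

base_platters = 5    # typical air-filled 3.5-inch drive
helium_platters = 9  # current state-of-the-art helium-filled drive
smr_gain = 1.25      # shingled recording: ~25% density gain
tdmr_gain = 1.10     # two-dimensional recording: ~10% gain

platter_gain = helium_platters / base_platters  # 1.8x from extra platters
combined = platter_gain * smr_gain * tdmr_gain

print(f"Combined multiplier: {combined:.2f}x")  # roughly 2.5x, not 8x
```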

Heat-assisted magnetic recording (HAMR)

Touted popularly in the media for much of the last decade, HAMR – a recording method in which magnetic film is spot-heated when writes are performed – is anticipated to hit the channel by late 2020 (Seagate is accelerating rollout following increased demand for density as a consequence of COVID-19). In comparison with the 1.14Tb/in² areal density of CMR, disk platter manufacturer Showa Denko estimates HAMR platters will achieve 5-6Tb/in² 'in the future,' for capacities of 'approximately 70-80TB' per 3.5-inch drive.

However, while the raw capacities of drives are increasing, traditional platter hard drives have largely been stuck in terms of performance as SSDs steal the spotlight for high IOPS and low latency. The inherently mechanical design of traditional disk drives is an encumbrance – the contents of an entire platter cannot be read at once, requiring an actuator to move a drive head around the platter to read or write data.

A return to dual-actuator drives

Seagate is publicly touting a dual-actuator drive as a solution to this problem. This is not uncharted territory – Conner Peripherals (acquired by Seagate in 1996) attempted this in 1994. That attempt – the Conner Chinook – used a 5.25-inch form factor with 3.5-inch media, with actuators on opposite ends of the drive. This design introduced vibration issues under load, leading to higher failure rates.

Seagate's new attempt – branded MACH.2 – uses two actuators on the same pivot point, with the top and bottom halves of the drive operating independently from each other, essentially doubling the performance of a given drive. These are anticipated to be drop-in compatible with existing 3.5-inch drives.

Thinking in volumes

High I/O workloads such as databases are migrating (or have migrated) to SSDs, but this trend does not absolve platter hard drives of minimum IOPS requirements. Per-disk IOPS performance takes a hit as capacities increase, particularly with the use of SMR. Mitigating this decline is a requirement for enterprise storage appliances deployed on-premises or as the local component of a hybrid cloud.
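The squeeze can be made concrete with back-of-the-envelope arithmetic: a 7,200rpm drive delivers random IOPS that stay roughly flat regardless of capacity, so IOPS per TB falls as drives grow. The figure used below is an illustrative assumption, not a vendor specification.

```python
# Illustrative: random IOPS per TB as drive capacity grows while the
# mechanical IOPS ceiling stays roughly flat. Figures are assumptions,
# not vendor specifications.

DRIVE_IOPS = 180  # assumed random IOPS for a typical 7,200rpm HDD

for capacity_tb in (2, 8, 16):
    print(f"{capacity_tb:>2}TB drive: {DRIVE_IOPS / capacity_tb:.1f} IOPS/TB")
```

Running this shows IOPS per TB collapsing from 90 at 2TB to just over 11 at 16TB, which is the density-versus-performance trade the report describes.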

That said, the cheapest way of mitigating this is simply overprovisioning, which reduces the utility of having denser drives in the first place. Strategies such as RAID60 – block-level RAID0 striping across multiple RAID6 groups, each with distributed double parity – once reserved for high-availability applications, may be required to achieve even average performance with high-density drives.
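The capacity cost of that layout is straightforward to sketch: each RAID6 group sacrifices two drives' worth of capacity to parity, and RAID0 sums the groups. The array shape and drive size below are hypothetical examples, not recommendations.

```python
# Usable capacity of a RAID60 array: RAID0 striping across RAID6 groups,
# each group losing two drives' worth of capacity to distributed parity.
# Group count, group size and drive size are hypothetical examples.

def raid60_usable_tb(groups: int, drives_per_group: int, drive_tb: float) -> float:
    """Each RAID6 group contributes (n - 2) data drives; RAID0 sums the groups."""
    if drives_per_group < 4:
        raise ValueError("RAID6 requires at least 4 drives per group")
    return groups * (drives_per_group - 2) * drive_tb

# Example: two 8-drive RAID6 groups of 16TB drives.
print(raid60_usable_tb(groups=2, drives_per_group=8, drive_tb=16))  # 192.0
```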

Flash caching is likely a more performant, yet potentially costlier, option. It requires logic to determine which files belong on the flash cache, and thresholds for when cached data should be evicted, in order to preserve the lifespan of the caching drives. While this logic exists today, adjustments may be necessary to compensate for the lower individual performance of higher-density drives and the proliferation of shorter-lifespan flash, namely QLC. Maintaining performance may require increasing the percentage of flash storage used to cache denser drives.
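The admission logic described above can be reduced, in a minimal sketch, to a frequency threshold: a block is promoted to flash only after it has been read several times, which limits flash writes and so protects the lifespan of the caching drives. The threshold and the class structure here are illustrative assumptions, not any vendor's implementation.

```python
# Minimal sketch of threshold-based cache admission: a block is promoted
# to the flash tier only after ADMIT_THRESHOLD reads, limiting flash
# writes to protect drive lifespan. Values are illustrative assumptions.
from collections import Counter

ADMIT_THRESHOLD = 3  # assumed reads before a block earns a flash slot

class FlashCache:
    def __init__(self):
        self.reads = Counter()  # read counts per block
        self.cached = set()     # blocks currently held in flash

    def on_read(self, block: str) -> str:
        if block in self.cached:
            return "flash hit"
        self.reads[block] += 1
        if self.reads[block] >= ADMIT_THRESHOLD:
            self.cached.add(block)  # promote: costs one flash write
            return "promoted to flash"
        return "served from disk"

cache = FlashCache()
for _ in range(3):
    print(cache.on_read("blk0"))  # disk, disk, then promoted
print(cache.on_read("blk0"))      # flash hit
```

A production policy would also need eviction (the thresholds for overwriting caches mentioned above), but the admission side alone shows why a lower threshold trades flash lifespan for hit rate.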

Alternatively, a tiering system using CMR drives for live data, with HAMR drives for nearline data, could be a cost-efficient means of solving the performance problem.
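In its simplest form, such a tier assignment reduces to an age-based rule: recently accessed data stays on the faster CMR tier, while cold data is demoted to dense HAMR nearline storage. The 30-day cutoff below is an illustrative assumption, not a recommendation.

```python
# Simplified sketch of CMR/HAMR tier assignment by data age.
# The 30-day cutoff is an illustrative assumption, not a recommendation.
from datetime import datetime, timedelta

NEARLINE_AGE = timedelta(days=30)  # assumed cutoff for demotion to HAMR

def tier_for(last_access: datetime, now: datetime) -> str:
    """Live data stays on faster CMR drives; cold data moves to dense HAMR."""
    return "HAMR (nearline)" if now - last_access > NEARLINE_AGE else "CMR (live)"

now = datetime(2020, 6, 29)
print(tier_for(datetime(2020, 6, 25), now))  # CMR (live)
print(tier_for(datetime(2020, 4, 1), now))   # HAMR (nearline)
```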