AMD Rome restores the rule of Moore's law in datacenters

August 15 2019
by Daniel Bizo

Fabless chip designer AMD's new server processors, codenamed Rome, present makers and buyers of IT infrastructure with their best chance in more than a decade to loosen Intel's grip on them. For relatively young cloud operators, this is the first time a viable second source to Intel processors has emerged. Crucially, Rome processors, sold under the EPYC brand, restore the cadence of Moore's law and deliver its benefits in the form of a major jump in performance density and power efficiency. Meanwhile, Intel is still probably 12 months away from reaching high-volume production of its 10-nanometer server chips – a major window of opportunity for AMD to win customers over and re-establish its footing. Our analysis suggests AMD has the best chance in cloud and high-performance computing, where Rome achieves superior energy efficiency at high utilization.

The 451 Take

AMD is back in the datacenter game. This is the single most important take-away from the launch of its latest server processor, Rome. It took AMD the better part of the decade to regroup and mount another attempt on the datacenter market, while Intel, uncontested, made billions from the rapid swelling of web and public cloud infrastructures. Luckily for AMD, the launch of Rome comes at a time when Intel is seeing a long delay to its next-generation silicon technology. Infrastructure customers, particularly larger ones, will welcome choice and the possibility of improving their infrastructure efficiency. Intel is set to lose some market share in the process but might find meaningful competition reinvigorating. Expect brisk technology development and the revival of Moore's law.


The last time AMD found itself with an advantage was 2003, when it introduced a new chip design that gave the market exactly what it wanted: an affordable, moderately powered but well-performing 64-bit processor. Both Intel's high-end 64-bit design, Itanium (years late and requiring brand-new software), and its mainstream x86 architecture of the time proved to be technological dead ends; it took Intel three years to back out of them and catch up with the competition. By the time it did, AMD had taken about a quarter of the datacenter processor market.

Since then, Intel has not missed a beat. Its relentless development of server chips not only stopped AMD's gains but marginalized it in the datacenter by the start of the decade and wiped out all 'big iron' architectures with the sole exception of IBM's Power systems and mainframes. Despite efforts from AMD and various ARM-based designers, almost all servers in use today run on Intel processors, including all hyperscale infrastructure built over the past 10 years.

This may finally change. Much as it did 16 years ago, AMD has rediscovered its chip-design know-how just as Intel has fallen behind on its technology roadmap. Chief among Intel's problems is the long delay in the rollout of its next-generation manufacturing technology, dubbed 10-nanometer, which means it has, for the time being, been unable to reap the performance, energy or cost benefits of Moore's law. Even though Intel has started ramping production of 10-nanometer mobile processors and vows not to make another misstep with future technologies, its 10-nanometer Xeons (Ice Lake) are still anywhere from 9 to 15 months away from volume production, depending on the pace of yield improvements.

This gives AMD an opening. All indications are that the chip designer and its manufacturing partner TSMC have their act together: AMD has developed a potent processor architecture on TSMC's cutting-edge 7-nanometer semiconductor technology with impressive results. AMD's Rome comes close to or outperforms Intel's latest top-end Xeon (Cascade Lake) but does so at much lower power consumption.

In the interest of manufacturability, AMD's architects decided to make Rome even more modular than its predecessor, Naples, which laid the foundations for Rome's arrival. Rome comprises up to eight compute chips, each of which contains up to eight cores and cache memory, and one auxiliary chip to handle memory and all internal and external communications – like an eight-processor server on a package. Smaller chips are easier to manufacture (lower chance of defects) and can be flexibly mixed and matched to create final products. This is not the first time a processor maker has gone for multi-chip packaging, but AMD clearly took modularity to another level with Rome.
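The yield argument behind this modularity can be sketched with the textbook Poisson defect model, in which the fraction of defect-free dies falls exponentially with die area. The defect density and die areas below are illustrative assumptions, not AMD or TSMC figures:

```python
import math

def poisson_yield(die_area_mm2: float, defects_per_cm2: float) -> float:
    """Fraction of defect-free dies under a Poisson defect model:
    yield = exp(-D * A), with D in defects/cm^2 and A converted to cm^2."""
    return math.exp(-defects_per_cm2 * die_area_mm2 / 100.0)

# Illustrative numbers only: a hypothetical process with 0.5 defects/cm^2.
D = 0.5
monolithic = poisson_yield(640, D)  # one hypothetical large 64-core die
chiplet = poisson_yield(80, D)      # one hypothetical small 8-core compute die

print(f"640 mm^2 monolithic die yield: {monolithic:.1%}")
print(f"80 mm^2 chiplet die yield:     {chiplet:.1%}")
```

Under these assumed numbers, only a few percent of large monolithic dies come out defect-free, while the small chiplet yields roughly two-thirds – harvesting eight good small dies and pairing them with an I/O chip is far cheaper than betting on one flawless giant die.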

The power is with Rome

Power is the key to AMD's value proposition here. Intel doesn't need more performance nearly as much as it needs to rein in its power consumption, which is difficult without a new manufacturing process. Unlike AMD's top-end processors, which are rated at 225 watts of thermal design power and can be configured up to 240 watts for peak performance, Intel's most powerful models are rated for 400 watts of maximum sustained thermal power. Thermal power at that level calls for direct liquid cooling, a change that creates friction in adoption, particularly for large-scale installations – a situation that resembles that of the early 2000s.

This is because Intel still manufactures Cascade Lake server chips on its five-year-old (albeit optimized) 14-nanometer technology and has planned only incremental architectural tweaks – its upcoming 10-nanometer generation, with a more substantially improved architecture, should have gone into production by 2018. As a result, the only viable way for Intel to deliver major performance increases across the board and defend against a resurgent AMD has been to increase the core count by packaging two pieces of silicon together and allowing for more power than the 205 watts available to single-chip parts.

Intel's protracted struggle with its 10-nanometer rollout also means it has fallen behind the efficiency-improvement cadence of Moore's law. AMD's Rome systems bring a much-needed jump in energy efficiency, as evidenced by the Standard Performance Evaluation Corporation's (SPEC) power benchmark, putting the industry back on the historical pace of improvement and pressuring Intel to deliver the same.

Figure 1 – Composite power efficiency over time, 2-socket servers, SPECpower_ssj2008 (best results only). Source: The Standard Performance Evaluation Corporation, compiled by 451 Research

Disclosed performance data suggests that AMD built Rome to appeal to web technology and public cloud operators rather than the enterprise. Running enterprise business logic and database management systems (SAP Sales & Distribution and SPEC benchmarks), Intel still appears to retain a comfortable (20-40%) lead in per-core performance, which helps with tuning latency-sensitive applications and with enterprise software licensing costs in many scenarios, while energy consumption is less of a concern (and often pales in comparison with software and services costs). Also, enterprise servers tend to run at low utilization (under 50%, and typically much lower), and only a few business-critical applications take advantage of the peak performance levels of current servers.

It is the cloud (and virtualized hosting services) where the energy efficiency and performance density Rome delivers matter a great deal. Importantly, SPEC data suggests Rome will be able to fulfil its potential only at high levels of utilization, which is what cloud operators aim to achieve across swathes of their infrastructure through aggressive workload consolidation and scheduling of jobs based on spot pricing.

AMD Rome systems shine at high utilization but are not necessarily more energy efficient than Intel systems across the load curve, as shown in Figure 2 below. SPEC's power_ssj2008 benchmark data (which simulates a Java-based application that stresses various parts of the system to measure performance against power consumption) indicates that servers built around AMD's newest processors pull away from current Xeons in energy efficiency only above 50% load; below that point, Xeon systems deliver the same performance for similar or less energy.

Figure 2 – Performance versus power consumption profiles, 2-socket servers, SPECpower_ssj2008. Source: The Standard Performance Evaluation Corporation, compiled by 451 Research

This tipping point might move closer to 60% with 400-watt Xeons, for which there is no power efficiency data yet, but even if they improve Intel's competitive position, they will require liquid cooling to reach their full performance potential. It must be noted that relative performance between markedly different architectures varies across workloads and can easily swing by 20-30% or more, but Rome's underlying character remains pronounced: the busier, the better.
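The crossover described above can be sketched with a few made-up load points in the style of SPECpower's target-load measurements. All performance and power figures here are illustrative assumptions, not published SPECpower results:

```python
# Hypothetical load points (illustrative, NOT actual SPECpower data)
# showing how performance-per-watt profiles can cross over: system "B"
# leads at low load, while "A" pulls ahead above roughly 50% utilization.
loads   = [0.2, 0.4, 0.6, 0.8, 1.0]                # target utilization
perf_a  = [0.2e6, 0.4e6, 0.6e6, 0.8e6, 1.0e6]      # ssj_ops, Rome-like "A"
power_a = [110, 155, 180, 200, 225]                # watts at each load
perf_b  = [0.15e6, 0.3e6, 0.45e6, 0.6e6, 0.75e6]   # ssj_ops, Xeon-like "B"
power_b = [80, 115, 160, 215, 275]                 # watts at each load

for load, pa, wa, pb, wb in zip(loads, perf_a, power_a, perf_b, power_b):
    eff_a, eff_b = pa / wa, pb / wb                # ssj_ops per watt
    leader = "A" if eff_a > eff_b else "B"
    print(f"{load:.0%} load: A={eff_a:,.0f} ops/W  B={eff_b:,.0f} ops/W -> {leader}")
```

With these assumed figures the lead flips between the 40% and 60% load points, mirroring the roughly 50% tipping point the published data suggests.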

All this indicates that over the next 9-15 months, AMD Rome will match or lead Intel in energy efficiency at high loads, and do so without requiring a change in cooling – a highly attractive proposition to IT infrastructure service providers and webtech operators alike that can consolidate workloads and schedule jobs for optimal utilization of their assets. This profile also makes Rome interesting for supercomputing, where peak performance per unit of available power is one of the key metrics by which customers choose an architecture.

Intel, at the same time, will be able to closely match or outperform AMD's offerings in raw performance, limiting the damage to its image as a technology leader. We also expect Intel to start building up expectations around its 10-nanometer Xeons very soon, to persuade customers to hold back some of their investments and slow AMD's advances in the datacenter. Intel will also be cushioned from an existential threat by AMD's supply constraints, which cap its near- to midterm potential, even though market share losses seem inevitable after 10 years of uncontested hegemony. Server processors have become a fairly high-volume business that consumes about as many cutting-edge silicon wafers as a hundred million high-end smartphone processors would – and it is growing.
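The wafer comparison can be sanity-checked with rough arithmetic on a 300mm wafer; the die areas below are illustrative assumptions, not vendor figures, and edge loss is ignored:

```python
import math

WAFER_DIAMETER_MM = 300
wafer_area_mm2 = math.pi * (WAFER_DIAMETER_MM / 2) ** 2  # ~70,686 mm^2

# Illustrative die areas (assumptions, not vendor figures):
PHONE_SOC_MM2 = 100    # a hypothetical high-end smartphone SoC
SERVER_CPU_MM2 = 700   # a hypothetical big server processor's total silicon

def dies_per_wafer(die_area_mm2: float) -> int:
    """Naive upper bound: wafer area divided by die area, ignoring edge loss."""
    return int(wafer_area_mm2 // die_area_mm2)

phone_dies = dies_per_wafer(PHONE_SOC_MM2)
server_dies = dies_per_wafer(SERVER_CPU_MM2)
print(f"Phone SoCs per wafer:   {phone_dies}")
print(f"Server CPUs per wafer:  {server_dies}")
print(f"Wafers for 100M phones: {100e6 / phone_dies:,.0f}")
```

Even this crude bound shows a single server processor consuming the silicon of several phone SoCs, which is why a comparatively modest unit volume of server chips can rival the wafer demand of the smartphone market.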