Editor’s note: This feature first ran in our September Xpresso newsletter release this fall. To gain early access to our best feature content, subscribe to Xpresso now. It’s free!
Additionally, since this article’s publication in September, two notable chip industry events have taken place. Apple eventually released its M1 Pro and M1 Max SoCs for macOS computers, providing a clearer picture of what is possible for ARM-architecture chips in personal computers. At the same time, Intel released its 12th-generation Intel Core processor lineup, delivering substantial generation-over-generation improvements. The feature below is presented as it ran in September.
Intro and Geopolitics
RISING TENSIONS BETWEEN THE WEST (specifically the United States) and China are ushering in large-scale change in the semiconductor industry. With nearly all leading-edge-node semiconductor manufacturing located in Taiwan or South Korea—and not in the US with Intel in its traditional leadership position—the US government has stepped in to assist in a new era of industrial nationalism. In early June of this year, the US Senate passed legislation (the US Innovation and Competition Act, or USICA) that includes USD 52 billion in federal funding to accelerate domestic semiconductor research, design, and manufacturing in what is known as the CHIPS for America Act.
Considered an issue of national security, semiconductors power nearly every type of digital device and certainly every computer system running an operating system, including military systems. The US has traditionally led the world in both chip design and manufacturing. While design leadership remains in US hands, its national manufacturing champion, Intel, has faltered (see the second half of this article, below).
On top of security concerns, the global semiconductor industry is behind on capacity and unable to meet demand. There is a ripe economic opportunity, and every major global economy—the US, the EU, China, and Japan—wants more of the burgeoning action.
The new US Chips Act could spur the development of up to 10 new chip manufacturing factories. Intel has promised two new ones in Arizona (see above). Similar EU plans call for self-sufficiency in the design and manufacturing of semiconductors within the EU. The US, which held a 37 percent share of global semiconductor and microelectronics production in 1990, today holds only a 12 percent share.
While China aspirationally seeks self-sufficiency in semiconductors, it lacks native companies that can develop and manufacture the equipment, non-wafer materials, and wafer materials used in the manufacture of semiconductors. The US and EU dominate the critical equipment market, with nearly zero equipment makers in Taiwan and a single-digit share in China. What China does have is a growing “fabless” chip design industry.
This is a moment of overlap, in which an old paradigm is slowly being replaced by a new one.
The global democratization of semiconductor design, development, and manufacturing is altering the possible futures for the computer software industry. This can have significant implications for engineering software in the decade ahead. The once-stable “Wintel”-based digital economy has largely decoupled. In December 2020, Microsoft announced that it was designing its own ARM-based chips for servers and Microsoft Surface devices. The server chips are for the company’s own Microsoft Azure cloud services data centers. This move largely imitates rival Amazon, which designed its own ARM-based chip (Graviton 2) to power its AWS data centers.
Microsoft Windows PCs are no longer the center of computing but persist in our midst, much like petrol cars amid the EV revolution. While they remain the primary equipment in the CAD industries, they are supplemented by a rapidly changing landscape of new, smaller devices. The Wintel hegemony’s promise of steady cyclical improvement first slowed and then rather bluntly collapsed in the past few years with Intel’s manufacturing hiccups (see more on that below).
Moore’s Law: Then and Now
Since Intel’s founding and the emergence of the x86 CPU architecture, Moore’s Law has largely held its promise. Specifically, Moore’s Law—named after Gordon Moore, an Intel co-founder—says that the number of transistors on microchips doubles roughly every two years. That works out to a compound annual growth rate of about 41 percent.
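That figure follows directly from the doubling period: growing by a factor of two over two years means the annual growth factor is the square root of two. A minimal Python sketch of the arithmetic, for illustration only:

    # Doubling transistor counts every two years implies an annual growth
    # factor of 2 ** (1 / 2), i.e. a compound annual growth rate of ~41 percent.
    annual_growth_rate = 2 ** (1 / 2) - 1
    print(f"Implied CAGR: {annual_growth_rate:.1%}")  # Implied CAGR: 41.4%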
In the 1970s and 1980s, Intel’s chief microprocessor competitors, such as Motorola and IBM, kept the semiconductor industry red hot with advancements. AMD added competitive pressure in the 1990s and into this century, and the competition for servers in particular led Intel to push for larger and more powerful server CPUs.
From 1994 to about 2007, the chart shows a massive gap between powerful server chips like IBM’s Power6, Intel’s Itanium 2, and AMD’s K10—all at roughly 500 million transistors or more—and the ARM Cortex-A9, with fewer than 50 million transistors.
Yet something changes from 2008 onward, as ARM advances at a steeper rate than everyone else in the chip industry (see the green line in the chart below).
Suddenly, ARM licensee Apple came out with the A7, a landmark 64-bit SoC with over 1 billion transistors. Apple’s iPhone chip now had more transistors than IBM’s Power6 from 2007, the year the iPhone was introduced. In just six years, a chip in a phone surpassed the transistor count of one of the world’s most powerful server chips from IBM.
ARM’s ascendance is just one factor in the global semiconductor tidal shift. Intel’s missteps in keeping pace with Moore’s Law are another—and we’ll get to that issue in a moment. No other company signifies the democratization forces in the global semiconductor industry as much as ARM, which licenses its chip designs to anyone who wants to work in the ARM ecosystem.