When Bloomberg reported that Apple is planning to ditch Intel and design its own chips for Macs as early as 2020, many industry pundits and users were left wondering whether this was a good thing or not.
Architosh’s first story on the news questioned whether pro app markets, like industrial CAD systems in AEC and manufacturing, will cope well with such a transition.
MORE: Apple Leaving Intel on Macs by 2020—Impact on New Mac Pro
Of course, the question of using the ARM-architecture (which Apple designs its A-series CPUs around for iOS devices) for future Macs has been raised before. Many times, in fact. (see: Architosh, “More Apple Mac ARM Rumors—This Time MacBook Air,” 11 May 2011; and Architosh, “Editorial: ARM-based Macs are a bad idea for Apple’s growing Mac professional base,” 15 Jan 2015)
Building A Secure Enclave
“It was only a matter of time before Apple made the move to put their own processors on their desktops,” says Paul Norris, senior systems engineer for EMEA at cybersecurity firm Tripwire. Norris argues that not only has Apple been successfully designing central processing units (CPUs) for their iOS devices for years, but by doing so they also offer “a secure enclave between operating system and hardware.”
This move is likely about hardening security on Apple platforms and across their ecosystems. With spectacular security failures highlighting the ultimate vulnerabilities of end users and their private data—from the famed Sony hack to the stealing of Hillary Rodham Clinton’s emails to the story of Cambridge Analytica—there is no better time for Apple to take a stance on hardened security.
Intel Has Disadvantages
When it comes to security, Intel faces built-in disadvantages relative to Apple in a CPU arms race. “Apple is taking advantage of the relationship between operating system and hardware of their OS X based devices to guarantee strengthened security,” says Norris. “Intel has to cater to a number of different types of operating systems and hardware devices, which increases the likelihood a vulnerability is identified.”
Norris makes the point in a private communication to Architosh that Apple has tremendous depth, expertise, and years of experience developing CPUs. “As Apple is not new to developing microprocessors, consumers should welcome this news.”
Security Ain’t Everything, Or Is It?
For the longest time, Moore’s Law taught us that performance (processing speed) was the chief metric for evaluating chips. The faster the chip, the faster you can get your work done. Moore’s Law assured us that manufacturing process sizes would continue to shrink and that transistor counts would therefore grow dramatically, increasing performance over time. In recent years, however, Moore’s Law has hit a wall as ever-smaller process sizes stretch the limits of manufacturing.
And the meaning of performance is changing in the face of other advances in computer technology. The combination of the cloud and mobile devices is pushing the market to think about splitting work between cloud compute and edge compute. Some tasks (usually those involving large data sets) are better computed in the cloud, while others, like AI inference, require immediate results and are better computed on your mobile device out at the edge. Face recognition on the iPhone X is one example of the latter.
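As a rough sketch only (the names, types, and thresholds below are hypothetical, not any real Apple or cloud-vendor API), the edge-versus-cloud split boils down to a simple routing question: does the task need an immediate answer on the device, or does it need the data and horsepower of the data center?

// Hypothetical sketch of routing work between edge and cloud.
// Names, types, and thresholds are illustrative only.
enum ComputeTarget {
    case edge   // run on the device: low latency, limited resources
    case cloud  // run in the data center: big data sets, higher latency
}

struct Workload {
    let name: String
    let dataSizeMB: Double       // how much data the task touches
    let latencyBudgetMs: Double  // how quickly a result is needed
}

// Toy rule: tight latency budgets stay on the edge; large data sets go to the cloud.
func route(_ work: Workload) -> ComputeTarget {
    if work.latencyBudgetMs < 100 { return .edge }
    return work.dataSizeMB > 1_000 ? .cloud : .edge
}

let faceUnlock = Workload(name: "face recognition", dataSizeMB: 5, latencyBudgetMs: 50)
let analytics  = Workload(name: "nightly analytics batch", dataSizeMB: 50_000, latencyBudgetMs: 60_000)

print(route(faceUnlock)) // edge
print(route(analytics))  // cloud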
Devices as Gateways to Data
Today’s devices are gateways to large data sets living in the cloud, which makes it imperative that security be strengthened. Apple’s Mac computers have a possible future as both edge-compute and cloud-compute devices. ARM chips are already well suited to the data center, and future ARM-based Macs may find their way back into the data center, including Apple’s own.
This news may signal that Apple intends its new Mac Pro to be “modular” not just to please 3D visualization and film professionals with a highly customizable machine, but also to let it fit into racks in data centers. The new Mac Pro may be modular in both its outward shape and its ultimate reconfigurability.
Customizing a Chip Around macOS
Paul Norris doesn’t know if Apple’s new processors for Macs will be backward compatible with the Intel x86 architecture, saying it’s too early to speculate. This is the issue Architosh readers are most concerned with—pro application stack compatibility. People live and die by applications, not just by the security around their data.
Still, Norris doesn’t think folks should worry so much. “Apple has a good track record of ensuring that when they introduce a new platform, product or process, they invest a lot of money in research and development to ensure there is a seamless integration.”
As for performance, future Apple CPU Macs may be faster. “There is every chance that Apple stands a chance at taking on the major players in the CPU market,” says Norris. “Intel and AMD have a huge market share, but there is always room to welcome new competitors to this space,” he adds, “especially a competitor who has a large user base and a good track record for being secure. Maybe it’s time for a change in this market?”
Reader Comments
I’ve been discussing this idea since it first came up, years ago. My idea is that while Apple’s A series isn’t x86 compatible, and while that would cause problems, there might be a way around it.
More recent research has shown that the slowdowns related to one chip family emulating another are mostly (about 80%) due to a fairly small number of instructions. Those instructions require software emulation and are common in a large part of typical usage. But it’s only about a dozen to two dozen instructions.
My thinking is that, from what I know (and I could be wrong here), individual chip instructions are not patented or copyrighted in hardware, just the pattern of how a chip is organized and so on. If so, then Apple could take the dozen or so instructions that cause most of this slowdown and incorporate them into their chip, alongside the ARM instruction set they already implement.
If they do, then however it’s implemented, the system could point x86 software to those hardware instructions rather than emulating them in software, the way it’s been done until now. If Apple could do this, then an A series chip, or whatever Apple chooses to name the new series, might run native x86 software at the same speed as an equivalent x86 part. The MacBook, for example, could run an Apple SoC instead of an Intel M series at the same level of processing power.
Over time, as Apple’s chips continue to increase in performance, they could exceed their x86 counterparts in the different categories those chips occupy.
As far as higher-performance chips go, Apple could always decide to increase the power envelope to better compete with those higher-level parts. Apple does have the advantage, so far, of not carrying hardware 16- and 32-bit instructions on board. Whether that would prove an impediment to running older x86 software, or whether Apple could work that out with their more modern chips, I don’t know.
But I think this is an interesting take on the situation, and if it’s possible, and Apple is looking at it, it could solve the compatibility problems people are concerned about.
Mel,
That is fascinating. I’ve not heard that before but it is very intriguing to imagine ARM supporting the x86 instruction set. I will look into this further. Thanks for the detailed share. Much appreciated!
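As a purely hypothetical sketch of how that could work (the opcodes, handlers, and names below are illustrative only, not anything Apple has announced), an emulator might route the short list of “hot” x86 instructions to hardware-assisted handlers while software-translating everything else:

typealias X86Opcode = UInt8

// Hypothetical emulator dispatch: most x86 instructions go through ordinary
// software translation, while the dozen-or-so "hot" opcodes Mel mentions are
// routed to imagined hardware-assisted handlers on the SoC.
// All opcodes and names here are illustrative only.
struct EmulatorSketch {
    let hardwareAssisted: Set<X86Opcode> = [0x0F, 0x99, 0xD3, 0xF7]

    func execute(_ opcode: X86Opcode) {
        if hardwareAssisted.contains(opcode) {
            executeViaHardwareAssist(opcode)      // near-native speed in this scenario
        } else {
            executeViaSoftwareTranslation(opcode) // the usual emulation path
        }
    }

    private func executeViaHardwareAssist(_ opcode: X86Opcode) {
        // In this scenario the call would invoke an extra instruction baked
        // into the ARM-based chip; here it is just a placeholder.
        print("hardware-assisted path for opcode \(opcode)")
    }

    private func executeViaSoftwareTranslation(_ opcode: X86Opcode) {
        print("software-translated path for opcode \(opcode)")
    }
}

let emulator = EmulatorSketch()
let sampleOpcodes: [X86Opcode] = [0x0F, 0x90, 0xF7]
sampleOpcodes.forEach { emulator.execute($0) }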
Since it would only need to support a very small part of it, it shouldn’t be too difficult to manage.