‘End of an Era: How Silicon Will Decide BIM’s Future’ was first published on 18 December 2025 inside our Xpresso-4X newsletter. To gain early access to some of our best content, subscribe to Xpresso-4X now. It’s free!
FOR YEARS, THE AEC INDUSTRY HAS FRAMED THE FUTURE OF BIM as a software problem. Faster tools. Smarter automation. Better collaboration. But beneath every roadmap, every keynote, and every feature release lies a deeper force shaping what is—and isn’t—possible.
That force is silicon.
The next phase of BIM—and AEC software technologies as a whole—will not be decided by interface changes or subscription and software delivery models alone. It will be decided by the physics of modern semiconductors and the architectural assumptions embedded in the tools that architects and designers rely on every day.
The Assumptions CAD and BIM Were Built On
In our last major special feature on the CAD industry and semiconductors (see: Architosh, “Chip Technology, Geopolitics, and the CAD Industry,” 21 Jan 2022), we laid out some of the emerging changes shaping the semiconductor space, especially the rise of ARM processors. This time, we go deeper into the facts, trends, and stories shaping that change and its impacts on the CAD and BIM industry.
For more than two decades, professional design software has evolved within a remarkably stable computing environment. Revit ran on Windows. Windows ran on Intel x86 processors. And each new generation of CPUs delivered higher clock speeds and better single-thread performance.
Those assumptions shaped everything from geometry kernels and solvers to regeneration logic and viewport behavior. Performance gains arrived reliably, year after year, with little need to rethink fundamental software architecture.1 2 3
That era has ended.
The inflection point arrived quietly between 2015 and 2019, when Intel’s long-promised 10nm manufacturing process failed to arrive on schedule. What appeared at first to be a temporary execution problem was, in fact, the first visible sign of a structural shift in how computing performance would scale going forward.4
When x86 Met Physics (High-Frequency Era)
Intel’s 10nm struggle wasn’t just about delays. It was a collision between decades-old architectural assumptions and the physical limits of advanced semiconductor manufacturing.5
To understand what happened at Intel—and why BIM is now entering a new computing era—we need to briefly visit the transistor level. (see image below or click here for a fun basic physical model explanation, or here for a complete visual history of Intel’s transistors, or here for a far more detailed and illustrated history of the transistor over time. All of those links are videos.)

SIDEBAR — How a Transistor Works. A transistor consists of a channel for electrical current, a source and drain on either end, and a gate that controls whether current flows through the channel (under the gate). A voltage at the gate creates an electric field that opens or closes the channel (shown in grey above), switching between digital 0s and 1s. In modern semiconductors, billions of these switches (transistors) toggle on and off billions of times per second. To understand this more completely, watch any of the three video references listed above this graphic.
For 40 years, Intel mastered the art of shrinking these switches. Moore’s Law wasn’t just a prediction — it was Intel’s operational rhythm. Each new node promised more transistors, higher frequency, and therefore higher performance.4 6 7
x86 especially thrived in this model. Its architectural assumptions were tied to:
- very deep pipelines (15-20+ stages)
- high clock frequencies (4.0–6.0 GHz turbo boosts)
- complex variable-length instruction decoding
- massive out-of-order execution windows
- significant speculative-execution chip design
In the High-Frequency Era, this design delivered extraordinary single-thread performance—the exact metric for which tools like Revit were optimized.2 3 In fact, most CAD and BIM users likely don’t realize that CAD tools and 3D geometry engines, because of their inherently sequential dependency chains, are poor candidates for multi-threaded coding and therefore for multi-core acceleration. Instead, their speed depends mostly on very fast single-core performance, and for two decades users got faster workstations simply by buying x86 CPUs with higher clock frequencies (measured in GHz).2 3 4
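To make that concrete, here is a minimal, hypothetical sketch in Python (not Revit’s or any vendor’s actual code) of the difference: a parametric regeneration chain, where each feature depends on the result of the previous one, must run in order on a single core, while independent jobs such as view generation parallelize easily across cores.

```python
# Minimal sketch (hypothetical, not any vendor's code) of why parametric
# regeneration is hard to parallelize: each feature's geometry depends on
# the result of the one before it, so the chain must run in order.
from concurrent.futures import ProcessPoolExecutor

def rebuild_feature(upstream_result: float, param: float) -> float:
    # Stand-in for an expensive geometric operation (e.g., a boolean or fillet).
    return upstream_result * 1.01 + param

def regenerate_model(params: list[float]) -> float:
    result = 0.0
    for p in params:                 # inherently sequential dependency chain
        result = rebuild_feature(result, p)
    return result                    # single-core speed decides wall-clock time

def render_view(view_id: int) -> str:
    # Views and exports are independent of one another, so they parallelize well.
    return f"view-{view_id} rendered"

if __name__ == "__main__":
    print(regenerate_model([1.0, 2.0, 3.0]))          # serial: bound by one core
    with ProcessPoolExecutor() as pool:               # parallel: scales with cores
        print(list(pool.map(render_view, range(4))))
```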
However, by the mid-2010s, several forces converged at advanced manufacturing nodes. At smaller geometries, transistors face hard constraints: reduced voltage headroom, increased leakage, higher wire resistance, and tighter timing margins. Intel’s x86 designs—optimized for deep pipelines and very high clock speeds—depend on stable voltage and precise timing across complex execution paths.5 8 9 10 13

SIDEBAR — How a FinFET Transistor Works: By the time Intel ran into problems at 10nm, the semiconductor industry had long since moved to FinFET transistors. Instead of lying flat, the channel was rotated 90 degrees, allowing the gate to wrap around the channel on three sides and providing greater control over the source-to-drain current. These vertical channels offered another benefit: more of them could be packed into the shrinking real estate of microprocessors. Importantly, however, the metal interconnects for power and signal are now getting impossibly close to each other, creating numerous electrical issues.
At 10nm, those requirements became increasingly difficult to satisfy. Metal interconnects grew so small that resistance rose sharply. Multi-patterned lithography introduced variability and yield problems. Timing closure became a challenge not just for experimental designs, but for mainstream high-frequency CPUs.9
For the first time in Intel’s history, a new node produced chips that ran at dramatically lower frequencies, with poorer performance than the node’s original goals.
The company’s decision to fabricate these impossibly fine features using complex multi-patterned DUV lithography (rather than the emerging EUV) led to staggering defect rates and yield issues. It wasn’t that Intel forgot how to manufacture chips — it was that x86’s architectural demands ran headlong into the fundamental physical limits of semiconductor manufacturing.11 12
This was the moment the industry should have realized: the performance ladder Revit and other professional software had been climbing for two decades was breaking down. And the implications went far beyond Intel’s roadmap and the CAD and BIM software industry.
Those struggles signaled the end of the High-Frequency Era that professional software had grown accustomed to.
The Voltage-Limited Era Arrives
As transistor scaling continued, the industry crossed an invisible threshold. Voltage—rather than frequency—became the dominant limiting factor. Power density, heat dissipation, and energy efficiency emerged as first-order constraints.14 15 16
While Intel struggled, a different architecture—ARM—was quietly scaling in a direction better aligned with modern transistor physics.

A picture of the Amazon Graviton4 CPU. (Image: Amazon). Amazon has aggressively deployed its ARM-based Graviton series processors over x86 due to their lower total cost of ownership. From the beginning, they have offered substantially superior performance per watt.
ARM was designed from the beginning for low-voltage operation, fixed-length instructions, and high IPC (instructions per cycle). It never needed 5 GHz turbo modes or 20-stage pipelines. Its efficiency model was a better match for the emerging transistor world, one where performance per watt would displace pure performance as the dominant metric.
ARM thrives on:
- shallow pipelines
- simpler decode paths
- wide, power-efficient execution
- low-voltage operation
- excellent thermal behavior
- massive parallelism
As nodes shrink toward—and below—2nm, voltage becomes the hard limit. Frequency is no longer the performance driver. Performance per watt is the new dominant metric.15 16
What once made ARM ideal for mobile devices now makes it well-suited to modern semiconductor nodes. The Voltage-Limited Era.
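A toy calculation makes the point. The numbers below are hypothetical, not benchmarks of any real processor; they simply show why, under a fixed power or thermal budget, the more efficient design delivers more total work even if its peak per-chip performance is lower.

```python
# Illustrative only: hypothetical numbers, not measurements of any real CPU.
# Under a fixed power budget, performance per watt (not peak GHz) sets how
# much total work a rack, a laptop, or a thermally limited chip can do.
RACK_POWER_BUDGET_W = 10_000          # fixed facility/thermal budget (assumed)

chips = {
    "high-frequency design":  {"score_per_chip": 100, "watts_per_chip": 250},
    "efficiency-first design": {"score_per_chip": 80,  "watts_per_chip": 120},
}

for name, c in chips.items():
    perf_per_watt = c["score_per_chip"] / c["watts_per_chip"]
    chips_in_budget = RACK_POWER_BUDGET_W // c["watts_per_chip"]
    total_throughput = chips_in_budget * c["score_per_chip"]
    print(f"{name}: {perf_per_watt:.2f} perf/W, "
          f"{total_throughput} total score within the same power budget")
```

With these assumed figures, the efficiency-first design scores lower per chip yet delivers roughly 65 percent more total throughput inside the same power envelope.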
Apple Silicon Changed the Conversation
When Apple introduced the M1 in 2020, it did more than launch a new processor family. It demonstrated that ARM-based CPUs could outperform x86 designs in both performance and efficiency within mainstream professional workloads.20
Apple’s success was often attributed to vertical integration or unified memory. Those factors mattered—but the deeper reason was architectural alignment with modern silicon physics.

Apple’s M1 “Apple Silicon” changed the entire trajectory of the PC industry when it was introduced in the fall of 2020, demonstrating breathtaking performance-per-watt advantages over both Intel and AMD. Analysts said at the time Apple deployed unique SoC advantages like its unified memory architecture, and thus downplayed the benefits of the ARM architecture itself. Qualcomm would later introduce equally stunning new Snapdragon X Elite chips without many of the same advantages Apple Silicon had. (see below).
Apple’s cores achieved high single-thread performance at relatively modest clock speeds, proving that the performance model long associated with x86 was no longer the only path forward.
In a world where voltage limits mattered, Apple’s architectural strategy was better aligned with the physics of semiconductor manufacturing.18 19 20
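A rough way to see that claim: single-thread performance is approximately IPC multiplied by clock frequency, so a wide, efficient core at a modest clock can match a narrow core pushed to a very high clock. The figures below are illustrative assumptions only, not measured IPC values for Apple or Intel parts.

```python
# Rough illustration with hypothetical numbers: single-thread performance
# is roughly IPC x clock frequency, so a wide core at a modest clock can
# match a narrow core running at a much higher frequency.
def single_thread_perf(ipc: float, ghz: float) -> float:
    return ipc * ghz   # relative units, not a real benchmark score

wide_low_clock    = single_thread_perf(ipc=8.0, ghz=3.2)   # assumed wide core
narrow_high_clock = single_thread_perf(ipc=4.5, ghz=5.7)   # assumed narrow core
print(wide_low_clock, narrow_high_clock)   # ~25.6 vs ~25.7: comparable results
```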
Qualcomm Proved It Wasn’t Just Apple
If Apple Silicon represented a controlled experiment, Qualcomm’s Snapdragon X Elite provided a broader validation.19
Unlike Apple’s tightly integrated SoCs, Snapdragon X Elite operates within a conventional PC framework: standard memory, discrete GPU support, and the Windows driver model. Yet it competes directly with Intel and AMD mobile processors in performance while delivering superior power efficiency.
Without Apple’s vertical integration or unified memory advantage, Qualcomm’s ARM-based Snapdragon X Elite still matched or beat Intel and AMD on:
- IPC
- sustained performance
- power efficiency
- bursty (CAD/BIM) productivity workloads
That matters for BIM and CAD. ARM is no longer confined to mobile devices or proprietary ecosystems. It is now a viable—and increasingly competitive—platform for professional computing.21 22
The Hyperscalers Follow the Physics
Nowhere is the shift more visible than in cloud infrastructure.23 24 25 26
AWS, Google, and Microsoft—the companies that define modern computing at scale—have all embraced ARM for general-purpose workloads. Custom processors such as AWS Graviton, Google Axion, and Azure Cobalt are deployed because they deliver more performance within fixed power and thermal budgets.
At hyperscale, energy efficiency is not a nice-to-have. It is an economic necessity. AI workloads only intensify that pressure—in some cases by a factor of 17.
When the hyperscalers move, the rest of the industry tends to follow.25 26
Intel’s Countermove: 18A and Backside Power Delivery
Intel is not standing still. Its 18A process introduces two major innovations: gate-all-around transistors and backside power delivery. Together, they address many of the power integrity and routing challenges that emerged at advanced nodes.27
Backside power delivery, in particular, represents a fundamental shift in chip design, separating power and signal routing to improve timing and voltage stability.28

In RibbonFET, the “fins” of FinFET process technology are laid on their sides and then spaced vertically. They thus look like “ribbons” in the image above (Intel). The transistor “gate” is the silver block that the ribbons (silicon channels) pass through. Voltage applied to the gate either allows or prevents current from passing through the channel, resulting in a “0” or “1” at the transistor.
These are very meaningful advances. They will help Intel remain competitive. And both Intel and AMD long ago altered their chip architectures to capitalize on the efficiencies of RISC (Reduced Instruction Set Computing) chip design. Today’s modern x86 processors are actually a hybrid of CISC (Complex Instruction Set Computing) and an ARM-like RISC design, with the remaining but unavoidable legacy baggage of x86’s variable-length instructions and CISC-to-RISC conversion layers.29
Despite all of this engineering, including backside power delivery (BSPDN) and gate-all-around (GAA) transistors, the underlying physics that limit x86’s deep pipelines and its demand for high frequencies do not go away. The architectural characteristics that favor ARM—low-voltage operation, efficient execution, and heterogeneous integration—remain better aligned with the long-term direction of semiconductor physics.

PowerVia is Intel’s tradename for backside power delivery. (Image: Intel). This is a 3D section through a chip, which is made up of many layers. 18A innovates by separating power and signal wiring layers, thereby improving voltage stability and resolving numerous power- and current-related issues.
Intel may jump ahead at the “leading edge” chip manufacturing node, and it will no doubt remain competitive over the next few years. It may even lead again in specific segments. But the long arc of semiconductor physics now bends away from x86.27 28 30
And Intel knows this.
AMD and Intel are rumored to have secret ARM chip designs in the works—plans they will not make public until they see no other way to remain competitive with ARM-based chips.31
Intel even has a fast-growing partnership with SoftBank, with the Japanese firm owning 2% of Intel. Why would ARM’s majority owner partner with its leading x86 chip rival?32
The public answer is to support Intel’s foundry business and compete with TSMC at the leading edge. It is already rumored that Apple will become Intel Foundry’s first large-scale customer, with Intel manufacturing Apple’s ARM-based M-series chips for its Mac computers and, later, A-series chips for the iPhone.33
Both Apple and Nvidia may well become Intel Foundry customers for part of their chip supply, bolstering US-based leading-edge-node manufacturing capacity.
Making ARM chips for others, like Apple, will bring material benefits if and when Intel decides to design and manufacture its own ARM chips.
What This Means for BIM and CAD
An industry shift from x86 to ARM is already underway, thanks to Microsoft’s robust push in that direction.34 But the process is a decade-long affair. The implications for legacy software stacks are massive. And no category is more exposed to this challenge than CAD and BIM.
Revit, Rhino, SolidWorks, and Maya were built during the peak of x86’s High-Frequency Era. Their engines, geometry kernels, solvers, and memory patterns all assume:
- a single-thread performance ceiling that keeps rising
- desktop tower thermals
- CPU-centric computation
- a predictable increase in clock speed
However, all of those assumptions are collapsing, if they have not already collapsed.
Performance gains now come from parallelism, memory bandwidth, accelerators, and heterogeneous compute—not just from higher GHz. Software that depends heavily on single-thread CPU performance faces diminishing returns on legacy platforms.
At the same time, the market is already shifting:
- Core geometry kernels now support ARM natively.35
- BIM and CAD applications ship ARM-optimized versions for macOS and, soon, Windows on ARM.
- Designers increasingly work on ARM-based laptops, tablets, and cloud workstations.
- AI-driven workflows rely on GPUs and NPUs as much as CPUs.
The shift is no longer hypothetical. It is underway. To be sure, many of the biggest x86-based CAD and BIM apps are severely tied down to legacy code and dependencies. But competitors move quickly. And ever since the computing paradigm shifted from the desktop era to the mobile- and cloud-first era, the BIM industry in particular has faced the rise of well-funded BIM 2.0 startups attacking long-standing pain points.
The question is no longer whether BIM will move to ARM, but when exactly and who will be left behind when it does.
Heterogeneous Compute: GPU Geometry, AI Inference, Hybrid Evaluation
When it comes to the hybrid future of compute, ARM has led the industry: it was created specifically for power efficiency and for embedded systems, where heterogeneity has long been the norm.36
Additionally, since Dennard scaling ran out of runway—the scaling law that was crucial to the single-core performance of the x86 architecture—parallelization and multi-threading were seen as critical to future semiconductor performance gains.37 38 39
The future of BIM will rely much less on single-core CPU-centric execution and much more on heterogeneous compute, where CPUs, GPUs, NPUs, and dedicated accelerators each handle different parts of the workload.40 41
Future BIM systems will depend on:
- GPU-based generative AI modeling
- AI-assisted constraint solving
- AI-driven modeling assistance
- GPU and NPU shape inference
- hybrid CPU-GPU-AI simulation/model evaluation
- GPU or NPU-driven AI training on proprietary data
- mixed CPU/GPU/NPU pipelines
- massive memory bandwidth
- low-latency parallel workloads
These compute examples encompass much of what was shown to attendees at Autodesk University 2025 this past fall, with the introduction of Autodesk’s Neural CAD engines.41
Even the leading geometry engines are investigating GPU acceleration, though less for the core geometric modeling kernel and more for simulation (CFD on GPU), visualization, and AI. But the big game changers are generative AI and inference modeling workflows for the BIM (AEC) market.
MORE: AU25: All About Autodesk’s AI Neural CAD Engines
In the architectural industry, new AI software technologies will leverage the features of heterogeneous chip architectures—especially those with larger pools of fast on-chip memory (like Apple’s unified memory, or AMD’s recent processors with enough onboard memory to load smaller LLMs for proprietary firm data).
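As a back-of-the-envelope illustration (rough estimates, not vendor specifications), the weight footprint of a quantized “small” LLM shows why such a model can live entirely inside a large unified-memory pool alongside the BIM model itself.

```python
# Back-of-the-envelope sketch: why "smaller" LLMs can live in on-chip /
# unified memory on modern heterogeneous SoCs. Figures are rough estimates,
# not vendor specifications or measured footprints.
def model_footprint_gb(params_billion: float, bytes_per_param: float) -> float:
    return params_billion * 1e9 * bytes_per_param / 1e9

for params, precision, bytes_pp in [(7, "4-bit", 0.5), (7, "fp16", 2.0), (70, "4-bit", 0.5)]:
    gb = model_footprint_gb(params, bytes_pp)
    print(f"{params}B model at {precision}: ~{gb:.1f} GB of weights")

# A 7B model quantized to 4 bits (~3.5 GB) fits easily in a 64-128 GB
# unified-memory pool next to the BIM model; a 70B model (~35 GB) is where
# memory capacity and bandwidth start to dominate the system design.
```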

Image of Autodesk’s geometry-oriented AI foundation model, or Neural CAD Engine, where AI inference can generatively shape 3D model data. (Architosh)
While traditional geometric kernels (think Spatial or Parasolid) struggle to parallelize modeling operations, AI “model inference” can generate, test, predict, and evaluate options in parallel, working with both open data and proprietary firm data stored “on-chip” or in the cloud. At the same time, heterogeneous chips can do “on-device” AI training on large sets of firm data (previous building designs and their metadata).
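The following is a minimal sketch of that inference pattern, using PyTorch device placement as a stand-in. The tiny generator model, the device selection, and the “KPI” scoring are all placeholder assumptions for illustration, not Autodesk’s Neural CAD engine or any shipping BIM API.

```python
# Hypothetical sketch of batched "design option" inference on whatever
# accelerator is present (GPU or Apple unified-memory GPU), with a CPU
# fallback. Placeholder model and scoring only, not a vendor's API.
import torch
from torch import nn

def pick_device() -> torch.device:
    if torch.cuda.is_available():
        return torch.device("cuda")        # discrete or datacenter GPU
    if torch.backends.mps.is_available():
        return torch.device("mps")         # Apple Silicon unified-memory GPU
    return torch.device("cpu")

device = pick_device()

# Placeholder generator: maps a site/program descriptor to massing parameters.
generator = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 8)).to(device)

site_descriptors = torch.randn(1024, 16, device=device)   # 1,024 candidate briefs

with torch.no_grad():                       # inference only, evaluated in parallel
    options = generator(site_descriptors)   # one batched call, not 1,024 serial ones
    scores = options.square().mean(dim=1)   # placeholder "KPI" score per option
    best = torch.topk(scores, k=5).indices  # keep the top-ranked candidates

print(f"device: {device}, best option indices: {best.tolist()}")
```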
All of this shifts future BIM workflows from being almost entirely CPU-bound (aside from rendering and viewport generation) to a heterogeneous mix of GPU and NPU AI compute streams, addressing matters like:
- spatial “test-fit” model generation
- “KPI-driven” iteration
- clash detection, object clearance evaluation checks
- building, energy code compliance checks/optimization
- building energy and carbon analysis checks/optimization
- building simulations/optimization
This will change not just standard architectural workflows, but the physics of how BIM performance scales at the silicon level. Massive building and infrastructure projects may not fit in “on-chip” memory and may require data movement between on-chip memory and system storage. At AU25, the folks at a leading workstation maker emphasized this point in discussing AMD’s latest AI chip, which is fundamentally far more heterogeneous than past designs.

Image of Autodesk’s AI foundation model, or Neural CAD Engine, powering a version of Autodesk Forma where AI inference can generatively shape and test-fit 3D building model data. (Architosh). Additionally, AI software could “train” on existing BIM model data of a more proprietary nature, in which case firms protecting sensitive IP may prefer that data to sit in on-chip memory and be handled by on-device or on-prem AI compute rather than by public cloud compute.
The important fact about the rise of heterogeneous compute is this: in the future of BIM, the CPU is no longer the only star. It is part of an ensemble cast.
The Future of BIM: The Silicon Will Decide
The future of BIM will not be shaped by nostalgia or incumbency. It will be determined by which computing platforms scale best within the constraints of modern silicon.
The High-Frequency Era is over. The Voltage-Limited Era has arrived.
In this new environment, Intel’s x86 architecture has lost its automatic advantage. Hyperscalers have moved past it, prioritizing performance per watt over raw clock speed. Even Microsoft—the other half of the Wintel duopoly—has embraced ARM, developing its own ARM-based datacenter processors and aggressively advancing Windows on ARM through its partnership with Qualcomm following the latter’s acquisition of Nuvia.
Qualcomm’s Snapdragon X Elite made clear that ARM’s advantages are not confined to the cloud. Its Oryon cores deliver exceptional IPC and industry-leading performance per watt, validating ARM’s relevance across both datacenter and client computing.
Legendary chip architect Jim Keller has noted that, at the instruction-set level, ARM’s efficiency advantage over x86 may be as little as 5%.42 That assessment matters. It suggests x86’s inherent disadvantages in the new voltage-limited era are often overstated. Moreover, the x86 chip makers (Intel and AMD) have been rapidly moving in an ARM-like direction to address heterogeneous computing and today’s emphasis on performance per watt. AMD’s new Ryzen AI Max Pro series chips emulate Apple’s M-series SoCs by integrating CPU, GPU, and NPU cores on a single die, with a unified memory architecture that allows all cores to access a single, large pool of system memory. (see: YahooTech, “AMD’s New Ryzen AI Max CPUs are Built for MacBook Pro Competitors,” 6 Jan 2025).
x86 is not fundamentally broken—and Intel and AMD’s engineering prowess should never be dismissed. But momentum matters. And the momentum continues toward ARM, not away from it. As further evidence, we can note SoftBank’s acquisition of Ampere Computing this year for USD 6.5 billion. Ampere makes ARM chips for the datacenter and counts Oracle as a major client. Both SoftBank and Oracle are key players in the USD 500 billion AI datacenter project known as Stargate.
x86’s dominance in the datacenter hasn’t disappeared entirely; just the assumptions that led to it.
In a similar way, the assumptions that led to x86’s dominance in PCs have largely disappeared or changed. The challenging part is always the software ecosystems that need conversion. And this is where x86 still holds a major advantage over ARM: converting a software ecosystem requires sustained commitment. At first the progress is slow, but it builds quietly and then quickly.
Going forward, the physics point decisively in ARM’s direction.
For BIM and CAD industries built on x86-era assumptions, the mandate is clear:
Adapt—or risk being disrupted.
End Notes
For those who are interested in diving deeper into this article’s facts, claims, and arguments, we have over 42 annotated footnotes for this article, representing weeks’ worth of research and reading. These notes are available as a companion special feature on Architosh titled: “INSIDER Only: How Silicon Will Decide BIM’s Future — Footnotes.”
That companion feature is available exclusively to our Architosh INSIDER Member subscribers.


