Keeping up with the competition can be difficult no matter the size of the company. Take Intel, which has been struggling to keep up with competition while remaining at the 10nm node (for instance, with its 10nm “SuperFin” technology).
Last summer, Apple announced that it would be dropping Intel silicon from MacBooks to make way for its own in-house processors. Apple has not stated its reasons explicitly, but they likely center on the company wanting full ownership of its products.
The result of this switch was Apple’s M1 processor, which includes 16 billion transistors on a 5nm process. If the age-old Mac vs. PC debate ever died, there’s no doubt that this whole episode has brought it back to life.
Apple’s M1 is the company’s first in-house designed Mac processor. Image used courtesy of Apple
Intel, through Tom’s Hardware, has shared a slide presentation (not yet published on its own site) purporting to show its processors’ superiority. Through in-house testing, Intel benchmarked the i7 and Apple’s M1 directly against one another.
Why Benchmarking Can Be Subjective
Before analyzing the metric results, it’s worth discussing how companies typically benchmark the performance of their processors.
Benchmarking two processing units against one another typically involves standard performance criteria when running a baseline set of instructions. There are hundreds of benchmarks meant to test different performance metrics for a CPU and provide objective comparisons. Benchmarking can be invaluable in many cases; the issue here, however, is that it is nearly impossible to get objective results when Intel is the one comparing its own processors against another company’s.
An example of why benchmarking two different architectures, such as x86 and Arm, makes a fair comparison difficult. Image used courtesy of Faheed Syed
The two processors bear inherent differences because they were designed for different applications, even if the general workload may be the same. For starters, the M1 is built on an Arm-based architecture, while the i7 is x86. The M1 is designed to work specifically with Apple’s operating systems, while the i7 mainly supports Windows. Each processor also interfaces with different memory subsystems and peripherals.
Since each processor’s optimization is different, how can we objectively determine which processor is “better” by utilizing broad benchmarks?
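To see how easily benchmark numbers can be steered, consider a minimal timing micro-benchmark. This is an illustrative sketch (not one of Intel’s actual tests): the `int_heavy` and `alloc_heavy` workloads are hypothetical stand-ins showing that the “winner” depends entirely on which workload the tester chooses to report.

```python
import statistics
import time

def benchmark(fn, *args, warmup=3, runs=10):
    """Time a callable, discarding warm-up runs; return the median wall time in seconds."""
    for _ in range(warmup):
        fn(*args)  # warm caches, JITs, allocators before measuring
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        fn(*args)
        samples.append(time.perf_counter() - start)
    return statistics.median(samples)

# Two workloads that stress different parts of the system:
# integer arithmetic vs. memory allocation and string formatting.
def int_heavy(n):
    total = 0
    for i in range(n):
        total += i * i
    return total

def alloc_heavy(n):
    return [str(i) for i in range(n)]

int_time = benchmark(int_heavy, 100_000)
alloc_time = benchmark(alloc_heavy, 100_000)
# A vendor picking only the workload its chip handles best can
# report a "win" without technically falsifying anything.
```

The same selection effect applies at full scale: a suite weighted toward x86-optimized codepaths will flatter an x86 part, just as one weighted toward Arm-native applications would flatter the M1.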
What the Results Say
The benchmarking results turned out to be mostly non-definitive for a few reasons.
For starters, the results don’t compare analogous hardware systems. For example, Intel compares a Core i7-1185G7 running Windows 10 Pro with a MacBook Pro running macOS 11.0.1.
However, when comparing power efficiency, Intel changed its benchmarked processor to a Core i7-1165G7 inside an Acer laptop and compared it to a MacBook Air instead of a MacBook Pro. This is problematic because the Air has a notably smaller battery than either the Pro or the Acer.
i7 vs. M1 content creation performance. Image used courtesy of Tom’s Hardware and Intel
Beyond this, Intel used x86-friendly benchmarks to compare the two processors. For example, when comparing AI performance, Intel used AI tests from Topaz Labs, which take advantage of x86 architectural design features. Intel also included unconventional benchmarks focused on Microsoft Office applications while omitting popular, conventional benchmarks like Geekbench.
Except for battery life, which the M1 won marginally, Intel’s results showed the i7 processors beating the M1 in almost every way.
What the Results Really Say
Choosing between an i7-based and an M1-based computer comes down to which ecosystem the consumer prefers: Windows or macOS. Regardless of which processor outperforms the other in a given test, Apple doesn’t sell the M1 to third parties, so Intel doesn’t have to worry about engineers designing M1-based systems in place of i7-based ones.
Do you find performance benchmarks useful when sourcing processors for a design project? Why or why not? Share your thoughts in the comments below.