News

Intel vs. Arm: The Microsoft factor

Copilot+ is driving the AI PC right now, and Intel needs a processor that can support it.

David Harold

Intel’s Lunar Lake architecture competes with Arm in low-power computing, emphasizing efficiency and AI capabilities, but Arm is pushing new AI initiatives like Arm Compute Subsystems for Client. Software, particularly Microsoft’s Copilot+, will play a vital role in AI PC success. Intel highlights hardware readiness for Copilot+ but was absent from the launch. The market dynamics involve intense competition between Intel and Arm in the AI PC realm.

Michelle Johnston Holthaus, executive vice president and general manager of the Client Computing Group at Intel, answers questions about Copilot+ at Intel Tech Tour 2024. (Source: JPR)

What do we think? Intel is in a tough spot. Microsoft has raised the AI PC stakes with the NPU-focused Copilot+, which demands 40-plus NPU TOPS, and Intel has nothing like that on the 8 million-plus Meteor Lake PCs it has shipped to date. Microsoft doesn’t seem to want to throw Intel a bone either, with launch messaging focused on Qualcomm and Windows on Arm. Much will depend on whether Arrow Lake, the evolution of Meteor Lake, can deliver Copilot+ or whether Lunar Lake must carry the weight alone, which would make the already amorphous AI PC definition even fuzzier. Tiered performance is one thing, but key software that simply won’t run is relatively unheard of today and will be tough for Intel to position.

Lunar Lake is Intel’s answer for Copilot+ but not Microsoft’s

With Lunar Lake, Intel’s philosophy is clearly that the best defense is a strong offense. This is an impressive new architecture designed to redefine how x86 is regarded by taking the fight to Arm’s home turf: low-power, high-efficiency computing devices. It is similar in design philosophy to Apple’s M3 or Qualcomm’s Snapdragon X Elite: all three aim to balance high performance with the energy efficiency crucial to mobile and portable devices. That means integrating more efficient CPU and GPU cores with advanced AI and machine learning capabilities to accelerate tasks such as image processing and natural language processing. Arguably, Apple is more aggressive than the other two on GPU, whereas all three have pushed NPU performance and landed in roughly the same place (38–48 TOPS).
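For context on what those TOPS figures mean, peak NPU TOPS is essentially the number of multiply-accumulate units times the clock, with each MAC counted as two operations. A rough, illustrative calculation follows; the MAC count and clock are our own made-up numbers, not any vendor’s disclosed design:

```python
# Generic back-of-envelope for quoted NPU "TOPS" figures (illustrative only;
# the MAC count and clock below are made up, not any vendor's real design).

def peak_tops(mac_units: int, clock_ghz: float) -> float:
    # Each multiply-accumulate counts as 2 ops; TOPS = trillions of ops per second.
    return 2 * mac_units * clock_ghz * 1e9 / 1e12

# e.g., ~11,000 INT8 MACs at ~1.8 GHz lands in the ~40 TOPS class
print(round(peak_tops(mac_units=11_000, clock_ghz=1.8), 1))  # ~39.6
```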

Intel says the perception that Arm will always win in this ultra-mobile computing territory because of its more efficient RISC ISA is now wrong. Intel believes its own innovations in power efficiency, built on the architecture’s more efficient E-cores and sophisticated use of fabrics, caches, and the like, make it the winner.

Lunar Lake’s core performance comes not from clock frequency but from higher IPC (instructions per cycle), which delivers similar single-thread performance at half the power.
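The reasoning is familiar dynamic-power scaling: performance is roughly IPC times frequency, while dynamic power rises superlinearly with frequency once voltage scaling is included. A back-of-envelope sketch with illustrative numbers of our own (not Intel’s figures) shows how an IPC-led core can match a frequency-led one at roughly half the dynamic power:

```python
# Back-of-envelope sketch (not Intel's numbers): why raising IPC instead of
# clock frequency can deliver similar single-thread performance at lower power.
# Assumes classic CMOS dynamic-power scaling, P ~ C * V^2 * f, with supply
# voltage scaling roughly linearly with frequency in the DVFS range.

def perf(ipc, freq_ghz):
    # Single-thread performance ~ instructions per cycle x clock frequency
    return ipc * freq_ghz

def dyn_power(freq_ghz, base_freq_ghz=4.0, base_power_w=20.0):
    # P ~ C * V^2 * f and V ~ f  =>  P ~ f^3 (rough DVFS approximation)
    return base_power_w * (freq_ghz / base_freq_ghz) ** 3

baseline = {"ipc": 1.0, "freq": 4.0}    # frequency-led design (illustrative)
high_ipc = {"ipc": 1.25, "freq": 3.2}   # IPC-led design at a lower clock

for name, cfg in [("baseline", baseline), ("high-IPC", high_ipc)]:
    print(name,
          "perf:", round(perf(cfg["ipc"], cfg["freq"]), 2),
          "power:", round(dyn_power(cfg["freq"]), 1), "W")
# Both configurations score 4.0 in relative performance, but the high-IPC core
# at 3.2 GHz draws roughly half the dynamic power of the 4 GHz baseline.
```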

Meanwhile, Intel’s Thread Director steers the system toward efficiency rather than always dispatching the highest-performance core for a given task, even when the task seems to demand it. The emphasis is on efficiency over raw performance.

Power balance is optimized per load type (each core can be individually powered on or off, across four power rails). There are enhanced sleep states, too, and machine learning-based workload classification and frequency control that dynamically adjust processor speed. In short, many design choices target the advantages Arm is perceived to get simply from its ISA.
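To make the idea concrete, here is a deliberately simplified sketch of efficiency-first scheduling and frequency control; it is our own toy illustration, not Intel’s Thread Director, power-rail design, or workload classifier:

```python
# Toy illustration of efficiency-first scheduling and frequency control.
# This is our own simplification, not Intel's Thread Director, power rails,
# or workload-classification model.

from dataclasses import dataclass

@dataclass
class Core:
    name: str
    max_perf: float        # relative peak performance
    watts_per_perf: float  # power cost per unit of performance (lower = more efficient)

CORES = [
    Core("E-core", max_perf=1.0, watts_per_perf=1.0),
    Core("P-core", max_perf=2.5, watts_per_perf=2.2),
]

def classify(task_load: float) -> str:
    # Stand-in for an ML-based workload classifier: bucket by demanded load.
    return "background" if task_load < 0.8 else "bursty"

def schedule(task_load: float) -> tuple[str, float]:
    """Pick the most efficient core that still covers the load,
    and clock it only as high as the load requires."""
    candidates = [c for c in CORES if c.max_perf >= task_load]
    core = min(candidates, key=lambda c: c.watts_per_perf)  # efficiency first
    freq_fraction = task_load / core.max_perf               # don't over-clock
    return core.name, round(freq_fraction, 2)

for load in (0.3, 0.9, 2.0):
    print(classify(load), load, "->", schedule(load))
# Light work stays on the E-core at a partial clock, and even a "bursty" load
# within the E-core's reach stays there; only loads beyond it wake the P-core.
```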

Michelle Johnston Holthaus. (Source: Intel)

“I don’t think there has ever been a time that is more exciting than today in the PC market,” said Michelle Johnston Holthaus, executive vice president and general manager of the Client Computing Group at Intel. And of course, she’s right. But the main reason she’s right is the rise of the Arm-based PC, which might account for 30% of the market by 2026, according to Canalys.

Arm is comparatively weak on AI, hence a strategic push into AI driven by SoftBank, which may include acquiring assets from struggling UK chipmaker Graphcore. Ahead of Computex, Arm launched Arm Compute Subsystems (CSS) for Client, which aims to meet the growing demand for efficient AI compute. CSS combines the latest Arm compute IP with production-ready physical implementations on advanced process nodes. The goal is to let partners cut development time and effort, but above all to deliver better AI inferencing. Arm claims CSS for Client delivers a more than 30% increase in compute and graphics performance and a 59% improvement in AI inference speed.

While Arm is targeting mobile phones first and PCs second, the high-end AI PC is where Intel is pushing, with a strong NPU and GPU offering that offsets Arm’s CPU advantages. However, Arm has strong customers, with Qualcomm delivering its own 40-plus-TOPS Hexagon NPU in the Snapdragon X Elite.

Beyond hardware, the make-or-break factor for the AI PC is likely to be software, and while Intel has 350 features in development with over 100 partners, the key AI software story today is Microsoft’s Copilot+. Intel says it is hardware-ready for Copilot+, but the software story is weaker, with Microsoft prioritizing Qualcomm’s Arm silicon for the Copilot+ launch. Arguably, that is because Copilot+ won’t run on any other processor currently available, but I’m sure Intel would have provided some machines for the launch if Microsoft had wanted them. Intel is courting Microsoft in other arenas, having contributed Intel Neural Compressor to the open-source ONNX ecosystem, the default neural model format on Windows.
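For a sense of what neural compression buys in practice, quantization shrinks a model’s weights to the low-precision integer math NPUs prefer. A minimal, generic example using ONNX Runtime’s stock quantization tooling (not Intel Neural Compressor itself; the file names are placeholders):

```python
# Generic illustration of the kind of compression at stake: dynamic INT8
# quantization of an ONNX model with ONNX Runtime's built-in tooling.
# (Stock onnxruntime API, not Intel Neural Compressor; paths are placeholders.)

from onnxruntime.quantization import quantize_dynamic, QuantType

quantize_dynamic(
    model_input="model_fp32.onnx",   # original FP32 model (placeholder name)
    model_output="model_int8.onnx",  # smaller INT8 model for NPU-friendly inference
    weight_type=QuantType.QInt8,     # quantize weights to signed 8-bit integers
)
```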

Lunar Lake delivers more than the minimum TOPS requirement for Copilot+ and will go to market in Q3 2024, with Intel expecting Copilot+ downloads to be available for Lunar Lake PCs on day one. Arm-based PCs with Copilot+ are available starting in June.

“Arm was first, but we will be in the market and will ship more,” said Holthaus during a media Q&A. What is more, she was adamant that “Microsoft needs us for scale and reach.”

In terms of which will perform better, Holthaus said, “Snapdragon X Elite is not out yet, and we think we are competitive based on public statements.”

Intel’s CPU Lead Architect Stephen Robinson was more evasive when asked to compare Lunar Lake with Snapdragon X Elite. “I can’t talk about any comparison,” he said. Ori Lempel, senior principal engineer for P-core at Intel, was clear that he believes x86 is not a big penalty right now compared to Arm. He said: “I don’t use CISC versus RISC, or Arm versus x86, as an excuse for my microarchitecture being less performant than others.”

When it comes to the AI PC and Copilot+, the elephant in the room is the more than 8 million Meteor Lake-based, AI PC-branded devices already shipped. With 10 TOPS of NPU performance, they won’t handle Copilot+. Intel says Meteor Lake can do other things today and hopes that, as language models become more efficient, it will be able to run more of them in the future. It can also use the cloud for AI today, although that argument ran very much counter to the “local processing is good” message delivered across the Lunar Lake launch.

“It is overstated to say every user needs a Copilot+ PC,” said Holthaus in conclusion to questions about Copilot+ and Meteor Lake, which seemed like an odd message at an AI PC platform launch event, and one we think will hold up only for a brief window. It is our expectation that the AI PC will push PC refresh rates faster than anything since Wi-Fi.