News

Intel betting on ROI and TCO to win in AI

Data center operators are throwing money at AI infrastructure today, but that won't last.

Jon Peddie

Intel is shifting its AI strategy to target cost-conscious users, rather than competing directly with Nvidia. Hyperscalers invest heavily in AI infrastructure, but most businesses require more targeted solutions. Intel’s Anil Nanduri says companies are reassessing massive AI models, opting for smaller, task-based ones with lower performance needs. Intel’s Gaudi 3 accelerator enables economical systems for enterprises, leveraging open-source models and software.

Data center. (Source: JPR)

Intel has said it sees neither a way nor a reason to compete with Nvidia on a TOPS-for-TOPS basis, and that its path instead is to go after more cost-conscious users.

Hyperscalers like Meta, Microsoft, Oracle, and X are investing heavily in AI data center infrastructure, prioritizing cutting-edge capabilities over immediate profitability. OpenAI, for example, expects $5 billion in losses on $3.6 billion in revenue. However, most businesses cannot afford such investments. They require more targeted AI solutions tailored to their specific needs.

According to Anil Nanduri, head of Intel’s AI acceleration, companies are reassessing the value of massive AI models. Many will opt for smaller, task-based models with lower performance requirements, rather than relying on a single, all-encompassing model. This shift reflects a growing focus on practicality and return on investment.

Dylan Martin of CRN wrote an outstanding article on Nvidia outrunning Intel and quotes Nanduri as saying, “The world we are starting to see is people are questioning the [return on investment], the cost, the power and everything else. This is where—I don’t have a crystal ball—but the way we think about it is, do you want one giant model that knows it all?”

Intel believes the answer is “no” for many businesses and that they will instead opt for smaller, task-based models that have lighter performance needs.

Nanduri said that while Gaudi 3 is “not catching up” to Nvidia’s latest GPU from a head-to-head performance perspective, the accelerator chip is well-suited to enable economical systems for running task-based models and open-source models on behalf of enterprises, which is where the company has “traditional strengths.”

Rohit Badlaney, general manager of IBM Cloud product and industry platforms, told CRN the company had tested Intel’s “price-performance advantage” claims for Gaudi 3 and was impressed with what it found.

Intel also hopes to gain an advantage through its open software stack and criticizes proprietary stacks like Nvidia’s.

So, today’s AI growth will go mostly to Nvidia, with a bit to latecomer AMD. Intel will fish in smaller ponds until the big guys’ investors start questioning the CapEx costs.