News

x86 isn’t do-or-die for Nvidia

...but pass on x86, and the company better execute flawlessly on its GPU strategies and technologies

Robert Dow

Intel’s Sandy Bridge and AMD’s Fusion are right around the corner, and their die-integrated combinations of x86 CPU and GPU have heightened interest in how one very specific company will navigate this new landscape of SoCs. Nvidia, the dominant provider of discrete GPUs over the past decade, cannot make the same leap in silicon integration that its two key rivals are about to make. The simple reason: it has no x86 IP in its technology arsenal.

The easy prediction, already made by many, is that Nvidia has no choice but to somehow get into the x86 game, or else begin a slow fade to irrelevance as this new class of integrated CPU+GPU drives Nvidia’s bread-and-butter discrete GPU to extinction, or at least to a niche role. It’s usually easy to tear down knee-jerk, off-the-cuff forecasts built on oversimplified views of what is, beneath the surface, a far more complex issue. But in this case, the argument isn’t so easy to dismiss.

If nothing changed in the computing status quo, I’d likely come to the same conclusion. But things will change, dramatically so, and Nvidia is going to be rallying its forces to disrupt the norms of computing technology and usage as much as any other vendor. And all the players with a stake in tomorrow’s computing dollars are going to have to pay close attention and adapt, not just Nvidia.

No question, today’s mainstream computing status quo — where a CPU is mandatory, but a GPU isn’t — will raise the pressure on an x86-less vendor like Nvidia, one which can’t directly compete with the CPU+GPU hybrids from AMD and Intel. At least publicly, Nvidia doesn’t acknowledge that it ever will, or wants to, build and market a device integrating x86. Of course, I’ve no doubt that if there had been a reasonable path for Nvidia to tread — one leading to a proven, high-performance, and court-legal x86 core — the company would have taken it long ago and have that core in hand today. The reality, however, is that it doesn’t, and pushing forward without x86 will make the company’s job tougher.

But Nvidia’s not simply standing by, waiting for the status quo to either put a deeper dent in discrete GPUs or knock the company off its well-earned perch. Even if Wall Street accepted that fate, we can’t imagine Nvidia’s CEO Jen-Hsun Huang would. So the one thing we know for sure is that Nvidia won’t be a bystander in this still-unfolding story. None of these developments is catching Nvidia by surprise; the company long ago saw the writing on the wall — that its opportunities for a seat at the table of high-volume silicon suppliers would be more limited without x86, and that it would need to adapt.

So it’s aggressively pursuing at least three avenues to make sure its x86-less IP can maintain broad-based demand, whichever device ends up dominating the mainstream computing landscape down the road. With its Tegra line, the company found a way to build integrated SoCs for the high-volume handheld market without x86. Smartphones don’t need x86; ARM works fine, thanks. Not surprisingly, the company is very bullish on the prospects for the smartphone, evolving to exploit ever-more-powerful on-chip resources while tapping the limitless compute capacity of the “cloud”. That combination, the company envisions, will yield the device capable of displacing the Wintel PC (or Mac) as the primary “personal computer” most of us rely on.

And then there’s the Windows-on-ARM wild card, perhaps a long shot and, at the very best, an avenue that won’t be here in the short term. It’s an idea that’s been raised in the past in the context of handhelds, but more recently — and in a broader context — by Rick Merritt of EE Times. Should Windows 8 (in some form) appear on ARM, Nvidia’s playing field for Tegra on PCs opens wide up, and the lack of x86 IP suddenly becomes a very minor and quickly forgotten issue as the company targets the emerging range of PC-type devices that fit between phones and traditional PCs.

But there are other ways to battle further incursion from integrated graphics hardware on the platform most of us, for now, still consider principal: the x86-based PC. First and foremost, keep building discrete GPUs that perform well above and beyond CPU-integrated implementations. The company surely can, because freed of the tight constraints on die area, power consumption and thermals, a discrete part can deliver what an integrated GPU can’t match: the rendering performance that hard-core gamers and workstation professionals continue to demand and budget extra dollars for.

Which brings us to Nvidia’s third, and perhaps most critical, technology initiative for the future: to have discrete GPUs perceived not just as graphics devices but as multi-purpose accelerators — ideally accelerators seen as every bit as indispensable as CPUs. Nvidia’s admirable campaign in GPU computing (a.k.a. GPGPU, general-purpose computing on GPUs) has increased the value proposition of its discretes tremendously in the workstation and supercomputing spaces, multiplying performance not by single digits but by tens and hundreds for key applications like Ansys, 3ds max, Landmark Geoprobe and MATLAB. But as impressive as the results of that campaign have been, success in those spaces is not going to be enough to keep discrete GPU volume at the levels where Nvidia (or its investors) will want it, at least not in the long term.
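
As a purely illustrative aside (a generic, textbook-style sketch, not code drawn from any of the applications named above), the pattern GPU computing is built on looks like this in CUDA: a simple SAXPY loop in which every element of a million-entry array gets its own GPU thread. The speedups Nvidia touts come from exactly this kind of fan-out, applied to far heavier arithmetic than a single multiply-add.

```cuda
// Illustrative SAXPY (y = a*x + y): each GPU thread computes one element,
// the basic data-parallel pattern that GPU computing relies on.
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;   // one thread per element
    if (i < n)
        y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;                           // one million elements
    const size_t bytes = n * sizeof(float);

    // Host-side input data
    float *hx = (float *)malloc(bytes);
    float *hy = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { hx[i] = 1.0f; hy[i] = 2.0f; }

    // Copy the data to the GPU
    float *dx, *dy;
    cudaMalloc((void **)&dx, bytes);
    cudaMalloc((void **)&dy, bytes);
    cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover all n elements
    const int threads = 256;
    const int blocks = (n + threads - 1) / threads;
    saxpy<<<blocks, threads>>>(n, 2.0f, dx, dy);

    // Bring the result back and spot-check it
    cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);
    printf("y[0] = %f\n", hy[0]);                    // expect 4.0

    cudaFree(dx); cudaFree(dy);
    free(hx); free(hy);
    return 0;
}
```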

It’s on the right track, with plenty of momentum in GPU computing, and the advantages of GPUs for academic, research, medical, geoscience and a host of other high-performance computing applications have been proven. But if Nvidia wants to ensure a high-volume home for its core products, it needs to raise their value proposition in high-volume graphics markets by taking GPU computing to the masses. For the discrete GPU to continue to justify its place in the add-in slots and motherboards of mainstream computers — particularly in the now-starting era of integrated CPU+GPU — it’s going to need consumer and corporate killer apps, not just medical-research ones.

It appears to recognize as much: from the beginning of Nvidia’s more conscious, public push into GPU computing, it has highlighted applications like Elemental’s Badaboom. One of the early applications Nvidia showed off, Badaboom exploits GPUs to accelerate video transcoding, a task more and more consumers find themselves needing in the age of new sources of video (Internet, Netflix, DVRs) and a host of mobile devices to play them on (smartphones, iPods, tablets). Moving forward, Nvidia needs more Badabooms, preferably a whole bunch more. And at Nvidia’s GPU Technology Conference in September, we saw examples of GPU power applied to a host of compute problems, several of which we see as having potential “killer app” appeal.
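
To give a flavor of why transcoding maps so well onto a GPU, here is a hypothetical CUDA sketch of a single per-pixel stage, converting an RGB frame to luma. The rgb_to_luma kernel and convert_frame helper are our own illustration, not Badaboom’s actual code; the point is simply that every pixel is independent, so an entire frame fans out across thousands of GPU threads.

```cuda
// Hypothetical per-pixel stage of a GPU transcode pipeline: convert a packed
// RGB frame to 8-bit luma using BT.601 weights. Not Badaboom's code; just an
// illustration of why frames map naturally onto thousands of GPU threads.
#include <cstdint>
#include <cuda_runtime.h>

__global__ void rgb_to_luma(const uint8_t *rgb, uint8_t *luma, int width, int height) {
    int x = blockIdx.x * blockDim.x + threadIdx.x;   // pixel column
    int y = blockIdx.y * blockDim.y + threadIdx.y;   // pixel row
    if (x >= width || y >= height) return;

    int idx = (y * width + x) * 3;                   // 3 bytes per RGB pixel
    float l = 0.299f * rgb[idx] + 0.587f * rgb[idx + 1] + 0.114f * rgb[idx + 2];
    luma[y * width + x] = (uint8_t)l;
}

// Launch the kernel for one frame; both buffers are assumed to already
// reside in GPU memory (a real transcoder would stream frames through).
void convert_frame(const uint8_t *d_rgb, uint8_t *d_luma, int width, int height) {
    dim3 block(16, 16);                              // 256 threads per block
    dim3 grid((width + block.x - 1) / block.x,
              (height + block.y - 1) / block.y);
    rgb_to_luma<<<grid, block>>>(d_rgb, d_luma, width, height);
}
```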

Skip the x86 if it makes sense, Nvidia, and we know a lot of reasons why that decision might make sense — risk, cost and legalities included. But if that’s the real tack, keep the foot on the floor when it comes to GPU computing, and cover the everyday compute-intensive tasks as well as the esoteric ones. Integrated GPUs (wherever they’re integrated) will continue to serve the graphics needs of a bigger and bigger share of the computing masses, while x86 CPUs will remain mandatory to run today’s killer apps, most notably Windows, Office, browsers, email and the like. But turn the tables and make GPUs mandatory to run a new breed of killer apps — just a few will do — and you’ve engineered a dramatic shift in the status quo.

Because if you don’t, then in the long term (and I’m certainly not going to stick my neck out on dates here), the volume represented by hard-core gamers and workstation users alone won’t sustain the economy of scale needed to fund each multi-billion-dollar generation of discrete GPU required to keep up with the Joneses. In the age now starting with Fusion and Sandy Bridge, neither AMD nor Intel will lose graphics-chip economy of scale. And to compete effectively over the long term, Nvidia can’t afford to lose it, either.

Nvidia might be the company most under the gun to adapt to this new era of integration, but it’s not the only major silicon supplier that’s going to feel the pressure. Intel and AMD aren’t exactly in the clear, either. Intel has to establish itself, once and for all, as a premier supplier of graphics IP, or risk giving AMD a big leg up in mainstream computing platforms. AMD, for its part, risks putting itself in the very unattractive position of competing against an arch-rival’s manufacturing capability that’s always one generation ahead, unless GlobalFoundries can pick up its pace and close the gap.

Then again, stepping back to take in the big picture, maybe things aren’t really so different from so many of the technology crossroads of times past. This step forward in silicon integration has one very important, overriding characteristic in common with the countless previous advances in integration: the motivation is always the same, to reduce cost, reduce footprint and reduce power. Though the specifics of this transition may differ, in the end it’s a similar story: advances in technology create inflection points the industry’s players must face, and the vendors that can anticipate and adapt will survive. And those that can’t, won’t.