News

Nvidia and Starting the Next Age of Super Computing


Robert Dow

“I believe that we need something big and new every four years or so.” – Jen Hsun Huang

Nvidia has been planning to be in the supercomputer business for the past three years.

The company has had stellar growth since the internet meltdown in 2001, and it has come to dominate almost every market it has entered, but Nvidia now faces limited growth opportunities in its classical markets and new competition. Its main rival in graphics chips, ATI, has renewed itself with a winning and very challenging price/performance product design and positioning. Nvidia’s integrated chip business is declining (as it is for all integrated chip suppliers), and it will soon have heavyweight Intel competing in its mainstream high-margin discrete desktop GPU business. If ever the company was in a squeeze play, it is now.

However, one of Jen Hsun Huang’s key strengths is his long-range vision. He shares it with investors, analysts, and the press whenever the occasion allows, and I’ve been surprised that some people continue to underestimate Huang’s advance planning. I was there 16 years ago when he had the vision for the gaming market, and I must confess I thought he was reaching a bit far. But he systematically built toward his vision, history has proved him right, and a lot of people have made money because of it.

When I examine the actions of the company over the past few years, I see a clear line of logical investments and industry initiatives that leads directly to Fermi, Nvidia’s new supercomputer processor. Others who see Fermi as a late-to-market, overstuffed, and maybe underpowered graphics chip are, in my opinion, missing the point, and they may be missing the opportunity.
Yes, Fermi is late for the graphics market, and Jen Hsun told an audience of press and analysts last week that he’d be much happier if he had a chip to go head to head with ATI, but in the meantime, he says, Nvidia has been building a support system for Fermi’s launch. “There is pent-up demand,” he said. “There are servers and computers being built to accommodate Fermi.”

There aren’t very many companies, if any, that can teach Nvidia how to build a graphics chip. No company in the world has the IP, personnel, hard-won experience, or passion for graphics that Nvidia does. I say this as one who knows what’s going on inside all the graphics companies, as well as the work being done in universities and by developers. I’ve watched Nvidia closely for the past 16 years; they have the right stuff.

So if Nvidia hasn’t screwed up on graphics other than being late with Fermi, then what have they done?

When the industry went to unified shaders* with DirectX 10 in late 2006, Nvidia brought out the GeForce 8 series, and ATI had the Radeon HD 2000. Even little S3 had the Chrome 500, and Intel claimed the GMA X3000 could handle a little programmable shading. In fact, ATI actually had unified shaders back in 2005 inside the Xbox 360 and helped Microsoft write the spec for DirectX 10. So although Nvidia would like to take credit for inventing unified shaders, ATI was actually there ahead of them; however, ATI didn’t reveal (to me at the time, at least) any notion of using them for anything more than an efficient graphics processor. (There’s more history to this involving 3Dlabs in 2003, but this is not an appropriate venue.)

If Ian Buck (developer of Brook at Stanford, now at Nvidia working on CUDA) is credited with seeing the potential of GPU computing, then Nvidia should be credited with getting it when Buck spoke; and, lest we forget, Nvidia has very strong ties to Stanford and other universities.

I think that’s when the light went on in Jen Hsun’s head, and his generals quickly caught on if they hadn’t already figured it out. And that’s when the following events began to unfold. Consider this timeline, and note that none of it has anything to do with graphics.

| Date | Event | Impact |
|------|-------|--------|
| May 2003 | Introduces first programmable shader GPU | The beginning of exposing the parallel processing capabilities of the GPU. |
| July 2003 | Ian Buck gives first GPU compute lecture | The constructs for generalized GPU programming are exposed. |
| March 2005 | GPU Gems book released by Nvidia | First book on how to program a GPU for parallel processing. |
| January 2006 | Acquires Stexar | X86 architects and engineers formerly with Intel bring Nvidia CPU know-how. |
| October 2006 | CUDA launched | Sets up the programming environment for Nvidia GPUs, develops new C compilers, and lays out the middleware platform for a computing environment (see the sketch below this table). |
| November 2006 | Stanford lecture | Ian Buck discusses how CUDA can solve compute-intensive problems and where GPU computing is headed; wakes up industry and universities. |
| April 2007 | GeForce 8000 (G80) released | First GPU with unified shaders, the basis of GPU compute. |
| September 2007 | Parallel programming classes | First course taught at the University of Illinois. Creates a generation of CUDA programmers who will go into industry with that bias. |
| October 2007 | University of Illinois submits spec to DOE for supercomputer project | Defines all the parameters needed for a new class of supercomputers, and they show up in Fermi. |
| February 2008 | Nvidia acquires Ageia | Gets a physics software library for CUDA; direct competition to Intel’s Havok acquisition. |
| February 2009 | FORTRAN added to CUDA | The language of scientific computing; now all old code can be threaded and re-compiled. |
| September 2009 | Nexus introduced for Visual Studio | The first off-the-shelf implementation of development tools giving developers access to the CPUs and GPUs in a system. |
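
To make the CUDA entry in the timeline concrete, here is a minimal sketch of the programming model CUDA introduced: an ordinary C function (a “kernel”) that the GPU executes in parallel across thousands of threads, one array element per thread. This is my own generic illustration of the model, not Nvidia code; the saxpy kernel, array sizes, and launch configuration are arbitrary choices.

```cuda
#include <stdio.h>
#include <stdlib.h>
#include <cuda_runtime.h>

// Each GPU thread computes one element of y = a*x + y: the core idea
// CUDA exposed in 2006 is mapping one small program onto thousands of
// the GPU's processors at once.
__global__ void saxpy(int n, float a, const float *x, float *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // unique thread index
    if (i < n)
        y[i] = a * x[i] + y[i];
}

int main(void)
{
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    float *hx = (float *)malloc(bytes), *hy = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { hx[i] = 1.0f; hy[i] = 2.0f; }

    float *dx, *dy;                                  // device (GPU) copies
    cudaMalloc(&dx, bytes);
    cudaMalloc(&dy, bytes);
    cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover all n elements.
    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, dx, dy);

    cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);
    printf("y[0] = %f\n", hy[0]);                    // expect 4.0

    cudaFree(dx); cudaFree(dy);
    free(hx); free(hy);
    return 0;
}
```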

I believe Fermi is the culmination of a long-term plan by Nvidia to participate in a very large new market and offset the loss of growth in its traditional markets. And ironically (and strangely not played up by Nvidia), the company’s new Fermi product was introduced on Enrico Fermi’s birthday, September 29, 2009.

* GPUs have hundreds of 32-bit processors in them. These processors were developed to run small specialized programs, called shaders, that were used to make computer graphics images better. Initially they performed pixel shading only, but the term stuck; it is now used for other graphics pipeline stages as well and has come to mean “processor.”

In the early 2000s there were two types of processors in a graphics chip: the front-end processor, which became known as the vertex shader and dealt with setting up the geometry of the model or scene, and the back-end processor, known as the pixel shader. These two independent but similar processors were never in balance; one would sit idle while the other was busy.

It was decided to have just one type of processor and apply as many of them as needed to the work at hand, vertex and/or pixel, and so the “unified shader” was born.
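
A rough way to picture that change, in CUDA-style code (my own conceptual illustration with hypothetical kernels; real graphics drivers do not work this way at the source level): with unified shaders, vertex work and pixel work are simply two different programs scheduled onto the same pool of processors, so neither kind has to sit idle.

```cuda
#include <cuda_runtime.h>

// Conceptual sketch only: "vertex-style" and "pixel-style" work expressed
// as two kernels that run on the same pool of unified processors. The
// kernels and data here are hypothetical stand-ins for pipeline stages.

__global__ void transform_vertices(float3 *v, int n, float scale)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) { v[i].x *= scale; v[i].y *= scale; v[i].z *= scale; }
}

__global__ void shade_pixels(float *px, int n, float brightness)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) px[i] *= brightness;
}

int main(void)
{
    const int nv = 10000;     // vertices in this "frame"
    const int np = 1 << 20;   // pixels in this "frame"
    float3 *v;  float *px;
    cudaMalloc(&v,  nv * sizeof(float3));
    cudaMalloc(&px, np * sizeof(float));

    // The same processors serve whichever stage has work, rather than a
    // fixed vertex unit idling while fixed pixel units are saturated.
    // (Buffers are left uninitialized; this sketch only shows scheduling.)
    transform_vertices<<<(nv + 255) / 256, 256>>>(v, nv, 2.0f);
    shade_pixels<<<(np + 255) / 256, 256>>>(px, np, 0.5f);
    cudaDeviceSynchronize();

    cudaFree(v);
    cudaFree(px);
    return 0;
}
```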