Jon Peddie Blogs

Nvidia and Starting the Next Age of Super Computing

Posted by Jon Peddie on October 7th 2009 | Discuss
Categories: Blogs, Engineering and Development
Tags: nvidia opencl directx cuda fermi compute

“I believe that we need something big and new every four years or so.” – Jen Hsun Huang

Nvidia has been planning to be in the supercomputer business for the past three years.

The company has had stellar growth since the internet meltdown in 2001, and it has come to dominate almost every market it has entered. But Nvidia now faces limited growth opportunities in its classical markets, and new competition. Its main rival in graphics chips, ATI, has renewed itself with a winning and very challenging price/performance product design and positioning. Nvidia’s integrated-chip business is declining (as it is for all integrated-chip suppliers), and the company will soon have heavyweight Intel competing in its mainstream high-margin discrete desktop GPU business. If ever a company was in a squeeze play, Nvidia is now.

However, one of Jen Hsun Huang’s key strengths is his long-range vision. He shares it with investors, analysts, and the press whenever the occasion allows, and I’ve been surprised that some people continue to underestimate Huang’s advance planning. I was there 16 years ago when he laid out his vision for the gaming market, and I must confess I thought he was reaching a bit far. But he built systematically toward that vision, history has proved him right, and a lot of people have made money because of it.

When I examine the actions of the company over the past few years, I see a clear line of logical investments and industry initiatives that leads directly to Fermi, Nvidia’s new supercomputer processor. Others who see Fermi as a late-to-market, overstuffed, and maybe underpowered graphics chip are, in my opinion, missing the point, and may be missing the opportunity.
Yes, Fermi is late for the graphics market, and Jen Hsun told an audience of press and analysts last week that he’d be much happier if he had a chip to go head to head with ATI. But in the meantime, he says, Nvidia has been building a support system for Fermi’s launch. “There is pent-up demand,” he said. “There are servers and computers being built to accommodate Fermi.” There aren’t many companies, if any, that can teach Nvidia how to build a graphics chip. No company in the world has the IP, personnel, hard-won experience, or passion for graphics that Nvidia does. I say this as one who knows what’s going on inside all the graphics companies, as well as the work being done in universities and by developers. I’ve watched Nvidia closely for the past 16 years; they have the right stuff.

So if Nvidia hasn’t screwed up on graphics other than being late with Fermi, then what have they done?

When the industry went to unified shaders* with DirectX 10 in late 2006, Nvidia brought out the GeForce 8 series and ATI had the Radeon HD 2000. Even little S3 had the Chrome 500, and Intel claimed the GMA X3000 could handle a little programmable shading. In fact, ATI had unified shaders back in 2004 in its design for the Xbox 360, and it helped Microsoft write the spec for DirectX 10. So although Nvidia would like to take credit for inventing unified shaders, ATI was actually there ahead of them; however, ATI didn’t reveal (to me at the time, at least) any notion of using them for anything more than an efficient graphics processor. (There’s more history to this involving 3Dlabs in 2003, but this is not an appropriate venue.)

If Ian Buck (developer of Brook at Stanford, now at Nvidia on CUDA) gets the credit for seeing the potential of GPU computing, then Nvidia gets the credit for grasping it when Buck spoke. And lest we forget, Nvidia has very strong ties to Stanford and other universities.

I think that’s when the light went on in Jen Hsun’s head, and his generals quickly caught on if they hadn’t already figured it out. That’s when the following events began to unfold. Consider this timeline, and note that none of it has anything to do with graphics.

Date | Event | Impact
May 2003 | Introduces first programmable-shader GPU | The beginning of exposing the parallel-processing capabilities of the GPU
July 2003 | Ian Buck gives first GPU-compute lecture | The constructs for generalized GPU programming are exposed
March 2005 | GPU Gems book released by Nvidia | First book on how to program a GPU for parallel processing
January 2006 | Acquires Stexar | X86 architects and engineers formerly with Intel bring Nvidia CPU know-how
April 2007 | GeForce 8000 (G80) released | First GPU with unified shaders, the basis of GPU compute
October 2006 | CUDA launched | Sets up a programming environment for Nvidia GPUs, develops new C compilers, lays out a middleware platform for the computing environment
November 2006 | Stanford lecture | Ian Buck discusses how CUDA can solve compute-intensive problems and where GPU computing is headed; wakes up industry and universities
September 2007 | Parallel-programming classes | First course taught at the University of Illinois; creates a generation of CUDA programmers who will go into industry with that bias
October 2007 | University of Illinois submits spec to DOE for supercomputer project | Defines all the parameters needed for a new class of supercomputers, and they show up in Fermi
February 2008 | Nvidia acquires Ageia | Gets a physics software library for CUDA; direct competition to Intel’s Havok acquisition
February 2009 | FORTRAN added to CUDA | The language of scientific computing; now all that old code can be threaded and recompiled
September 2009 | Nexus introduced for Visual Studio | The first off-the-shelf development tools giving developers access to both the CPUs and GPUs in a system

I believe Fermi is the culmination of a long-term plan by Nvidia to participate in a very large new market and offset the loss of growth in its traditional markets. And ironically (and, strangely, not played up by Nvidia), the company’s new Fermi product was introduced on Enrico Fermi’s birthday, September 29, 2009.

* GPUs have hundreds of 32-bit processors in them. These processors were developed to run small, specialized programs that made computer-graphics images better. Initially they performed pixel shading only, hence the name “shader.” The term stuck, is now applied to other graphics-pipeline stages as well, and has come to mean simply “processor.”

In the early 2000s there were two types of processors in a graphics chip: the front-end processor, which became known as the vertex shader and dealt with setting up the geometry of the model or scene, and the back-end processor, known as the pixel shader. These two independent but similar processors were never in balance; one would sit idle while the other was busy.

The solution was to have just one type of processor and apply as many of them as needed to the work at hand, vertex and/or pixel. That became the “unified shader.”
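That imbalance can be illustrated with a toy model. All unit counts and workload numbers below are invented for illustration; the point is only the load-balancing arithmetic, not any real GPU's behavior.

```python
# Toy model: fixed vertex/pixel shader split vs. a unified shader pool.
# Workloads are (vertex_work, pixel_work) per frame, in arbitrary "unit-cycles".

def frame_time_split(vertex_work, pixel_work, v_units=4, p_units=4):
    # Fixed hardware: vertex units can't help with pixel work and vice
    # versa, so the busier side sets the frame time while the other idles.
    return max(vertex_work / v_units, pixel_work / p_units)

def frame_time_unified(vertex_work, pixel_work, units=8):
    # Unified hardware: one pool handles both kinds of work,
    # so the total work is spread evenly across all units.
    return (vertex_work + pixel_work) / units

# A geometry-heavy frame followed by a fill-heavy frame:
for v, p in [(60, 20), (20, 60)]:
    print(frame_time_split(v, p), frame_time_unified(v, p))
# Both frames: 15.0 with the fixed split, 10.0 with the unified pool.
```

In this sketch the unified pool finishes both the geometry-heavy and the fill-heavy frame in 10 units versus 15 for the fixed split, because no units ever sit idle — which is exactly the argument for unifying the shaders.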

Discuss this entry

I couldn’t agree more that NVidia has set its sights on the supercomputing market, at least on par with the graphics/gaming market.  The Fermi chip has spectacular double-precision performance, and a number of other less obvious features, that will make it the obvious supercomputing platform for the next few years.

The bus that connects Fermi to the processor will not have the bandwidth of the integrated CPU/GPU systems from ATI and Intel, but on the other hand the dedicated memory on the graphics card will have tremendous bandwidth and low latency.

The CUDA and now OpenCL tools, and the debuggers that NVidia is putting out, should make GPU programming less of a black art—and more something that will be maintainable into the future.

Finally, I think that NVidia killed SGI by being more nimble and accessible.  SGI tried to survive by moving into supercomputing, but with gigantic expensive bespoke machines.  NVidia has seen it all before, and they know the dangers—commodity-scale supercomputing might be the answer.

By Thad Beier on 2009 10 07

Hi Jon, here are my suggestions for the timeline

1) May 2002 - Mark Harris launches GPGPU.org as an indirect consequence of the GeForce 4 and Radeon 9700 launches and the possibility of using the GPU as a compute device. G80, i.e. GeForce 8, debuted on November 8th, 2006.
2) February 2003 - TU Wien makes GPGPU a small part of its graphics curriculum. To my knowledge, this is the first time GPGPU was taught at a university.
3) November 2006 - nVidia introduces GeForce 8 [not April 2007].

Personally, I consider GPGPU to have the same effect on the world of supercomputing as x86 had on the supercomputing market while it was dominated by proprietary RISC architectures. It doesn’t matter who the vendor is [even though, if we compare all three players and their plans and infrastructure, the writing is on the wall]; what matters is that GPGPU chips are the solution to Gene Amdahl’s Law. And that is what the supercomputing industry needs.

The world is shaping up to be a very different beast than the one we predict. Four billion ARM-based chips shipped in 2008; what happens if we add a GPGPU-compliant core and just let the graphics chip become a self-sustained unit…

Many challenges lie ahead, but, as you wrote, whoever gets the bigger picture will get results.

BR,

Theo

By Theo Valich on 2009 10 11

I hope the PS4 and Xbox 720 become SUPER COMPUTERS!!
INCREDIBLE MOTION-TECH!! UNIMAGINABLE CAMERA (Like PS Eye and Natal), FABULOUS GAMES! (PS3 currently contains most of em, a few of em by Microsoft but the Wii are RUBBISH!!!)
Incredible HDD (6.5 TB (5 Hologram Discs), INCREDIBLE RAM (200-300GB), INCREDIBLE Vibrations, INCREDIBLE MEDIA (720= Optical Hologram Discs, PS4=Blu-rayed Hologram Discs), INCREDIBLE UHD OLED COMPATIBILITY (BEYOND 1080p, BEYOND 200Hz Motion Flow Frame Rate, BEST COLOUR VIBRANCY EVER AND MORE!!!)

Actually Im becoming a bit carried away, I best stop typin, this is completely outta topic LOL! = P

By BalramRules on 2009 10 28