The new highs and lows?

Posted by Jon Peddie on February 27th 2013

New definitions are needed

The first quarter of 2013 has ushered in new levels for the high-end, and what might have been considered the low-end.

The high end currently belongs to Nvidia with its 2,688-core Titan GPU. But where does the low end belong? Is it the new PS4 with integrated graphics and unified memory? Or is it an ARM Cortex-A15-based Chromebook?

I think the bar has been lifted for them all. Nvidia claims its ARM thingie has the best graphics, and Qualcomm just can't seem to agree with that. Nvidia thinks its high-end Titan is the killer machine (price-wise and performance-wise); AMD only agrees with half of that. Intel cleverly judges itself by itself and says it is 40× better than it used to be, time scales notwithstanding.

AMD and Intel decided about two years ago that Moore's law favored integration. They had (and have) decades of data to prove it and from which to draw forecast curves. The software suppliers, with the exception of a handful of game developers, have been their unwitting allies.

Over in the handheld world, more integration has been the best friend the SoC suppliers could have. Mobile applications have proliferated; they use all the horsepower the SoCs can deliver, and the demand for more is so far unabated.

The mobile segment is also integrating radios, which is a tricky proposition from two points of view. The CPU/GPU/DSP portions of an SoC like the Snapdragon 800 or Nvidia Tegra 4 move at a rapid yearly cadence. The modems, however, move at a four-year or longer cadence. Add to that the qualification period and you can end up with an integrated part carrying three- to five-year-old technology. A case in point: how many "state of the art" SoCs are still using the four-year-old ARM Cortex-A9? The new Tegra 4i, Nvidia's first integrated SoC, comes to mind, as does Qualcomm's integrated Snapdragon S4 or 800 with an ARMv7 core. But the IP numbers don't tell the story; they only specify the instruction set, and instruction sets don't change that quickly. Ever hear of x86?

So defining a part by cores, clocks, processes, architecture, or die size is getting really difficult. No, not "getting": it IS really difficult. It's a little easier at the high end, where you can use one or two benchmarks and count cores and clocks. But at the lower ranges, where the devices are used in diverse, multi-use applications, it's all but impossible. Running one game on a mobile phone may give the semi supplier its 15 minutes of fame, but how many other apps is that phone used for? You've seen the list we published a while back.

A bewildering list of clever new products is being teed up for introduction at MWC in Barcelona. It's impossible (for us at least) to sort them out in real time. At the same time, Nvidia introduced the GK110-based Titan AIB, and if we had one (it's coming), it would take days to figure out its strengths. The GK110 is actually a by-product of Nvidia's Tesla efforts and can be found in the Titan super-duper computer (hence its name). As impressive as that is, we're not sure yet how it translates to graphics performance. I said yet.

I've been told the mobile phone handset builders have a simple metric: performance per milliwatt. They choose the "performance" based on the applications they think buyers of the phone will use; it's different for each handset builder. Then they evaluate on price. But the prices are so close it almost doesn't matter. In PC Land it is benchmarks at the top and price at the bottom. Pretty simple (and, to my way of thinking, simple-minded).
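The handset builders' metric could be sketched roughly like this (all SoC names, app scores, weights, and power figures below are invented for illustration, not real measurements):

```python
# Sketch of a performance-per-milliwatt ranking: each handset builder
# weights app performance differently, then divides by power draw.
# Every number here is hypothetical, purely to show the arithmetic.

def perf_per_mw(app_scores, weights, power_mw):
    """Weighted performance score divided by average power draw in mW."""
    score = sum(app_scores[app] * w for app, w in weights.items())
    return score / power_mw

# (app benchmark scores, average power in mW) -- made-up figures
socs = {
    "SoC-A": ({"browser": 90, "game": 70, "video": 80}, 1200),
    "SoC-B": ({"browser": 75, "game": 95, "video": 70}, 1500),
}

# One builder's weighting; another builder would pick different weights
weights = {"browser": 0.5, "game": 0.2, "video": 0.3}

ranking = sorted(
    socs,
    key=lambda s: perf_per_mw(socs[s][0], weights, socs[s][1]),
    reverse=True,
)
print(ranking)  # -> ['SoC-A', 'SoC-B'] with these made-up numbers
```

The point of the sketch is that the ranking flips as the weights change: a builder targeting gamers would weight "game" heavily and could reach the opposite conclusion from the same silicon.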

Trying to get a crossover of the mobile world evaluation parameters with the PC is like judging a fish on how well it flies, or a bird on how well it swims. Sure there are flying fish, and birds that swim, but that’s not their primary application.

I don't think we can categorize products by their silicon metrics anymore. I think we need a new set of user definitions, and it's probably going to be large and messy, with lots of overlaps. Do you remember a few years ago when people used to carry two devices? They had a BlackBerry for email, and a phone that was for, well, calling and maybe music. Now all phones do all things, like a PC. Moreover, for as long as the PC has been around, the best segmentation we've been able to come up with is good, better, best. And we're all supposed to be so smart, with so much (big) data. The only company that's come even close to understanding PC users is Intel. Maybe Qualcomm has the best understanding of the mobile user. But as an industry we don't have a common lexicon, so we don't really know what a high is (or how high) or what a low is. Or at least I don't; maybe you do.
