News

Shaders—how much is enough?

Infinite for AI; for rendering, not as much.

Jon Peddie

In the evolution of computer graphics, “too much is not enough” was the rule. Now, we may be reaching its limit. An example is Jim Blinn’s observation, known as Blinn’s Law: rendering time remains constant despite increasing performance, because scene complexity grows to absorb the gains. Advances like larger polygon budgets, improved anti-aliasing, and the advent of ray tracing have enhanced image quality over time. Is more needed?

Graphics Cards

A few years back, I used to enjoy saying that in computer graphics, too much is not enough. Maybe today it is enough. CG was a black hole, and it would take however much memory, bandwidth, or FLOPS you could throw at it—and ask for more. But how many shaders are really needed to render a photorealistic, high-resolution, high-frame-rate image?

One aspect of that situation was Jim Blinn’s observation that rendering time doesn’t seem to change, no matter how much performance you apply—that became known as Blinn’s Law. The manifestation of it was that scene complexity, and therefore quality, increased with added performance, so the overall rendering time stayed the same. Game developers and artists got bigger polygon budgets, chunky images got smoother, anti-aliasing looked better, and ray tracing became possible. At our inaugural Siggraph luncheon in 2002, we posed the question, “Are we done yet?” The answer from our panel of experts was a resounding no. And that thinking went unquestioned ever since.

But we are done now, and if not, we could be at the three-sigma point of the asymptote of diminishing returns. The Nvidia GeForce RTX 4090 has 16,384 shaders (which Nvidia calls CUDA cores). When Nvidia introduced the first integrated GPU, the GeForce 256, in 1999, it had four pipelines, which we called shaders then. Moore’s law held over the past 25 years, with shader count doubling approximately every other year. That would project the 5090 to have over 33,000 shaders.
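For readers who want to check the arithmetic, here is a minimal back-of-the-envelope sketch in Python. It assumes only the two data points above (four pipelines in 1999 and the RTX 4090’s 16,384 CUDA cores, launched in 2022) plus a constant doubling rate extrapolated two more years; the variable names are mine, not Nvidia’s.

```python
import math

# Known data points cited in the article
base_year, base_shaders = 1999, 4            # GeForce 256: four pipelines
ref_year, ref_shaders = 2022, 16_384         # GeForce RTX 4090 CUDA cores

# Implied doubling period over that span
doublings = math.log2(ref_shaders / base_shaders)            # 12 doublings
years_per_doubling = (ref_year - base_year) / doublings      # ~1.9 years

# Extrapolate two more years to the next generation
projected = ref_shaders * 2 ** (2 / years_per_doubling)
print(f"Doubling period: {years_per_doubling:.1f} years")    # ~1.9
print(f"Projected next-gen shader count: {projected:,.0f}")  # ~33,800
```

Run as written, the implied doubling period comes out to roughly 1.9 years, and two more years of the same trend lands a little above 33,000 shaders, consistent with the projection above.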

Along with the increase in shaders, the screen size, resolution, frame rate, and color depth also increased, but not at the same rate. Memory density followed Moore’s law, and the cost per bit of RAM and storage decreased rapidly, making it economical to load big workloads into a GPU.

AI is still a black hole and will take all the resources it can get. But in CG, games, and cinema, is it really the same? Would an infinite number of shaders get an image rendered infinitely faster?

A simple conclusion escapes us. A leading game developer told me that shader performance still limits many aspects of rendering, such as the ability to achieve a high frame rate at a high resolution, with geometry, lighting, and subsurface scattering detail as precise as your eyes can see. We could easily consume another 10× shader performance without “running out of improvements.”

A few details are limited not by performance but by our lack of sufficiently powerful algorithms: rendering faces, facial animation, body animation, character dialogue, and other problems that involve simulating human intelligence, emotion, and locomotion. Even if you gave us an infinitely fast GPU today, we still couldn’t create humans that were indistinguishable from reality.

Compared to past decades, when we had no clear idea of how we’d ever solve those problems, nowadays we generally think that AI trained on vast amounts of human interaction data (which Epic will only ever do under proper license!) will be able to bridge the gap to absolute realism, possibly even by the end of the decade. I’m not sure how much GPU performance that will require. Perhaps no more than today’s GPUs offer.

From the M&E world, I was told about two trends that have caught my attention—the democratization of AI chips (possibly getting more and more powerful on devices) and NeRFs (neural radiance fields, a deep-learning method for reconstructing a three-dimensional representation of a scene from sparse two-dimensional images).
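To make the NeRF idea a little more concrete, here is a toy sketch of the volume-rendering step at its core: sample points along a camera ray, ask a field function for density and color at each point, and alpha-composite the results into a pixel. The field function below is a made-up stand-in; in a real NeRF it would be a trained neural network, and the names and constants here are illustrative only.

```python
import numpy as np

# Toy stand-in for a trained NeRF network: maps 3D points to (density, RGB).
# A real NeRF would evaluate a learned MLP here.
def field(points):
    density = np.exp(-np.linalg.norm(points, axis=-1))   # denser near the origin
    color = np.clip(points * 0.5 + 0.5, 0.0, 1.0)        # position-derived RGB
    return density, color

def render_ray(origin, direction, near=0.5, far=4.0, n_samples=64):
    """Alpha-composite samples along one ray (the core NeRF rendering step)."""
    t = np.linspace(near, far, n_samples)
    points = origin + t[:, None] * direction               # sample positions
    density, color = field(points)

    delta = np.diff(t, append=far)                          # spacing between samples
    alpha = 1.0 - np.exp(-density * delta)                  # opacity of each segment
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alpha[:-1])))  # transmittance
    weights = trans * alpha
    return (weights[:, None] * color).sum(axis=0)           # composited pixel RGB

pixel = render_ray(np.array([0.0, 0.0, -3.0]), np.array([0.0, 0.0, 1.0]))
print(pixel)  # an RGB triple in [0, 1]
```

The training side of a NeRF simply adjusts the network so that pixels rendered this way match the sparse input photographs, which is what lets it fill in views it never saw.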

My M&E friend wondered whether traditional shading and rendering might eventually be replaced by advanced AI-assisted techniques leveraging these technologies, even in cinema and games, but doubted it will happen anytime soon. “There is a lot of room to improve render performance in the interim—maybe not infinitely but certainly by orders of magnitude,” he said. His guess was that you will still need to limit the problem space through optimizations rather than brute-forcing it—which suggests that you need to be able to control what gets rendered and how.

One of my benchmark developer friends was more pragmatic. “At some point, yes,” he said. There’s always a bottleneck somewhere in the system: sometimes it’s the memory bus, sometimes the CPU, and sometimes the GPU. One needs to design a balanced system.

So, it looks like too much is still not enough for a while longer, and, as in 2002, we’re not done yet.