Larrabee: destined to fail and still be the future

5 August 2008

Larrabee, Intel's graphics processor that wants to be a general-purpose processor, looks programmed to fail. From what Intel has said so far, the compromises made in favour of running regular code will make this thing look over-priced and under-resourced for games software. The chances are that it will be like the early days of the Sony PlayStation 2: games didn't look great on it because they weren't written to its strengths until somewhat later.

Long term, even an underperforming Larrabee may not be enough to preserve the graphics processor market for nVidia. Larrabee's architecture is a hint of what a future x86 processor will look like. If Intel gets its way, much of the processing that goes on today in a dedicated GPU will be sucked into the core processor. That Intel decided to go with plonking down loads of reworked Pentium cores onto the Larrabee die rather than designing a new core indicates that the company has its eye on the future of the x86 rather than today's GPU market.

Intel has concentrated on ease of programming by giving each core regular level-one and level-two caches instead of software-managed local memories. Caches are good for programmers, but they can get in the way when the emphasis is on making the most of memory bandwidth in graphics-intensive work. And memory bandwidth is precious on a GPU, particularly as power consumption becomes more of an issue.
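
To make the contrast concrete, here is a minimal CUDA sketch of the software-managed approach that today's GPUs demand - the kernel name and sizes are invented for illustration, not taken from Intel or nVidia. The programmer explicitly stages a tile of data into a small on-chip store, synchronises, and drains it back out; on Larrabee, ordinary loads and stores through the coherent L1 and L2 caches would be expected to do that staging implicitly.

    // Illustrative sketch only: kernel name and sizes are invented for this article.
    #include <cstdio>
    #include <cuda_runtime.h>

    #define TILE 256   // threads per block and size of the per-block scratchpad

    // Reverse each 256-element tile of the input, staging it through __shared__
    // memory - the software-managed local store that current GPUs expose. The
    // programmer decides what is resident and when; no cache decides for them.
    __global__ void reverse_tiles(float *data)
    {
        __shared__ float tile[TILE];           // explicitly managed local memory
        int base = blockIdx.x * TILE;
        int t    = threadIdx.x;

        tile[t] = data[base + t];              // stage the tile in from DRAM
        __syncthreads();                       // wait until the whole tile is resident
        data[base + t] = tile[TILE - 1 - t];   // drain it back in reversed order
    }

    int main()
    {
        const int n = 1 << 20;                 // a million floats, purely illustrative
        float *h = new float[n];
        for (int i = 0; i < n; ++i) h[i] = float(i);

        float *d;
        cudaMalloc(&d, n * sizeof(float));
        cudaMemcpy(d, h, n * sizeof(float), cudaMemcpyHostToDevice);

        reverse_tiles<<<n / TILE, TILE>>>(d);

        cudaMemcpy(h, d, n * sizeof(float), cudaMemcpyDeviceToHost);
        printf("first element after tile reversal: %.0f\n", h[0]);   // expect 255

        cudaFree(d);
        delete[] h;
        return 0;
    }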

In the short term, it seems that Intel has more of an eye on the people who have turned to GPUs to accelerate big applications than on the gaming community:

"The Larrabee architecture fully supports IEEE standards for single and double precision floating-point arithmetic. Support for these standards is a pre-requisite for many types of tasks including financial applications."

The approach taken by Intel follows a pattern similar to that of the i860, a processor from the early 1990s that started life as an experiment and became, briefly, one of the most popular processors in parallel supercomputers and high-end 3D graphics rendering. It turned up in the NeXT workstation and the Silicon Graphics Onyx RealityEngine.

Intel has never been that successful with discrete graphics. And it has had a few goes. The purchase of Chips & Technologies in the late 1990s did not go anywhere - it was at that time that ATI and nVidia came to the fore, seeing off Intel and companies such as 3DFX. The only way that Intel has made inroads is by integrating graphics into its chipsets. Even then, a lot of standalone GPUs are sold into PCs that already have integrated graphics.

Over time, however, software wins. The GPUs will gradually acquire more of the features of host processors. And Intel will have enough spare transistors come the 22nm generation to pull something like a Larrabee onto the main die, coupling it with maybe four or eight big processor cores in an asymmetric multiprocessor. In the next few years, Intel gets to work out how the two parts would talk to each other, so you could regard Larrabee as a dry run for that architectural shift.

You could then strip the GPU down to its essential parts: hardware for the things that are just too expensive to do in software. What hardware goes onto the standalone GPU will depend on whether Intel is right in thinking ray-tracing will take over or whether, more likely, games programmers will use hybrids of today's techniques and more processor-intensive algorithms. Other programmers can use interfaces such as OpenCL to make use of the extra cores if they don't want to program them directly.
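
As a flavour of the kind of processor-intensive, per-pixel work that argument turns on, here is a toy CUDA kernel that fires one primary ray per pixel and tests it against a single sphere. Everything in it - the scene, names and image size - is invented for illustration, and much the same kernel could be expressed through an interface such as OpenCL rather than written against the hardware directly.

    // Illustrative sketch only: scene, names and sizes are invented for this article.
    #include <cstdio>
    #include <cuda_runtime.h>

    struct Ray { float ox, oy, oz, dx, dy, dz; };   // origin and direction

    // Distance along the ray to the nearest hit on a sphere at (cx,cy,cz) with
    // radius rad, or -1 if the ray misses. Solves |o + t*d - c|^2 = rad^2 for t.
    __device__ float hit_sphere(const Ray &r, float cx, float cy, float cz, float rad)
    {
        float ox = r.ox - cx, oy = r.oy - cy, oz = r.oz - cz;
        float a = r.dx * r.dx + r.dy * r.dy + r.dz * r.dz;
        float b = 2.0f * (ox * r.dx + oy * r.dy + oz * r.dz);
        float c = ox * ox + oy * oy + oz * oz - rad * rad;
        float disc = b * b - 4.0f * a * c;
        if (disc < 0.0f) return -1.0f;
        return (-b - sqrtf(disc)) / (2.0f * a);
    }

    // One thread per pixel: fire an orthographic primary ray down -z and paint
    // the pixel white if it hits the sphere, black otherwise.
    __global__ void shade(unsigned char *img, int w, int h)
    {
        int x = blockIdx.x * blockDim.x + threadIdx.x;
        int y = blockIdx.y * blockDim.y + threadIdx.y;
        if (x >= w || y >= h) return;

        Ray r = { (x - w * 0.5f) / w, (y - h * 0.5f) / h, 0.0f,
                  0.0f, 0.0f, -1.0f };
        float t = hit_sphere(r, 0.0f, 0.0f, -2.0f, 0.4f);
        img[y * w + x] = (t > 0.0f) ? 255 : 0;
    }

    int main()
    {
        const int w = 512, h = 512;
        unsigned char *d_img;
        cudaMalloc(&d_img, w * h);

        dim3 block(16, 16), grid(w / 16, h / 16);
        shade<<<grid, block>>>(d_img, w, h);

        unsigned char *h_img = new unsigned char[w * h];
        cudaMemcpy(h_img, d_img, w * h, cudaMemcpyDeviceToHost);
        int lit = 0;
        for (int i = 0; i < w * h; ++i) lit += (h_img[i] != 0);
        printf("pixels covered by the sphere: %d of %d\n", lit, w * h);

        cudaFree(d_img);
        delete[] h_img;
        return 0;
    }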

In this environment, Intel does not need to be successful in graphics per se. It simply has to be able to convince graphics programmers that their long-term future lies in much more software-intensive rendering. AMD could follow a similar path, having bought ATI. If it does, and splits the GPU in two, then Intel's job gets easier. And nVidia's task is much tougher.