Spansion is trying to milk the attention that comes with supposedly green technologies for all it's worth. If you think today's release about EcoRAM sounds familiar, that's because it is. In June, Spansion talked about its deal with startup Virident and a plan to bung shedloads of flash memory into servers in a bid to save power. Today's 'story' is touted as revealing the architecture.
It does nothing of the sort. The only things I know now that I didn't before are that they need to put a memory controller into an x86 socket and that Linux, rather than Windows Server, is the target. I spent about 30 minutes on the phone yesterday on this one with Florian Bauch, who runs sales in Europe. The powers that be had not even told him what is going on with the EcoRAM.
First up, what makes sense about the plan. Since the Internet bubble popped in 2001, DRAM makers have been on a go-slow. Up to that point, they had been predicting 4Gbit parts would be on sale in the middle of this decade. Right now, they are barely able to get 2Gbit parts into production.
Flash, thanks to the rise of media players and digital cameras, has surged ahead. Samsung has claimed it will soon be able to put 64Gbit devices into production. You get a lot more memory capacity for the same money. On top of that, flash consumes less power than DRAM because it does not need to be continually refreshed.
So, in principle, you can shove a load more memory into a server for less money and expect the machine to consume less power. Where's the catch?
Write performance on flash memory sucks. Really badly. You do not want to write to flash often. It's not just slow, the cells wear out. This is why flash disks contain their own processors that move data blocks around to stop dead zones forming in the memory array, or at least delay the point at which that happens.
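To make the wear-levelling idea concrete, here is a toy sketch of the kind of logical-to-physical remapping a flash controller's firmware performs. The class name and policy are my own illustration, not any vendor's actual design: every rewrite of a logical block lands on the least-worn free physical block, so no one group of cells wears out far ahead of the rest.

```python
# Toy wear-levelling sketch (illustrative only): a logical->physical block
# map that always directs writes to the least-erased free physical block,
# spreading cell wear across the whole array.
class WearLeveller:
    def __init__(self, num_blocks):
        self.erase_counts = [0] * num_blocks   # wear per physical block
        self.mapping = {}                      # logical -> physical block
        self.free = set(range(num_blocks))     # unmapped physical blocks

    def write(self, logical_block):
        # Retire the previously mapped physical block back to the free pool.
        old = self.mapping.pop(logical_block, None)
        if old is not None:
            self.free.add(old)
        # Pick the least-worn free block to take the new data.
        target = min(self.free, key=lambda b: self.erase_counts[b])
        self.free.remove(target)
        self.erase_counts[target] += 1
        self.mapping[logical_block] = target
        return target
```

Hammer one logical block eight times on a four-block array and the erase counts end up at two apiece, instead of one cell group absorbing all eight erases.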
The way round this problem if you are using flash as main memory is to simply try not to write to it too often. Spansion's idea is that companies like Google - which apparently is trying the system out - will put things like search indexes into the EcoRAM and use the regular DRAM for everything else. A system might have 128Gbyte of DRAM coupled with half a terabyte of EcoRAM.
The obvious question is: how does the system make sure that read-intensive data goes into EcoRAM? It was among the first ones I asked. Unix knows the difference between code and data memory, so you could easily steer the pages used for programs into flash. But what about those index files? To an operating system, they look just like any other data.
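The kind of placement policy an EcoRAM-aware allocator would need is easy to sketch, and the sketch also shows why the question is awkward: it assumes per-page read/write tallies that stock Linux does not keep for the allocator to consult. The function and threshold below are my own invention, purely to illustrate the decision.

```python
# Hypothetical placement policy (my own illustration, not Spansion's):
# route a page into flash-backed memory only when its observed history
# is overwhelmingly reads. The catch is that the OS has no such
# per-page read/write counters for ordinary data.
def choose_tier(reads, writes, read_ratio_threshold=0.99):
    """Return 'flash' for read-mostly pages, 'dram' for everything else."""
    total = reads + writes
    if total == 0:
        return "dram"              # no history yet: play it safe
    if reads / total >= read_ratio_threshold:
        return "flash"             # e.g. a page of a search index
    return "dram"                  # write-heavy data stays in DRAM
```

A page read a thousand times and written once clears the bar; a page with even a modest write fraction does not, and a brand-new page defaults to DRAM.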
The second question is: if you have a shadow memory controller occupying the socket a processor normally uses, how does it cooperate with the standard DRAM controller on the blade's chipset? Spansion is not saying but, as the first implementation will be for AMD processors, my guess is that they are using HyperTransport to shovel data in and out of the flash memory.
The third question is: if you do all this, how much juice do you actually save? When they first appeared, solid-state drives were meant to save scads of electricity compared with their evil, rotating-disk cousins. Unfortunately, many of the claims for SSDs were overblown and based on simplistic assumptions about the way in which flash memory consumes power. The power advantage of solid-state disks becomes more apparent in storage arrays, not in a blade's local hard drive. That is because, with the better read performance of flash, you don't have to gang up hundreds of almost-empty drives just to get decent I/O throughput.
I'd love to know the answers to those questions as they mark the difference between EcoRAM being a lame duck and something that really does save on server power. Actually, so would Bauch. Right now, I can't even tell you whether the memory managed by the EcoRAM controller looks like main memory to the processors on the blade or is simply some kind of glorified RAM-disk.
There is a paragraph in the release that suggests we are talking about a RAM-disk: "In terms of read latency, Spansion EcoRAM memory is expected to provide a read latency of 250 nanoseconds (ns) or less, compared to hard disk drives, which have far greater multi-millisecond latencies, or solid state drives which have multi-microsecond latencies. At the same time, write bandwidth capability of the architecture is projected at up to 300 MB/s, making it well-suited for read-intensive workloads in vertical market segments."
DRAM read latency is generally closer to 50ns, and pipelining gives you better effective bandwidth than you typically get from flash. So you may be in the position where, to get the performance, you have to keep copying pages from flash into DRAM and then back to disk when they get 'dirty' from too many writes. That implies the system will be doing a lot more power-sapping copy operations than you might expect from the material released so far by either Spansion or Virident.
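The copy traffic described above can be modelled in a few lines: a small DRAM cache in front of a flash tier, where every miss costs a copy up and every eviction of a modified page costs a copy back out. The class and counters are illustrative assumptions of mine, not anything Spansion or Virident have described.

```python
# Minimal model of the implied copy traffic (illustrative only): an
# LRU-managed DRAM cache in front of a flash tier. Misses copy a page
# up; evicting a 'dirty' (modified) page copies it back out.
from collections import OrderedDict

class TieredPager:
    def __init__(self, dram_pages):
        self.capacity = dram_pages
        self.dram = OrderedDict()      # page -> dirty flag, in LRU order
        self.copies_in = 0             # flash -> DRAM transfers
        self.copies_out = 0            # dirty-page writebacks

    def access(self, page, write=False):
        if page in self.dram:
            self.dram.move_to_end(page)        # refresh LRU position
            self.dram[page] |= write           # mark dirty on a write
            return
        if len(self.dram) >= self.capacity:    # DRAM full: evict LRU page
            _, dirty = self.dram.popitem(last=False)
            if dirty:
                self.copies_out += 1           # power-sapping writeback
        self.copies_in += 1                    # copy the page up from flash
        self.dram[page] = write
```

Even a short access pattern with one write shows the effect: every working-set change becomes a copy, and every dirtied page eventually becomes a second one.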
Trying to use the EcoRAM as directly addressable DRAM seems to be too fraught with complications to be the approach that Spansion and Virident are trying. But the RAM-disk option will not save as much power as doing a ground-up design that takes account of the difference between read- and write-intensive data. Transactional memory, a proposal for future multiprocessor systems, might help here. But that's still years away from commercialisation.