SOI Embedded DRAM Boosts Performance of IBM Multi-Core Processors

At the International Solid-State Circuits Conference being held in San Francisco this week, IBM Corp. is announcing an on-chip memory technology that it claims has the fastest access times ever recorded for embedded DRAM. IBM expects this technology to be a major step forward in solving the multi-core processor memory bottleneck.

IBM said its solution entails replacing most of the embedded SRAM cache used to store information directly on computer chips with embedded DRAM built with silicon-on-insulator (SOI) technology. IBM says the technology effectively doubles microprocessor performance beyond what classical scaling alone can achieve and vastly improves microprocessor performance in multi-core designs and graphics applications.

IBM’s paper describes a 65nm prototype embedded DRAM with a latency of 1.5 ns and a cycle time of 2 ns. This memory performance is an order of magnitude faster than today's DRAMs and competitive with embedded SRAM that is typically used for microprocessor cache memory.

While SRAM uses six transistors per bit, DRAM uses one transistor and one capacitor. A typical commercial microprocessor such as Intel Corp.'s Core 2 Duo can devote more than 60% of its surface area to memory. Replacing that SRAM cache with DRAM cells taking up just one-third as much space will allow chip designers either to build smaller chips and reduce the "run lengths" of the interconnects that data must travel as it commutes around the chip, or to increase the amount of on-die memory by up to three times.
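As a back-of-the-envelope illustration of that trade-off, the short Python sketch below takes the article's round numbers (a cache occupying 60% of the die and a roughly three-to-one cell-density advantage for eDRAM) and computes the two options a designer would weigh. The 150mm² starting die size is purely hypothetical.

# Illustrative sketch of the area trade-off described above.
# The 60% cache fraction and the ~3x density gain are the article's
# round numbers; the 150 mm^2 starting die size is hypothetical.

def edram_tradeoff(die_area_mm2, cache_fraction=0.60, density_gain=3.0):
    """Return (shrunk die area, cache multiplier at constant die size)."""
    cache_area = die_area_mm2 * cache_fraction
    logic_area = die_area_mm2 - cache_area

    # Option 1: keep the same amount of cache and shrink the die,
    # shortening interconnect run lengths in the process.
    shrunk_die = logic_area + cache_area / density_gain

    # Option 2: keep the die size and fill the freed area with more cache.
    cache_multiplier = density_gain

    return shrunk_die, cache_multiplier

shrunk, mult = edram_tradeoff(die_area_mm2=150.0)
print(f"Same cache on a {shrunk:.0f}mm2 die, or {mult:.0f}x the cache on the original die")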

IBM is obviously choosing the path of increasing the on-chip memory rather than aiming for a smaller die at the current performance level. Subramanian Iyer, Development Director, IBM, said, "IBM is effectively doubling microprocessor performance beyond what classical scaling alone can achieve…For years, chipmakers have used SRAM on processors, but as chips grow smaller, SRAM is having a hard time keeping up. Problems are cropping up with current leakage, and designers would like to use eDRAM, which requires fewer transistors and is less leaky."

"A lot of people have been trying to do this," added Lisa Su, vice president for semiconductor research and development at IBM. "As we look into the processor roadmap, this is one of the most difficult things to solve. We were basically memory-limited in the high-power processors, so this has been very significant for us."

Random access memory speed is one of the biggest bottlenecks in system design, and one of the most persistent architectural problems is the cache miss. When the CPU tries to access data outside its local cache, it has to wait for that information to come from system-level memory. The CPU can spend a huge fraction of its execution time retrieving that information, and in some cases a single cache miss costs as much as 200 cycles.
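To see why a penalty of that size dominates, the standard average memory access time (AMAT) formula is enough. The sketch below uses the article's roughly 200-cycle figure, while the 3-cycle hit time and the miss rates are hypothetical examples.

# Standard AMAT formula: hit time + miss rate * miss penalty.
# The ~200-cycle penalty comes from the article; the 3-cycle hit time
# and the miss rates below are hypothetical.

def amat(hit_cycles, miss_rate, miss_penalty_cycles):
    return hit_cycles + miss_rate * miss_penalty_cycles

for miss_rate in (0.01, 0.05, 0.10):
    cycles = amat(hit_cycles=3, miss_rate=miss_rate, miss_penalty_cycles=200)
    print(f"miss rate {miss_rate:.0%}: average access takes about {cycles:.0f} cycles")

Even a miss rate of a few percent makes the average access several times slower than a cache hit, which is why adding more on-die memory pays off so directly.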

As a point of reference for the amount of embedded memory currently in use, IBM's upcoming Power6 CPUs use 8 Mbytes of SRAM cache and Intel Corp.'s Itanium processors use as much as 18 Mbytes.

IBM's announced method triples the amount of memory on a microprocessor, potentially doubling its performance. IBM believes it can put as much as 48 Mbytes of SOI embedded DRAM on a reasonably sized CPU when its 45nm technology becomes available in 2008. To put 24-36 Mbytes of memory on a processor using today's technology, "you would need a 600mm-squared die today. Using this (announced) technology you could put that much memory on a 300-350mm-squared die," said Subramanian Iyer.

"Processors are definitely cache starved, and as you go more towards multi-core processors, the need for memory integration becomes more acute," Iyer said. "There are some server chips that could not be made without this technology…Our entire processor road map is based on SOI," he added.

Semico Spin

From the memory perspective, this is a very significant announcement. Here we have a major breakthrough in microprocessor performance enabled by a shift to a different embedded memory technology. While the new memory technology itself is only possible because of the conversion to SOI, little of the performance increase results from shifting the fundamental logic circuitry into SOI. Most of the performance gain comes from the substantial increase in the density of the embedded memory while sacrificing little of the access speed of the embedded SRAM it replaces.

This increase in processor performance based on a combination of SOI and a new embedded memory architecture is very similar to AMD's conversion to SOI and its subsequent licensing of Innovative Silicon's embedded SOI memory, which we described in the Spin in January of last year. It is also worth noting that AMD was IBM's design partner in this recent announcement, although IBM's variation of SOI embedded DRAM is a different configuration from Innovative Silicon's design.

Semico's opinion of this latest announcement of embedded SOI memory is that we are now clearly in a new phase of memory development. Having had a ringside seat at a memory company in the early '90s as processor companies abandoned discrete SRAM cache and moved to embedded cache, my first reaction is to ask what has changed. What cost/performance consideration drives this new architectural change?

The significance of the IBM announcement is that it indicates a fundamental shift in the value proposition of memory technologies. The shift to embedded SRAM cache in the early '90s was technically possible because of the similarities in the CMOS SRAM process and the CMOS logic process, and the increase in die size was justifiable because of the resulting performance advantages gained by the single core processor architecture.

However, as the performance requirements for processors have increased, we have now reached the point at which over 60% of the surface area of a processor is dedicated to embedded SRAM cells. And the growing size of the memory core, not to mention its increasing architectural complexity, becomes more difficult to manage as we increase the number of processor cores.

From a memory perspective, IBM's announcement confirms that the value proposition of the memory technology is shifting again. While the AMD partnership with Innovative Silicon's SOI memory only hinted at this theme, the IBM announcement makes the point more forcefully: the performance of the memory is such a critical element in the processor's overall performance that the efficiency of the memory can become the primary consideration in selecting the manufacturing process for the microprocessor!

Semico has recently been invited to sponsor a memory conference to explore this theme of the changing value proposition for memory technologies and to focus attention on more of the new memory technologies. For more information on this conference, check the website at MemoryWorld.

Semico has two SOI reports on this topic. If you have any questions, please contact Mike Caldwell at 866-473-6426.
