Fully Buffered DIMM

Memory controller with differential serial connections to DDR2 FB-DIMMs. The AMB is visible in the center of each DIMM.

A Fully Buffered DIMM (FB-DIMM) is a type of memory module used in computer systems. It is designed to improve memory performance and capacity by allowing multiple memory modules to be connected to the memory controller over a serial interface rather than a parallel one. Unlike the parallel bus architecture of traditional DRAMs, an FB-DIMM has a serial interface between the memory controller and the advanced memory buffer (AMB). Conventionally, data lines from the memory controller have to be connected to the data lines of every DRAM module, i.e. via multidrop buses. As memory width increases together with access speed, the signal degrades at the interface between the bus and the device. This limits both speed and memory density, so FB-DIMMs take a different approach to solve the problem.

240-pin DDR2 FB-DIMMs are neither mechanically nor electrically compatible with conventional 240-pin DDR2 DIMMs. As a result, those two DIMM types are notched differently to prevent using the wrong one.

As with nearly all RAM specifications, the FB-DIMM specification was published by JEDEC.

Technology

FB-DIMM DDR2 vs DIMM DDR2

The fully buffered DIMM architecture introduces an advanced memory buffer (AMB) between the memory controller and the memory module. Unlike the parallel bus architecture of traditional DRAMs, an FB-DIMM has a serial interface between the memory controller and the AMB. This enables an increase in memory width without increasing the pin count of the memory controller beyond a feasible level. With this architecture, the memory controller does not write to the memory module directly; instead, writes go through the AMB. The AMB can thus compensate for signal deterioration by buffering and resending the signal.

The AMB can also offer error correction, without imposing any additional overhead on the processor or the system's memory controller. It can also use the Bit Lane Failover Correction feature to identify bad data paths and remove them from operation, which dramatically reduces command/address errors. Also, since reads and writes are buffered, they can be done in parallel by the memory controller. This allows simpler interconnects and, in theory, memory-technology-agnostic memory controller chips, so that different memory types (such as DDR2 and DDR3) could be used interchangeably.

This approach has downsides: it introduces latency to memory requests, it requires additional power for the buffer chips, and current implementations create a memory write bus significantly narrower than the memory read bus. This means workloads that perform many writes (such as high-performance computing) will be significantly slowed. However, this slowdown is nowhere near as bad as running short of memory capacity and falling back on significant amounts of virtual memory, so workloads that use extreme amounts of memory in irregular patterns might be helped by using fully buffered DIMMs.[citation needed]

Protocol

The JEDEC standard JESD206 defines the protocol, and JESD82-20 defines the AMB interface to DDR2 memory. The protocol is more generally described in many other places.[1][2][3][4][5] The FB-DIMM channel consists of 14 "northbound" bit lanes carrying data from memory to the processor and 10 "southbound" bit lanes carrying commands and data from the processor to memory. Each bit is carried over a differential pair, clocked at 12 times the basic memory clock rate, or 6 times the double-pumped data rate. For example, with DDR2-667 DRAM chips the channel operates at 4000 MHz. Every 12 cycles constitute one frame: 168 bits northbound and 120 bits southbound.
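
As a rough illustration of those numbers, the following Python sketch (the helper name `channel_rates` is hypothetical, not part of any standard) derives the channel clock and frame sizes from the base memory clock:

```python
# Sketch of FB-DIMM channel timing, based on the figures quoted above:
# 14 northbound and 10 southbound differential bit lanes, clocked at
# 12x the base memory clock, with 12 channel cycles per frame.

NORTHBOUND_LANES = 14
SOUTHBOUND_LANES = 10
CYCLES_PER_FRAME = 12

def channel_rates(memory_clock_mhz):
    """Return the channel clock (MHz) and frame sizes (bits) for a given base clock."""
    channel_clock_mhz = memory_clock_mhz * 12            # 12x the base memory clock
    nb_frame_bits = NORTHBOUND_LANES * CYCLES_PER_FRAME  # 14 * 12 = 168
    sb_frame_bits = SOUTHBOUND_LANES * CYCLES_PER_FRAME  # 10 * 12 = 120
    return channel_clock_mhz, nb_frame_bits, sb_frame_bits

# DDR2-667 has a base clock of about 333.33 MHz, so the channel runs near 4000 MHz.
print(channel_rates(333.33))   # -> (3999.96, 168, 120)
```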

One northbound frame carries 144 data bits, the amount of data produced by a 72-bit wide DDR SDRAM array in that time, and 24 bits of CRC for error detection. There is no header information, although unused frames include a deliberately invalid CRC.

One southbound frame carries 98 payload bits and 22 CRC bits. Two payload bits are a frame type, and 24 bits are a command. Depending on the frame type, the remaining 72 bits may be 72 bits of write data, two more 24-bit commands, or one more command plus 36 bits of data to be written to an AMB control register.
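
The bit budgets of both frame types can be summarized in a short sketch; the field names below are informal labels, not terms from JESD206:

```python
# Informal bit budgets for the two FB-DIMM frame types described above.

NORTHBOUND_FRAME = {
    "data": 144,   # one memory clock's worth of a 72-bit wide DDR array (72 x 2)
    "crc": 24,
}                  # total: 168 bits

SOUTHBOUND_FRAME = {
    "frame_type": 2,
    "command": 24,
    "variable": 72,   # write data, two more commands, or a command + 36-bit register write
    "crc": 22,
}                     # total: 120 bits

assert sum(NORTHBOUND_FRAME.values()) == 14 * 12   # fills 14 lanes for 12 cycles
assert sum(SOUTHBOUND_FRAME.values()) == 10 * 12   # fills 10 lanes for 12 cycles
```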

The commands correspond to standard DRAM access cycles, such as row select, precharge, and refresh commands. Read and write commands include only column addresses. All commands include a 3-bit FB-DIMM address, allowing up to 8 FB-DIMM modules on a channel.

Because write data is supplied more slowly than DDR memory expects it, writes are buffered in the AMB until they can be written in a burst. Write commands are not directly linked to the write data; instead, each AMB has a write data FIFO that is filled by four consecutive write data frames, and is emptied by a write command.
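
A minimal sketch of that buffering behaviour, treating the AMB's write path as a simple FIFO (an illustrative model, not the actual AMB design):

```python
from collections import deque

class WriteDataFifo:
    """Toy model of the AMB write data FIFO: four consecutive 72-bit write
    data frames are accumulated, and a later write command drains them as
    one DDR burst."""

    def __init__(self):
        self.frames = deque()

    def push_write_data(self, frame_bits):
        # Each southbound write data frame contributes 72 bits of write data.
        self.frames.append(frame_bits)

    def pop_burst(self):
        # A write command empties four buffered frames in a single burst.
        if len(self.frames) < 4:
            raise RuntimeError("write command issued before the data was buffered")
        return [self.frames.popleft() for _ in range(4)]
```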

Both northbound and southbound links can operate at full speed with one bit line disabled, by discarding 12 bits of CRC information per frame.
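
The arithmetic behind this failover mode can be checked directly against the frame sizes above (a sketch; `degraded_frame` is just an illustrative helper):

```python
# Disabling one bit lane shrinks a frame by 12 bits (one lane x 12 cycles),
# which is absorbed entirely by dropping 12 bits of CRC per frame.

def degraded_frame(lanes, payload_bits, crc_bits):
    reduced_crc = crc_bits - 12
    assert payload_bits + reduced_crc == (lanes - 1) * 12
    return payload_bits, reduced_crc

print(degraded_frame(14, 144, 24))   # northbound: 144 + 12 CRC = 156 bits on 13 lanes
print(degraded_frame(10, 98, 22))    # southbound:  98 + 10 CRC = 108 bits on 9 lanes
```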

Note that the bandwidth of an FB-DIMM channel is equal to the peak read bandwidth of a DDR memory channel (and this speed can be sustained, as there is no contention for the northbound channel), plus half of the peak write bandwidth of a DDR memory channel (which can often be sustained, if one command per frame is sufficient). The only overhead is the need for a channel sync frame (which elicits a northbound status frame in response) every 32 to 42 frames (2.5–3% overhead).
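
As a worked example of that bandwidth arithmetic, here is a Python sketch for a DDR2-667 channel (assuming the 72-bit width quoted above, which includes ECC, and treating the figures as peak rates):

```python
# Rough peak-bandwidth arithmetic for one FB-DIMM channel, per the text above.
# Assumes DDR2-667: 667 MT/s on a 72-bit wide (64 data + 8 ECC) DDR channel.

transfers_per_s = 667e6
bits_per_transfer = 72

ddr_read_peak = transfers_per_s * bits_per_transfer / 8 / 1e9   # ~6.0 GB/s incl. ECC
fbd_read_peak = ddr_read_peak        # full DDR read bandwidth on the northbound lanes
fbd_write_peak = ddr_read_peak / 2   # half the DDR bandwidth on the southbound lanes

# One channel sync frame every 32 to 42 frames costs roughly 2.5-3% of the channel.
overhead_low, overhead_high = 1 / 42, 1 / 32

print(f"read ~{fbd_read_peak:.1f} GB/s, write ~{fbd_write_peak:.1f} GB/s")
print(f"sync overhead {overhead_low:.1%} to {overhead_high:.1%}")
```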

Implementations

Intel adopted the technology for its Xeon 5000/5100 series and beyond, which it considered "a long-term strategic direction for servers".[6]

Sun Microsystems used FB-DIMMs for the Niagara II (UltraSparc T2) server processor.[7]

Intel's enthusiast system platform Skulltrail uses FB-DIMMs for its dual-CPU-socket, multi-GPU system.[8]

FB-DIMMs have 240 pins and are the same total length as other DDR DIMMs, but differ by having indents on both ends within the slot.

The cost of FB-DIMM memory was initially much higher than that of registered DIMMs, which may be one of the factors behind its limited acceptance. Also, the AMB chip dissipates considerable heat, leading to additional cooling problems. Although strenuous efforts were made to minimize delay in the AMB, there is some noticeable cost in memory access latency.[9][10][11]

History

As of September 2006, AMD had taken FB-DIMM off its roadmap.[12] In December 2006, AMD revealed in one of its slides that microprocessors based on the new K10 microarchitecture would support FB-DIMM "when appropriate".[13] In addition, AMD developed the Socket G3 Memory Extender (G3MX), which uses a single buffer for every four modules instead of one per module, to be used by Opteron-based systems in 2009.[14]

At the 2007 Intel Developer Forum, it was revealed that major memory manufacturers had no plans to extend FB-DIMM to support DDR3 SDRAM. Instead, only registered DIMMs for DDR3 SDRAM had been demonstrated.[15]

In 2007, Intel demonstrated FB-DIMMs with shorter latencies, CL5 and CL3.[16]

On August 5, 2008, Elpida Memory announced that it would mass-produce the world's first 16 GB FB-DIMM starting in Q4 2008;[17] however, as of January 2011, the product had not appeared and the press release had been deleted from Elpida's site.[18]

References

  1. Rami Marwan Nasr (2005). FBSim and the Fully Buffered DIMM memory system architecture. University of Maryland, College Park. http://www.ece.umd.edu/~blj/papers/thesis-MS-nasr--FBDIMM.pdf. Retrieved 2007-03-13. 
  2. Brinda Ganesh; Aamer Jaleel; David Wang; Bruce Jacob (February 2007). Fully-Buffered DIMM Memory Architectures: Understanding Mechanisms, Overheads and Scaling. Proc. 13th International Symposium on High Performance Computer Architecture (HPCA 2007). http://www.ece.umd.edu/~blj/papers/hpca2007.pdf. Retrieved 2007-03-13. 
  3. Dima Kukushkin. "Intel 5000 series: Dual Processor Chipsets for Servers and Workstations". Intel Corporation. http://idfemea.intel.com/2006/prague/download/MA_MATS004.pdf. Retrieved 2007-03-13. 
  4. "DDR2 Fully Buffered DIMM". Samsung Electronics. http://www.samsung.com/Products/Semiconductor/DDR_DDR2/DDR2SDRAM/Module/FBDIMM/M395T2953CZ4/ds_512mb_c_die_based_fbdimm_rev13.pdf. Retrieved 2007-03-13. 
  5. "TN-47-21 FBDIMM – Channel Utilization (Bandwidth and Power)". Micron Technology. 2006. Archived from the original on 2007-09-27. https://web.archive.org/web/20070927174559/http://download.micron.com/pdf/technotes/ddr2/tn4721.pdf. Retrieved 2007-03-13. 
  6. Intel server platform page
  7. Microprocessor Report: "Niagara 2 Opens the Floodgates", Harlan McGhan
  8. Intel Skulltrail Unleashed: Core 2 Extreme QX9775 x 2 - HotHardware
  9. Charlie Demerjian (2004-04-06). "There's magic in the Intel FB-DIMM old buffer". The Inquirer. http://www.theinquirer.net/default.aspx?article=15189. 
  10. Anand Lal Shimpi (2006-08-09). "Apple's Mac Pro: A Discussion of Specifications". http://www.anandtech.com/printarticle.aspx?i=2811. Retrieved 2007-03-13. 
  11. Anand Lal Shimpi (2006-08-16). "Apple's Mac Pro - A True PowerMac Successor". http://www.anandtech.com/printarticle.aspx?i=2816. Retrieved 2007-03-13. 
  12. "The Inquirer report". The Inquirer. http://www.theinquirer.net/default.aspx?article=34412. 
  13. (slide 5) Slides AMD Analyst Day 2006, December 14, 2006
  14. Adrian Offerman (2007-07-25). "AMD will double memory of Opteron processors". http://www.chiplist.com/AMD_will_double_memory_of_Opteron_processors/tree3f-article--45-/. Retrieved 2007-10-01. 
  15. Theo Valich (2007-09-26). "FB-DIMM is dead, RDDR3 is new king". http://www.theinquirer.net/inquirer/news/1014319/fb-dimm-dead-rddr3-king. Retrieved 2016-07-11. 
  16. Rick C. Hodgin (2007-10-31). "Intel's Skulltrail enthusiast platform running at 5.0 GHz". Archived from the original on 2012-04-18. https://web.archive.org/web/20120418224543/http://www.tgdaily.com/hardware-features/34636-intels-skulltrail-enthusiast-platform-running-at-50-ghz. Retrieved 2007-10-31. 
  17. All Bow Down Before the Mighty 16GB FB-DIMM!
  18. "New Room 2008 | Elpida Memory, Inc.". http://www.elpida.com/en/news/2008/index.html. 
