High-Bandwidth Memory (HBM)

High-Bandwidth Memory (HBM) is a memory interface technology that exploits the large number of signals made available by die-stacking technology to achieve very high peak bandwidth.

Motivation

See also: memory wall

The development of HBM arose from the need for considerably higher bandwidth and higher memory density. The technology is driven by bandwidth-hungry applications such as high-end graphics, networking (e.g., 100G+ Ethernet, TB+ silicon photonics), and high-performance computing. Over the last few decades, memory bandwidth has grown at a much slower rate than computing power, widening an already-significant bottleneck. HBM was designed to introduce a step-function improvement in memory bandwidth.

Overview

High-bandwidth memory leverages through-silicon vias (TSVs) to overcome some of the limitations of traditional memory interfaces such as DDR3 and DDR4. By stacking dies directly on top of one another, HBM achieves higher memory capacity in a smaller form factor than prior solutions such as DIMMs allow. The use of TSVs also enables higher power efficiency at the system level. The very wide stack interface is what produces the bandwidth advantage, as illustrated in the sketch below.
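To put the bandwidth advantage in perspective, the following is a minimal Python sketch that estimates peak bandwidth from interface width and per-pin data rate. The 1024-bit stack width reflects the standard HBM configuration of eight 128-bit channels; the 2.0 Gb/s per-pin data rate is an HBM2-class figure assumed purely for illustration, as is the DDR4-3200 comparison point.

<pre>
# Minimal sketch: estimating peak bandwidth of a single HBM stack from its
# interface parameters. Width and data-rate figures are illustrative assumptions.

def peak_bandwidth_gb_per_s(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak bandwidth in GB/s = (bus width in bits * per-pin data rate in Gb/s) / 8."""
    return bus_width_bits * data_rate_gbps / 8

# One HBM stack: 8 independent channels, each 128 bits wide.
stack_width = 8 * 128                                   # 1024 bits
print(peak_bandwidth_gb_per_s(stack_width, 2.0))        # 256.0 GB/s per stack

# Contrast with a single DDR4-3200 DIMM: 64-bit channel at 3.2 Gb/s per pin.
print(peak_bandwidth_gb_per_s(64, 3.2))                 # 25.6 GB/s per DIMM
</pre>

The wide-but-slow interface is the key trade-off: each pin runs at a modest data rate, but the sheer number of signals enabled by die stacking multiplies the aggregate bandwidth by an order of magnitude.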

Utilizing products


See also