High-Bandwidth Memory (HBM)

High-Bandwidth Memory (HBM) is a memory interface technology that exploits the large number of signals made available by die-stacking technology to achieve very high peak bandwidth.
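
The peak bandwidth of such a wide interface follows directly from its width and per-pin data rate. As a rough illustration only, assuming the first-generation HBM figures of a 1024-bit interface per stack running at a 1 Gb/s per-pin data rate (exact figures vary by generation and device):

\[
\text{peak bandwidth} = 1024\ \text{bits} \times 1\ \mathrm{Gb/s\ per\ pin} = 1024\ \mathrm{Gb/s} = 128\ \mathrm{GB/s\ per\ stack}
\]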

Motivation

See also: memory wall

The development of HBM arose from the need for considerably higher bandwidth and higher memory density. Drivers for the technology are high-performance applications such as high-end graphics, networking (e.g., 100G+ Ethernet, TB+ silicon photonics), and high-performance computing. Over the last few decades, however, memory bandwidth has increased at a much slower rate than computing power, widening a bottleneck gap that was already large. HBM was designed to introduce a step-function improvement in memory bandwidth.