High-bandwidth memory

High-Bandwidth Memory (HBM) is a memory interface technology that exploits the large number of signals available through die stacking technologies in order to achieve very high peak bandwidth.

Motivation

See also: memory wall

The development of HBM arose from the need for considerably higher bandwidth and higher memory density. Drivers for the technology are high-performance applications such as high-end graphics, networking (e.g., 100G+ Ethernet, TB+ silicon photonics), and high-performance computing. Unfortunately, over the last few decades memory bandwidth has increased at a much slower rate than compute performance, widening an already-large bottleneck gap. HBM was designed to introduce a step-function improvement in memory bandwidth.

Overview

High-bandwidth memory leverages through-silicon vias (TSVs) to overcome some of the limitations found in traditional memory interfaces such as DDR3 and DDR4. Generally speaking, HBM allows for higher-capacity memory by stacking dies tightly on top of one another, thereby also achieving smaller form factors than are possible with prior solutions such as DIMMs. The use of TSVs also allows for higher power efficiency at the system level.
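
The bandwidth advantage comes largely from interface width: the many signals exposed by die stacking let a wide, relatively slow interface move more data than a narrow, fast one. As a rough illustration, the sketch below computes peak bandwidth from bus width and per-pin data rate. The specific figures (a 1024-bit HBM2 stack interface at 2.0 Gbps per pin versus a 64-bit DDR4-3200 channel at 3.2 Gbps per pin) are typical published values assumed for the example, not taken from this article.

 def peak_bandwidth_gbs(bus_width_bits: int, pin_rate_gbps: float) -> float:
     """Peak bandwidth in GB/s: data pins x bits/s per pin, divided by 8 bits/byte."""
     return bus_width_bits * pin_rate_gbps / 8
 
 # Assumed, typical figures (not from this article):
 # an HBM2 stack exposes a 1024-bit interface at up to 2.0 Gbps per pin;
 # a DDR4-3200 channel is 64 bits wide at 3.2 Gbps per pin.
 hbm2_stack = peak_bandwidth_gbs(1024, 2.0)    # 256.0 GB/s per stack
 ddr4_channel = peak_bandwidth_gbs(64, 3.2)    # 25.6 GB/s per channel
 
 print(f"HBM2 stack: {hbm2_stack:.1f} GB/s")
 print(f"DDR4-3200 channel: {ddr4_channel:.1f} GB/s")

Despite the lower per-pin rate, the HBM2 stack delivers roughly ten times the peak bandwidth of the DDR4 channel under these assumptions, which is the trade-off that die stacking and TSVs make practical.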

Utilizing products

This list is incomplete; you can help by expanding it.

* AMD
** Fiji
** Polaris
** Vega
** Navi
* Google
** TPU v3

See also