Latest revision as of 23:23, 8 January 2021
High-Bandwidth Memory (HBM) is a memory interface technology that exploits the large number of signals available through die stacking technologies in order to achieve very high peak bandwidth.
Motivation
- See also: memory wall
The development of HBM rose from the need for considerably higher bandwidth and higher memory density. Drivers for the technology are high-performance applications such as high-end graphics, networking (e.g., 100G+ Ethernet, TB+ silicon photonics), and high-performance computing. Unfortunately, over the last few decades memory bandwidth has increased at a much slower rate than computing power, widening an already-large bottleneck. HBM was designed to introduce a step function improvement in memory bandwidth.
Overview
High-bandwidth memory leverages through-silicon vias (TSVs) to overcome some of the limitations found in traditional memory interfaces such as DDR3 and DDR4. Generally speaking, HBM allows for higher-capacity memory by stacking the dies tightly on top of each other, thereby also achieving smaller form factors than are possible with prior solutions such as DIMMs. The use of TSVs also allows for higher power efficiency at the system level.
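The bandwidth advantage comes from interface width: peak bandwidth is simply the bus width times the per-pin transfer rate. As a rough illustration (using the nominal JEDEC figures of a 1024-bit HBM2 stack at 2.0 Gbit/s per pin versus a 64-bit DDR4-3200 channel; the function name is illustrative):

```python
def peak_bandwidth_gb_s(width_bits: int, gbit_per_pin: float) -> float:
    """Peak bandwidth in GB/s: interface width (bits) x per-pin rate (Gbit/s) / 8."""
    return width_bits * gbit_per_pin / 8

# One HBM2 stack: 1024-bit interface at 2.0 Gbit/s per pin
hbm2 = peak_bandwidth_gb_s(1024, 2.0)   # 256.0 GB/s
# One DDR4-3200 channel: 64-bit interface at 3.2 Gbit/s per pin
ddr4 = peak_bandwidth_gb_s(64, 3.2)     # 25.6 GB/s
print(f"HBM2 stack: {hbm2} GB/s, DDR4-3200 channel: {ddr4} GB/s")
```

A single HBM2 stack thus delivers roughly ten times the peak bandwidth of a DDR4-3200 channel, despite a far lower per-pin rate, which is exactly the trade-off die stacking makes possible.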
Utilizing products
- AMD
  - Fiji
  - Polaris
  - Vega
  - Navi
- Google
  - TPU v3
- Intel
  - Kaby Lake G
  - Lake Crest
  - Spring Crest
  - Altera FPGAs
- NEC
  - SX-Aurora
- Nvidia
  - Pascal
  - Volta
  - Turing
  - Ampere
- Xilinx
  - FPGAs
This list is incomplete; you can help by expanding it.
See also
- Hybrid Memory Cube (HMC)