Lake Crest µarch
General Info
Arch Type | NPU
Designer | Nervana
Manufacturer | TSMC
Introduction | November 17, 2016
Process | 28 nm
Lake Crest is a neural processor microarchitecture designed by Nervana.
Process Technology
Lake Crest is fabricated on TSMC's 28 nm process.
Architecture
Lake Crest was designed from the ground up for deep learning. The architecture is tensor-based, meaning it is optimized to operate on blocks of data rather than on scalars (as a traditional Intel CPU would).
- Tensor-based architecture
- Nervana Engine
- Flexpoint number format (see the sketch below)
- No caches
  - Software explicitly manages all on-chip memory
- HBM2 memory
  - 32 GiB of in-package memory
  - 8 Tbit/s bandwidth
- 12 bi-directional high-bandwidth direct chip-to-chip interconnect links
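The page does not detail the Flexpoint format itself. As a rough illustration of the general idea (a block floating-point scheme in which all elements of a tensor share a single exponent and are stored as integer mantissas), the NumPy sketch below encodes and decodes a tensor with 16-bit mantissas and one shared exponent. The function names and the exponent-selection policy are illustrative assumptions, not Nervana's implementation; Flexpoint as publicly described also manages the shared exponent adaptively across training iterations rather than recomputing it from each tensor.

```python
import numpy as np

def flexpoint_encode(x, mantissa_bits=16):
    """Encode a float tensor as signed integer mantissas plus one shared exponent.

    Minimal block floating-point sketch in the spirit of Flexpoint
    (illustrative only): the shared power-of-two scale is chosen so the
    largest magnitude still fits in a signed `mantissa_bits`-bit integer.
    """
    max_mag = np.max(np.abs(x))
    if max_mag == 0:
        return np.zeros(x.shape, dtype=np.int16), 0
    limit = 2 ** (mantissa_bits - 1) - 1
    # Smallest exponent such that max_mag / 2**exp <= limit.
    exp = int(np.ceil(np.log2(max_mag / limit)))
    mantissas = np.round(x / 2.0 ** exp).astype(np.int16)
    return mantissas, exp

def flexpoint_decode(mantissas, exp):
    """Recover approximate float values from mantissas and the shared exponent."""
    return mantissas.astype(np.float64) * 2.0 ** exp

# Example: a small activation tensor quantized to 16-bit mantissas.
acts = np.array([0.12, -3.5, 7.25, 0.0009])
m, e = flexpoint_encode(acts)
print(m, e, flexpoint_decode(m, e))
```

The appeal of such a format is that multiplies and accumulates operate on integer mantissas, while the exponent is handled once per tensor rather than once per element.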
Block Diagram
Chip
Processing Cluster
Memory Hierarchy
- 32 GiB on-package HBM2
  - 1 TiB/s (see the conversion below)
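As a quick sanity check (plain arithmetic, not an additional spec), the 1 TiB/s figure here and the 8 Tbit/s figure under Architecture describe the same aggregate HBM2 bandwidth up to decimal/binary rounding:

```python
bits_per_second = 8e12                            # 8 Tbit/s (Architecture section)
bytes_per_second = bits_per_second / 8            # 1e12 B/s = 1 TB/s (decimal)
tebibytes_per_second = bytes_per_second / 2**40   # ~0.91 TiB/s, quoted as roughly 1 TiB/s
print(f"{tebibytes_per_second:.2f} TiB/s")
```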
Additional Shots
- Lake Crest Accelerator PCIe card (internal view)
References
- Intel Nervana AI Day, Naveen Rao, 2017