|predecessor link=nervana/microarchitectures/lake crest
}}
'''Spring Crest''' ('''SCR''') is the successor to {{\\|Lake Crest}}, a planned [[neural processor]] microarchitecture designed by [[Intel Nervana]].

Products based on Spring Crest are branded as the {{nervana|NNP|NNP L-1000 series}}.
=== Memory Subsystem ===
The memory subsystem is responsible for sourcing and sinking data between the compute blocks and the routing mesh. Each TPC contains 2.5 MiB of local scratchpad memory; with a total of 24 TPCs, there is 60 MiB of scratchpad memory on-die. The memory is highly banked and designed for simultaneous read and write accesses. The memory ports also provide native tensor transpose support: a tensor can be transposed simply by reading from and writing to memory, without any additional overhead. There is a total of 1.4 Tbps of bandwidth between the compute blocks and the scratchpad memory banks.
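A software analogy may help illustrate the zero-overhead transpose. The sketch below is an illustration only, using NumPy strided views; it does not model the actual hardware mechanism.

<syntaxhighlight lang="python">
import numpy as np

# Illustration only: Spring Crest performs the transpose in its memory
# ports; a strided view is the closest software analogy, since the
# transposed tensor is obtained without moving or copying any data.
a = np.arange(12, dtype=np.float32).reshape(3, 4).copy()  # tensor resident in memory
t = a.T                                                   # transpose as a zero-copy view

assert t.shape == (4, 3)
assert np.shares_memory(a, t)   # same underlying buffer, nothing was copied
assert t[1, 2] == a[2, 1]       # reads are simply re-addressed
</syntaxhighlight>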
Memory is explicitly managed by the software in order to optimize [[data locality]] and [[data residency]] - this applies to both the on-die memory and the off-die HBM memory. Hardware management of memory has been kept to a minimum so as not to interfere with software optimizations. Message passing, memory allocation, and memory management are all under software control. The software can also directly transfer data between TPCs as well as between HBM and the local memory banks.
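As a rough sketch of what explicit software management implies (hypothetical; the names below are invented for illustration and are not Nervana's API), residency and eviction decisions are made by the program rather than by a hardware cache:

<syntaxhighlight lang="python">
# Hypothetical sketch: with software-managed memory there is no hardware
# cache policy; the program itself issues transfers and decides residency.
def run_layer(hbm_tiles, scratchpad_slots=2):
    scratchpad = []                          # models one TPC's local banks
    outputs = []
    for tile in hbm_tiles:
        if len(scratchpad) == scratchpad_slots:
            scratchpad.pop(0)                # software-chosen eviction
        scratchpad.append(list(tile))        # explicit HBM -> scratchpad copy
        outputs.append(sum(scratchpad[-1]))  # stand-in for TPC compute
    return outputs                           # results later written back to HBM

print(run_layer([(1, 2), (3, 4), (5, 6)]))   # [3, 7, 11]
</syntaxhighlight>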
== Network-on-Chip (NoC) ==
Spring Crest integrates a 2-dimensional mesh architecture. The chip has four pods, one in each quadrant. Each pod includes six TPCs and is directly linked to the nearest HBM. There are a total of three full-speed bidirectional meshes - one for HBM traffic, one for the external InterChip interconnects, and one for traffic between neighboring pods. The separate buses are designed to reduce interference between the different types of traffic. There is a total of 1.3 TB/s of bandwidth in each direction, for a total of 2.6 TB/s of cross-sectional bandwidth on the network.
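The quadrant arrangement can be sketched as follows. The exact grid shape is an assumption for illustration; the article only specifies four pods of six TPCs each.

<syntaxhighlight lang="python">
# Assumed layout for illustration: 24 TPCs on a 4 x 6 grid, split into
# four quadrant pods of six TPCs, each pod paired with its nearest HBM.
ROWS, COLS = 4, 6

def pod_of(row, col):
    return (row >= ROWS // 2) * 2 + (col >= COLS // 2)  # quadrant id, 0..3

pods = {p: [] for p in range(4)}
for r in range(ROWS):
    for c in range(COLS):
        pods[pod_of(r, c)].append((r, c))

assert all(len(tpcs) == 6 for tpcs in pods.values())  # six TPCs per pod
assert 2 * 1.3 == 2.6   # 1.3 TB/s per direction -> 2.6 TB/s cross-sectional
</syntaxhighlight>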
=== Pod ===
Within each pod are six TPCs. Each pod is connected to its own HBM, to the external InterChip interconnects (ICLs), and to the neighboring pods. Each TPC incorporates a crossbar router going in all four compass directions (North, South, East, and West). The three buses extend from each TPC router in all four directions. There are multiple connections between the meshes and the HBM interface, as well as multiple connections to the InterChip Links. To exploit parallelism, those buses operate simultaneously and at full speed, allowing many tasks to proceed at the same time. For example, some pods may be transferring data to other pods while some TPCs operate on the previous results of another TPC, while another TPC writes data back to memory and yet another reads memory back on-chip. Software scheduling plays a big role in optimizing network traffic for higher utilization.
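A minimal sketch of the link set leaving one TPC router, per the description above (the structure is inferred from the text; the naming is invented):

<syntaxhighlight lang="python">
# Three buses (HBM, ICL, pod-to-pod), each extending from a TPC router in
# all four compass directions: 12 independent links per router.
MESHES = ("hbm", "icl", "pod")
DIRECTIONS = ("north", "south", "east", "west")

links = [(mesh, d) for mesh in MESHES for d in DIRECTIONS]
assert len(links) == 12

# Because the buses are distinct, an HBM fill heading west and a
# pod-to-pod transfer heading west travel on different links and
# do not contend with one another.
</syntaxhighlight>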
== Scalability ==
{{empty section}}
=== InterChip Link (ICL) ===
SerDes lanes on Spring Crest are grouped into quads. Each quad operates at 28 GT/s for a total bandwidth of 28 GB/s. Spring Crest features four InterChip Link (ICL) ports. Each ICL comprises four quads for a peak bandwidth of 112 GB/s. In total, with all four ICL ports, Spring Crest has a peak aggregated bandwidth of 448 GB/s (3.584 Tb/s).
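The totals above are consistent with the following accounting (a sketch; the four-lanes-per-quad grouping and the bidirectional counting of the per-quad 28 GB/s figure are assumptions inferred from the quoted numbers):

<syntaxhighlight lang="python">
LANE_RATE_GBIT = 28          # 28 GT/s per SerDes lane ~ 28 Gb/s
LANES_PER_QUAD = 4           # assumption: a "quad" is four lanes
QUADS_PER_ICL = 4
ICL_PORTS = 4

quad_one_way_gb = LANE_RATE_GBIT * LANES_PER_QUAD / 8  # 14 GB/s per direction
quad_gb = 2 * quad_one_way_gb                          # 28 GB/s, both directions
icl_gb = quad_gb * QUADS_PER_ICL                       # 112 GB/s per ICL port
total_gb = icl_gb * ICL_PORTS                          # 448 GB/s aggregate

assert (quad_gb, icl_gb, total_gb) == (28.0, 112.0, 448.0)
assert total_gb * 8 == 3584                            # i.e. 3.584 Tb/s
</syntaxhighlight>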
== Package ==
* 60 mm x 60 mm package
** 6-2-6 layer stackup
* 1200 mm² [[CoWoS]]

:[[File:intel nnp-l chip.png|500px]]
=== PCIe ===
{{empty section}}
=== OAM ===
{{empty section}}
== Die ==
:[[File:spring crest floorplan.png|600px]]
== See also ==
* Intel {{intel|Spring Hill|l=arch}}