|name=Spring Crest
|designer=Nervana
|designer 2=Intel
|manufacturer=Intel
|introduction=2019
|predecessor link=nervana/microarchitectures/lake crest
}}
'''Spring Crest''' ('''SCR''') is the successor to {{\\|Lake Crest}}, a training [[neural processor]] microarchitecture designed by [[Intel Nervana]] for the data center and workstations. With the acquisition of [[Habana Labs]], Spring Crest was discontinued.

Products based on Spring Crest are branded as the {{nervana|NNP|NNP L-1000 series}}.
=== Memory Subsystem ===
The memory subsystem is in charge of sourcing and sinking data for the compute blocks as well as the routing mesh. Each TPC contains 2.5 MiB of local scratchpad memory; with a total of 24 TPCs, there is 60 MiB of scratchpad memory on-die. The memory is highly banked and multi-ported, designed for simultaneous read and write accesses. The memory ports provide native tensor transpose support - in other words, a tensor transpose can be performed by simply reading from and writing to memory, without any additional overhead. There is a total of 1.4 Tbps of bandwidth between the compute blocks and the scratchpad memory banks.
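The headline capacity figure follows directly from the per-TPC numbers. Below is a minimal arithmetic check, with the assumption (not stated above) that the 1.4 Tbps figure is a chip-wide aggregate shared across all 24 TPCs:

<syntaxhighlight lang="python">
# Worked check of the scratchpad figures quoted above.
TPC_COUNT = 24
SCRATCHPAD_PER_TPC_MIB = 2.5       # MiB of local scratchpad per TPC
TOTAL_SCRATCHPAD_BW_TBPS = 1.4     # compute <-> scratchpad bandwidth (Tbps, aggregate)

total_scratchpad_mib = TPC_COUNT * SCRATCHPAD_PER_TPC_MIB
print(total_scratchpad_mib)        # 60.0 MiB on-die, matching the total above

# Assumption: the aggregate bandwidth is split evenly across the 24 TPCs.
per_tpc_bw_gbps = TOTAL_SCRATCHPAD_BW_TBPS * 1000 / TPC_COUNT
print(round(per_tpc_bw_gbps, 1))   # ~58.3 Gb/s per TPC under that assumption
</syntaxhighlight>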
  
 
Memory is explicitly managed by the software to optimize [[data locality]] and [[data residency]] - this applies to both the on-die memory and the off-die HBM memory. Hardware management of memory has been kept to a minimum in order not to interfere with software optimizations. Message passing, memory allocation, and memory management are all under software control. The software can also directly transfer data between TPCs as well as between HBM and the local memory banks.
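As a purely illustrative sketch of what explicit software-managed residency looks like in practice (none of the function names below are real Nervana APIs; they stand in for whatever DMA and compute primitives the software stack exposes), a runtime would typically double-buffer tiles between HBM and a TPC's scratchpad so transfers overlap with compute:

<syntaxhighlight lang="python">
# Hypothetical illustration of software-managed double buffering between HBM
# and a TPC's local scratchpad; dma_copy_async, wait, and compute are stand-ins.
def process_tiles(tiles, dma_copy_async, wait, compute):
    """Overlap HBM->scratchpad transfers with compute on the previous tile."""
    if not tiles:
        return
    buffers = [0, 1]                                  # two scratchpad regions
    pending = dma_copy_async(tiles[0], buffers[0])    # prefetch the first tile
    for i in range(len(tiles)):
        wait(pending)                                 # current tile has landed
        if i + 1 < len(tiles):                        # start the next transfer early
            pending = dma_copy_async(tiles[i + 1], buffers[(i + 1) % 2])
        compute(buffers[i % 2])                       # software controls residency and timing
</syntaxhighlight>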
  
 
== Network-on-Chip (NoC) ==
[[File:spring crest pod block diagram.svg|right|400px]]
Spring Crest integrates a 2-dimensional mesh architecture. The chip has four pods, one in each quadrant; the pods serve to localize data movement and reuse. Each pod includes six TPCs and is directly linked to the nearest physical HBM. There are a total of three full-speed bidirectional meshes - one each for the HBM, the external InterChip interconnects, and the neighboring pods. The separate busses are designed to reduce interference between the different types of traffic. There is a total of 1.3 TB/s of bandwidth in each direction, for a total of 2.6 TB/s of cross-sectional bandwidth on the network.
 
 
 
:[[File:spring crest mesh.svg|650px]]
 
  
 
=== Pod ===
Within each pod are six TPCs. Each pod is connected to its own [[HBM]] stack, to the external InterChip interconnects (ICLs), and to neighboring pods. Each TPC incorporates a [[crossbar router]] going in all four compass directions (North, South, East, and West), and the three buses extend from each TPC router in all four directions. There are multiple connections between the meshes and the HBM interface as well as multiple connections to the InterChip Links. In order to exploit parallelism, those buses operate simultaneously and at full speed, allowing many tasks to proceed at the same time. For example, some pods may be transferring data to other pods, while one TPC operates on the previous results of another TPC, another TPC writes data back to memory, and yet another TPC reads memory back on-chip. Software scheduling plays a big role in optimizing network traffic for higher utilization.
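The pod/quadrant organization can be summarized with a short sketch. The 6-row by 4-column grid below is an assumption made purely for illustration; only the 4 pods × 6 TPCs grouping comes from the description above:

<syntaxhighlight lang="python">
# Illustrative grouping of 24 TPCs into four quadrant pods.
# The 6-row x 4-column arrangement is assumed, not taken from the die floorplan.
ROWS, COLS = 6, 4

def pod_of(row, col):
    """Pod index 0..3 (quadrant) for a TPC at the given mesh coordinate."""
    return (row >= ROWS // 2) * 2 + (col >= COLS // 2)

pods = {}
for r in range(ROWS):
    for c in range(COLS):
        pods.setdefault(pod_of(r, c), []).append((r, c))

assert len(pods) == 4                                   # four pods
assert all(len(tpcs) == 6 for tpcs in pods.values())    # six TPCs per pod
</syntaxhighlight>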
  
 
== Scalability ==
Spring Crest has [[scale-out]] support; topologies of up to 1024 nodes are supported. Chips are connected directly to each other via the InterChip Links (ICLs), custom-designed low-latency, low-overhead, reliable transmission links.
  
 
=== InterChip Link (ICL) ===
SerDes lanes on Spring Crest are grouped into quads. Each quad operates at 28 GT/s for a total bandwidth of 28 GB/s. Spring Crest features four InterChip Link (ICL) ports, each comprising four quads, for a peak bandwidth of 112 GB/s per port. In total, with all four ICL ports, Spring Crest has a peak aggregate bandwidth of 448 GB/s (3.584 Tb/s).
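A worked check of the link-bandwidth arithmetic, taking the per-quad 28 GB/s figure quoted above at face value:

<syntaxhighlight lang="python">
# Worked check of the ICL bandwidth figures quoted above.
QUAD_BW_GBS = 28              # GB/s per SerDes quad (as quoted)
QUADS_PER_ICL = 4
ICL_PORTS = 4

icl_bw_gbs = QUADS_PER_ICL * QUAD_BW_GBS       # 112 GB/s per ICL port
total_bw_gbs = ICL_PORTS * icl_bw_gbs          # 448 GB/s across all four ports
total_bw_tbs = total_bw_gbs * 8 / 1000         # 3.584 Tb/s

print(icl_bw_gbs, total_bw_gbs, total_bw_tbs)  # 112 448 3.584
</syntaxhighlight>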
 
The ICL links come with a fully programmable router built-in. It is designed for glue-less connections in various topologies, including a [[ring topology]], a [[fully connected topology]], and a [[hybrid cube mesh topology]]. Other topologies are also possible. There is support for virtual channels and priorities for traffic management and complex topologies as well as to avoid deadlocks. There is also built-in support for direct transfer to local memory.
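As a simple illustration of the kind of glue-less topology the built-in router enables, the sketch below wires chips into a bidirectional ring using two of the four ICL ports per chip; the port numbering and pairing are assumptions for illustration, not a documented mapping:

<syntaxhighlight lang="python">
# Illustrative ring topology over the ICLs; port numbering is an assumption.
def ring_links(num_chips):
    """Return (chip, port, neighbor_chip, neighbor_port) tuples connecting
    num_chips devices in a ring: port 0 goes to the next chip's port 1."""
    return [(chip, 0, (chip + 1) % num_chips, 1) for chip in range(num_chips)]

# Topologies of up to 1024 nodes are supported, e.g. a 1024-chip ring:
assert len(ring_links(1024)) == 1024
</syntaxhighlight>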
 
  
 
== Package ==
[[File:intel nnp-l chip.png|right|thumb]]
* 60 mm x 60 mm package
** 6-2-6 layer stackup
* 1200 mm² [[CoWoS]]

{| class="wikitable"
|-
! Front !! Back
|-
| [[File:spring crest package (front).png|350px]] || [[File:spring crest package (back).png|350px]]
|}
 
  
 
=== PCIe ===
* TDP: 300 W (max)
 
 
 
=== OAM ===
Spring Crest is available as a mezzanine board based on the [[OCP Accelerator Module]] (OAM) design specification. The Spring Crest NNP-T OAM module comes with 16 ICL SerDes ports, each of which is x4 lanes wide. The OAM specification defines up to seven x16 SerDes ports in order to support multiple interconnect topologies. In the NNP-T's case, the 16 x4 SerDes are combined into 4 x16 (4×4) SerDes, corresponding to SerDes ports 1, 3, 4, and 6 in the specification; SerDes 2, 5, and R are [[not connected]].
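The lane accounting above can be summarized in a short sketch; the list of connected OAM SerDes ports comes from the text, while the variable names are purely illustrative:

<syntaxhighlight lang="python">
# Lane accounting for the Spring Crest NNP-T OAM module as described above.
LANES_PER_QUAD = 4
SERDES_QUADS = 16                              # 16 x4 ICL SerDes ports on the module

total_lanes = SERDES_QUADS * LANES_PER_QUAD    # 64 lanes in total
x16_ports = SERDES_QUADS // 4                  # combined into 4 x16 (4x4) SerDes

connected_oam_serdes = [1, 3, 4, 6]            # OAM spec ports used by the NNP-T
not_connected = [2, 5, "R"]

assert x16_ports == len(connected_oam_serdes) == 4
print(total_lanes, x16_ports)                  # 64 4
</syntaxhighlight>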
 
 
* TDP: 375 W (max)
 
 
 
<gallery heights=300px widths=625px>
spring crest mezzanine card (front).png
spring crest ocp board (front).png
spring crest ocp board (back).png
</gallery>
 
  
 
== Die ==
:[[File:spring crest floorplan.png|class=wikichip_ogimage|600px]]
  
  
 
:[[File:spring crest floorplan (annotated).png|600px]]
 
== See also ==
 
* Intel {{intel|Spring Hill|l=arch}}
 
 
== Bibliography ==
 
* {{bib|hc|31|Intel}}
 
* {{bib|linley|Spring 2019|Intel}}
 
