{{intel title|Spring Hill|arch}}
 
{{microarchitecture
|atype=NPU
|l3 per=Slice
}}
'''Spring Hill''' ('''SPH''') is a [[10 nm]] microarchitecture designed by [[Intel]] for their [[inference]] [[neural processors]]. Spring Hill was developed at Intel's Israel Development Center (IDC) in Haifa.
  
 
Spring Hill-based products are branded as the {{nervana|NNP-I}} 1000 series.
 
== Release date ==
 
Spring Hill was formally announced in May 2019. The chip entered production on November 12, 2019.
 
  
 
== Process technology ==
Spring Hill is fabricated on Intel's [[10 nm]] process.
  
 
== Architecture ==
Spring Hill leverages the existing {{intel|Ice Lake (Client)|Ice Lake|l=arch}} SoC design and makes extensive reuse of IP throughout the chip.
 
* Uses {{intel|Ice Lake (Client)|Ice Lake|l=arch}} as the basis for the SoC
** Leverages two {{intel|Sunny Cove|l=arch}} cores
** Leverages the {{intel|Ring Bus}} architecture
** Leverages the [[DVFS]] power controller
** Leverages the quad-channel 32-bit (128b) LPDDR4X-4200 controller
** Leverages the PCIe controller
* Incorporates 6 pairs of ICEs
** 12 inference compute engines (ICEs) in total
*** 4 MiB Deep SRAM cache
*** DL compute grid
**** 4K MACs
*** Tensilica Vision P6 DSP
** 3 MiB cache slice per pair
* 10 - 50 W
** [[M.2]], [[EDSFF]], [[PCIe]]
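
The chip-wide totals implied by this configuration can be cross-checked with simple arithmetic. The following sketch uses only the counts and sizes listed above (the LLC slice count comes from the memory organization section below).

<syntaxhighlight lang="python">
# Cross-check of chip-wide totals from the per-unit figures above.
ICES = 12                    # 6 pairs of inference compute engines
MACS_PER_ICE = 4 * 1024      # "4K MACs" per DL compute grid
DEEP_SRAM_PER_ICE_MIB = 4    # 4 MiB Deep SRAM cache per ICE
LLC_SLICES = 8               # LLC slice count (see Memory Organization)
LLC_SLICE_MIB = 3            # 3 MiB per slice

print(ICES * MACS_PER_ICE)            # 49152 MACs chip-wide
print(ICES * DEEP_SRAM_PER_ICE_MIB)   # 48 MiB Deep SRAM chip-wide
print(LLC_SLICES * LLC_SLICE_MIB)     # 24 MiB LLC chip-wide
</syntaxhighlight>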
 
  
 
=== Block Diagram ===
==== SoC Overview ====
:[[File:sph soc.svg|700px]]
  
 
==== Sunny Cove Core ====
See {{intel|sunny cove#Block diagram|Sunny Cove § Block diagram|l=arch}}.

==== Inference Compute Engine (ICE) ====
:[[File:sph ice.svg|700px]]
  
 
=== Memory Organization ===
* TCM
** 3 MiB
** 256 KiB/ICE (12 ICEs in total)
** ~68 TB/s
* Deep SRAM
** 48 MiB
** 4 MiB/ICE (12 ICEs in total)
** ~6.8 TB/s
* LLC
** 24 MiB
** 3 MiB/slice (8 slices in total)
** ~680 GB/s
* DRAM
** 32 GiB
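
As a back-of-the-envelope check, the sketch below multiplies out the per-unit capacities of the hierarchy above. The DRAM bandwidth figure is derived from the quad-channel LPDDR4X-4200 controller configuration quoted earlier; it is an inference from those numbers, not a quoted specification.

<syntaxhighlight lang="python">
# Totals for the memory hierarchy, from the per-unit figures above.
KIB, MIB = 1024, 1024 ** 2

tcm_total  = 12 * 256 * KIB   # 3 MiB of TCM chip-wide
sram_total = 12 * 4 * MIB     # 48 MiB of Deep SRAM chip-wide
llc_total  = 8 * 3 * MIB      # 24 MiB of LLC chip-wide

# DRAM bandwidth implied by the controller configuration:
# LPDDR4X-4200 over 4x32-bit = 128-bit = 16 bytes per transfer.
dram_bw_gbs = 4200e6 * 16 / 1e9   # = 67.2 GB/s (derived, not quoted)

print(tcm_total // MIB, sram_total // MIB, llc_total // MIB, dram_bw_gbs)
</syntaxhighlight>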
== Overview ==
[[File:spring hill overview.svg|right|325px]]
Spring Hill is [[Intel]]'s first-generation SoC [[microarchitecture]] for [[neural processors]] designed for the acceleration of inference in the [[data center]]. The design targets data center inference workloads with a performance-power efficiency of close to 5 TOPS/W (4.8 in practice) in a power envelope of 10-50 W in order to maintain a light PCIe-driven [[accelerator card]] form factor such as [[M.2]]. The form factor and power envelope are selected for their ease of integration into existing infrastructure without additional cooling or power capacity.
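
Purely as an illustration of the efficiency figure, the sketch below scales the quoted ~4.8 TOPS/W linearly across the 10-50 W envelope. Real silicon does not hold efficiency constant across its power range, so these are idealized numbers, not product specifications.

<syntaxhighlight lang="python">
# Idealized throughput across the power envelope, assuming the quoted
# ~4.8 TOPS/W held constant (an illustration only; it does not).
EFFICIENCY_TOPS_PER_W = 4.8

for watts in (10, 25, 50):
    print(f"{watts:>2} W -> ~{EFFICIENCY_TOPS_PER_W * watts:.0f} TOPS (idealized)")
</syntaxhighlight>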
  
=== DL compute grid ===
[[File:sph dl compute grid.svg|right|400px]]
The Deep Learning Compute Grid is a large 4D structure designed to provide 4 ways of parallelism. The grid itself is organized as a 32x32x4 grid capable of performing 4K MACs/cycle (int8). It supports [[half-precision floating-point]] (FP16) as well as 8-bit, 4-bit, 2-bit, and even 1-bit precision operations natively. The grid is designed such that data movement is minimized by broadcasting the input data across the entire grid at once. Likewise, within the grid, data reuse is maximized by shifting the data left and right as necessary. This is done through compile-time transformations of the network in order to achieve a better layout in the hardware.
 
 
 
The compute grid integrates a post-processing unit with hardware-hardened support for various non-linear operations and pooling. The compute grid is managed by a programmable control unit that can map the models in various ways across the grid. The exact way networks are mapped is pre-determined statically at compile time. Additionally, the control unit can perform various other memory and processing operations.

* DL Compute Grid
** Weights, 1.5 MiB
** Input Feature Maps (IFMs), 384 KiB
** OSRAM, 3 MiB

The compute grid is tightly connected to the high-bandwidth 256 KiB TCM, which is also connected to the vector processor.
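
A minimal sketch of the peak-MAC arithmetic follows. The grid dimensions and ICE count are from this article; the clock frequency is a hypothetical placeholder, not a published Spring Hill specification.

<syntaxhighlight lang="python">
# Peak int8 throughput implied by the grid organization above.
GRID_DIMS = (32, 32, 4)
macs_per_cycle = GRID_DIMS[0] * GRID_DIMS[1] * GRID_DIMS[2]  # = 4096 ("4K MACs")

FREQ_GHZ = 1.0   # HYPOTHETICAL clock, purely for illustration
ICES = 12
# 1 MAC = 2 ops (multiply + add); TOPS = ops/cycle * cycles/s / 1e12
tops = macs_per_cycle * 2 * ICES * FREQ_GHZ * 1e9 / 1e12
print(f"~{tops:.0f} int8 TOPS at {FREQ_GHZ} GHz (illustrative)")
</syntaxhighlight>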
  
 
=== Programmable vector processor ===
For additional flexibility, each ICE comes with a customized [[Cadence]] [[Tensilica Vision P6 DSP]]. This is a 5-slot VLIW 512-bit vector processor configured with two 512-bit vector load ports. It supports FP16 as well as 8-32b integer operations. This DSP was added in order to allow programmable support for operations beyond those offered by the compute grid. Intel says that they have customized the DSP with an additional set of custom instructions for the acceleration of various neural network models and various other operations that developers are likely to encounter in inference models.
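
The quoted 512-bit vector width implies the following lane counts per operand width. This is simple arithmetic on the stated width; actual issue rates depend on the DSP's slot configuration.

<syntaxhighlight lang="python">
# Lanes per 512-bit vector for the operand widths the DSP supports.
VECTOR_BITS = 512
for name, bits in [("int8", 8), ("FP16/int16", 16), ("int32", 32)]:
    print(f"{name}: {VECTOR_BITS // bits} lanes per 512-bit vector")
</syntaxhighlight>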
 
== Workload Scalability ==
 
With up to 12 ICEs per chip, workloads may be mapped onto a single ICE or across multiple ICEs, depending on their particular nature. For example, very large models could run across all 12 ICEs at a batch size of 1. Alternatively, smaller workloads may be optimized for throughput instead of latency; in such cases, multiple applications may be mapped across the various ICEs with slightly higher batch sizes.
 
 
<div>
 
<div style="float: left;">
 
'''Latency optimized:'''
 
:2 Applications, Batch size of 1:
 
 
:[[File:sph batch 1x2.svg|400px]]
 
</div>
 
<div style="float: left;">
 
'''Throughput optimized:'''
 
:6 Applications, Batch size of 4:
 
 
:[[File:sph batch 4x6.svg|400px]]
 
</div>
 
</div>
 
{{clear}}
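
A toy sketch of the two mapping strategies pictured above follows. The even partitioning and the <code>map_apps</code> helper are hypothetical illustrations of the idea, not a published scheduling scheme.

<syntaxhighlight lang="python">
# Hypothetical even partitioning of the 12 ICEs among applications.
ICES = list(range(12))

def map_apps(num_apps, batch_size):
    """Evenly partition the 12 ICEs among identical applications."""
    per_app = len(ICES) // num_apps
    return {f"app{i}": {"ices": ICES[i * per_app:(i + 1) * per_app],
                        "batch": batch_size}
            for i in range(num_apps)}

latency_optimized    = map_apps(num_apps=2, batch_size=1)   # 6 ICEs per app
throughput_optimized = map_apps(num_apps=6, batch_size=4)   # 2 ICEs per app
</syntaxhighlight>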
 
 
== Packaging ==
 
{|
|-
! Front !! Back
|-
| [[File:spring hill package (front).png|350px]] || [[File:spring hill package (back).png|350px]]
|}
 
  
 
== Board ==
[[File:spring hill board.JPG|right|thumb]]

=== M.2 ===
[[M.2]] board:

<gallery heights=200px widths=700px>
spring hill m.2 (front).png
spring hill m.2 (back).png
</gallery>

=== PCIe ===
Spring Hill comes in a PCIe [[accelerator card]] form factor.

=== EDSFF ===
Spring Hill also comes in an [[EDSFF]] form factor.

:[[File:sph board.jpg|700px]]
 
  
 
== Die ==