Revision as of 19:24, 15 October 2019

Spring Hill µarch
General Info
  Arch Type: NPU
  Designer: Intel
  Manufacturer: Intel
  Introduction: May 2019
  Process: 10 nm
  Core Configs: 2
  PE Configs: 8, 10, 12
Cache
  L3 Cache: 3 MiB/Slice

Spring Hill is a 10 nm microarchitecture designed by Intel for its inference-focused neural processors. It was developed at Intel's Israel Development Center (IDC) in Haifa.

Spring Hill-based products are branded as the NNP-I 1000 series.

Process technology

Spring Hill NPUs are fabricated on Intel's 10 nm process.

Architecture

This section is empty; you can help add the missing info by editing this page.

Block Diagram

SoC Overview

This section is empty; you can help add the missing info by editing this page.

Sunny Cove Core

See Sunny Cove § Block diagram.

Inference and Compute Engine (ICE)

This section is empty; you can help add the missing info by editing this page.

Memory Organization

  • LLC
    • 24 MiB
    • 3 MiB/slice (8 slices in total)
  • DRAM
    • 64 GiB
    • 2×64-bit or 4×32-bit LPDDR4X-4200
    • 67.2 GB/s

Overview

[Figure: Spring Hill SoC overview]

Spring Hill is Intel's first-generation SoC microarchitecture for neural processors that accelerate inference in the data center. The design targets data center inference workloads with a performance-power efficiency of close to 5 TOPS/W (4.8 in practice) within a power envelope of 10-50 W, in order to maintain a light PCIe-driven accelerator card form factor such as M.2. The form factor and power envelope are chosen for ease of integration into existing infrastructure without additional cooling or power capacity.
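The efficiency and power-envelope figures above imply a rough peak-throughput range. The following is illustrative arithmetic only: the 4.8 TOPS/W figure and the 10-50 W envelope come from the text, while the derived TOPS values are estimates, not quoted specifications.

```python
# Derive an implied throughput range from the quoted efficiency and
# power envelope. This assumes the efficiency holds across the range,
# which the article does not state.
EFFICIENCY_TOPS_PER_W = 4.8   # measured efficiency quoted in the text
POWER_ENVELOPE_W = (10, 50)   # target power envelope in watts

low, high = (EFFICIENCY_TOPS_PER_W * p for p in POWER_ENVELOPE_W)
print(f"Implied peak throughput: {low:.0f}-{high:.0f} TOPS")
```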

Spring Hill borrows heavily from the client Ice Lake SoC. To that end, Spring Hill features two full-fledged Sunny Cove big cores. The primary purpose of the big cores is to execute the orchestration software and the runtime logic determined ahead of time by the compiler. Additionally, since they support AVX-512 along with the AVX-512 VNNI extension for inference acceleration, they can run any desired user-specified code, providing an additional layer of programmability. In place of the traditional integrated graphics and additional cores, Intel integrated up to twelve custom inference and compute engines, attached to the ring bus in pairs. The ICEs are designed for inference workloads (see § Inference and Compute Engine (ICE)). Each ICE may run an independent inference workload, or multiple ICEs may be combined to handle larger models faster. Attached to each pair of ICEs and to the Sunny Cove cores are 3 MiB slices of last level cache, for a total of 24 MiB of shared on-die LLC. While the LLC is hardware-managed, there are software provisions that can be used to hint the hardware about expectations by dictating service levels and priorities.
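As a quick sanity check, the engine and cache figures above are mutually consistent. All of the constants in this sketch come from the article; the exact mapping of slices to ring stops is not spelled out here.

```python
# Cross-check the quoted ICE and LLC figures.
NUM_ICES = 12           # up to twelve inference and compute engines
ICES_PER_RING_STOP = 2  # ICEs attach to the ring bus in pairs
SLICE_MIB = 3           # LLC slice size quoted in the text
NUM_SLICES = 8          # total slices, per the Memory Organization section

ice_pairs = NUM_ICES // ICES_PER_RING_STOP
total_llc_mib = NUM_SLICES * SLICE_MIB

print(f"ICE ring stops: {ice_pairs}")             # six pairs of ICEs
print(f"Total shared LLC: {total_llc_mib} MiB")   # matches the quoted 24 MiB
```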

In order to simplify ICE-to-ICE, ICE-to-SNC, and even ICE-to-host communication, Spring Hill incorporates a special synchronization unit that allows for efficient communication between the units.

Spring Hill borrows a number of other components from Ice Lake, including the FIVR and the power management controller, which allows the ICEs and SNC cores to dynamically shift power to the various execution units depending on the available thermal headroom and the total package power consumption. Various power-related scheduling is also done ahead of time by the compiler. Feeding Spring Hill is an LPDDR4X memory controller that supports either dual-channel 64-bit or quad-channel 32-bit operation (128 bits in total) at rates up to 4200 MT/s, for a total memory bandwidth of 67.2 GB/s.
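The quoted bandwidth follows directly from the interface width and transfer rate; a quick check (using decimal GB, as is conventional for memory bandwidth, and noting that both channel configurations give the same total width):

```python
# Peak LPDDR4X bandwidth from the figures in the text.
transfer_rate_mts = 4200                  # mega-transfers per second
bus_width_bits = 2 * 64                   # dual-channel 64-bit (4 * 32 is equivalent)
bytes_per_transfer = bus_width_bits // 8  # 16 bytes per transfer

bandwidth_gbs = transfer_rate_mts * bytes_per_transfer / 1000  # MB/s -> GB/s
print(f"Peak memory bandwidth: {bandwidth_gbs:.1f} GB/s")      # matches 67.2 GB/s
```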

Inference and Compute Engine (ICE)

This section is empty; you can help add the missing info by editing this page.

Board

[Photo: Spring Hill board]

M.2 board:


[Photo: Spring Hill M.2 board]

Die

Bibliography

  • Intel, IEEE Hot Chips 31 Symposium (HCS) 2019.
