From WikiChip
{{intel title|Spring Hill|l=arch}}
{{microarchitecture
|atype=NPU
|l3 per=Slice
}}
'''Spring Hill''' is a [[10 nm]] microarchitecture designed by [[Intel]] for their [[inference]] [[neural processors]]. Spring Hill was developed by Intel's Israel Development Center (IDC) in Haifa.

Spring Hill-based products are branded as the {{nervana|NNP-I}} 1000 series.
− | |||
− | |||
− | |||
== Process technology ==

== Architecture ==
{{empty section}}
=== Block Diagram ===
==== SoC Overview ====
{{empty section}}
==== Sunny Cove Core ====
See {{intel|sunny cove#Block diagram|Sunny Cove § Block diagram|l=arch}}.
==== Inference Compute Engine (ICE) ====
{{empty section}}
=== Memory Organization ===
{{empty section}}
== Overview ==
[[File:spring hill overview.svg|right|450px]]
Spring Hill is [[Intel]]'s first-generation SoC [[microarchitecture]] for [[neural processors]] designed for the acceleration of inference in the [[data center]]. The design targets data center inference workloads with a performance-power efficiency of close to 5 TOPS/W (4.8 in practice) in a power envelope of 10-50 W in order to maintain a light PCIe-driven accelerator card form factor such as [[M.2]]. The form factor and power envelope are selected for ease of integration into existing infrastructure without additional cooling or power capacity.
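As a back-of-the-envelope illustration of the figures above, throughput at a given operating point is simply efficiency times power. This sketch assumes the cited ~4.8 TOPS/W holds across the envelope, which real silicon will not do exactly; it is illustrative arithmetic, not a product specification.

```python
# Illustrative only: throughput (TOPS) = efficiency (TOPS/W) x power (W).
# The article cites ~4.8 TOPS/W in practice within a 10-50 W envelope;
# actual efficiency varies with the operating point.
EFFICIENCY_TOPS_PER_W = 4.8

def throughput_tops(power_w: float) -> float:
    """Estimated throughput at a given board power, assuming flat efficiency."""
    return EFFICIENCY_TOPS_PER_W * power_w

print(throughput_tops(10))  # 48.0 TOPS at the bottom of the 10-50 W envelope
```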
Spring Hill borrows a lot from the client {{intel|Ice Lake (Client)|Ice Lake|l=arch}} SoC. To that end, Spring Hill features two full-fledged {{intel|Sunny Cove|l=arch}} [[big cores]]. The primary purpose of the [[big cores]] here is to execute the orchestration software and runtime logic determined by the compiler ahead of time. Additionally, since they come with {{x86|AVX-512}} along with the {{x86|AVX VNNI}} {{x86|extension}} for inference acceleration, they can be used to run any desired user-specified code, providing an additional layer of programmability.
Instead of the traditional integrated graphics and additional cores, Intel integrated up to twelve custom inference compute engines attached to the {{intel|ring bus}} in pairs. The ICEs have been designed for inference workloads (see [[#Inference Compute Engine (ICE)|§ Inference Compute Engine (ICE)]]). The ICEs may each run independent inference workloads, or they may be combined to handle larger models faster. Attached to each pair of ICEs and to each of the {{intel|Sunny Cove|SNC|l=arch}} cores are 3 MiB slices of [[last level cache]] for a total of 24 MiB of on-die shared LLC. While the LLC is hardware-managed, there are software provisions that can be used to hint the hardware about expectations by dictating service levels and priorities.
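The LLC totals above can be checked with a little arithmetic. The slice-per-agent mapping (one 3 MiB slice per ICE pair and per SNC core) is our reading of the text, chosen because it is the only mapping consistent with the stated 24 MiB total:

```python
# LLC organization as described above: 3 MiB slices on the ring bus,
# one slice per ring agent. With 12 ICEs grouped in pairs plus 2 Sunny
# Cove cores, that gives 8 agents -- consistent with 24 MiB / 3 MiB.
ICE_COUNT = 12
ICES_PER_PAIR = 2
SNC_CORES = 2
SLICE_MIB = 3

ring_agents = ICE_COUNT // ICES_PER_PAIR + SNC_CORES  # 6 pairs + 2 cores = 8
total_llc_mib = ring_agents * SLICE_MIB               # 8 x 3 MiB = 24 MiB
print(ring_agents, total_llc_mib)  # 8 24
```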
In order to simplify ICE-to-ICE, ICE-to-SNC, and even ICE-to-host communication, Spring Hill incorporates a special synchronization unit that allows for efficient communication between the units.
Spring Hill borrows a number of other components from {{intel|Ice Lake (client)|Ice Lake|l=arch}} including the [[FIVR]] and the power management controller, which allows the ICEs and SNC to dynamically shift power to the various execution units depending on the available thermal headroom and the total package power consumption. Various power-related scheduling is also done ahead of time by the compiler. Feeding Spring Hill is an [[LPDDR4x]] [[memory controller]] that supports either dual-channel 64-bit or quad-channel 32-bit (128-bit in total) with rates of up to 4200 MT/s for a total memory bandwidth of 67.2 GB/s.
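The 67.2 GB/s figure follows directly from the interface parameters given above, as a quick sketch shows:

```python
# Peak DRAM bandwidth from the interface width and transfer rate cited
# in the article (128-bit aggregate LPDDR4X interface at 4200 MT/s).

def peak_bandwidth_gb_s(bus_width_bits: int, transfer_rate_mt_s: int) -> float:
    """Peak bandwidth in GB/s = bytes per transfer x transfers per second."""
    bytes_per_transfer = bus_width_bits / 8                # 128 bits -> 16 bytes
    return bytes_per_transfer * transfer_rate_mt_s / 1000  # MT/s -> GT/s

print(peak_bandwidth_gb_s(128, 4200))  # 67.2 GB/s, matching the article
```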
== Inference Compute Engine (ICE) ==
{{empty section}}
== Board ==
{{empty section}}
== Die ==