From WikiChip
The Deep Learning Compute Grid is a large structure designed to provide four ways of parallelism. The grid itself is organized as a 32x32x4 arrangement capable of performing 4K MACs/cycle (int8). It natively supports [[half-precision floating-point]] (FP16) as well as 8-bit, 4-bit, 2-bit, and even 1-bit precision operations. The grid is designed to minimize data movement by broadcasting the input data across the entire grid at once. Likewise, within the grid, data reuse is maximized by shifting the data left and right as necessary. This is done through compile-time transformations of the network in order to achieve a better layout in the hardware.
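The broadcast-and-shift dataflow described above can be sketched in software. The grid size, the wrap-around shift policy, and all names below are illustrative assumptions, not Spring Hill's actual design: each cycle one input value is broadcast to every cell of a small MAC grid, and the resident weights are reused by shifting them across the grid instead of refetching them from memory.

```python
# Toy model of a broadcast-and-shift MAC grid (hypothetical; Spring Hill's
# real grid is 32x32x4 and operates on int8 data).

GRID_W, GRID_H = 4, 4  # small grid for illustration

def broadcast_mac(acc, weights, x):
    """One cycle: broadcast input x to all cells; each cell does acc += w * x."""
    return [[acc[r][c] + weights[r][c] * x for c in range(GRID_W)]
            for r in range(GRID_H)]

def shift_left(weights):
    """Maximize data reuse: shift each row left by one cell (wrap-around)."""
    return [row[1:] + row[:1] for row in weights]

# Accumulate two broadcast inputs, shifting the weights between cycles.
acc = [[0] * GRID_W for _ in range(GRID_H)]
w = [[(r + c) % 4 for c in range(GRID_W)] for r in range(GRID_H)]
acc = broadcast_mac(acc, w, 3)   # broadcast x=3 to the whole grid
w = shift_left(w)                # reuse the weights in shifted positions
acc = broadcast_mac(acc, w, 2)   # broadcast x=2
```

The key point the sketch illustrates is that the input is fetched once per cycle for the whole grid, while the weights stay resident and only move between neighboring cells.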
The compute grid integrates a post-processing unit with hardware-hardened support for various non-linear operations and pooling.
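As a rough software analogue of what such a post-processing unit hardens in hardware, the sketch below applies a non-linear activation (ReLU) followed by 2x2 max pooling. The function names, shapes, and the choice of these two specific operations are assumptions for illustration only.

```python
# Hypothetical software model of post-processing: ReLU then 2x2 max pooling.

def relu(fmap):
    """Element-wise non-linearity: clamp negative values to zero."""
    return [[max(0, v) for v in row] for row in fmap]

def max_pool_2x2(fmap):
    """Non-overlapping 2x2 max pooling over a 2D feature map."""
    return [[max(fmap[r][c], fmap[r][c + 1], fmap[r + 1][c], fmap[r + 1][c + 1])
             for c in range(0, len(fmap[0]), 2)]
            for r in range(0, len(fmap), 2)]

fmap = [[-1, 2, 0, 5],
        [3, -4, 1, -2],
        [0, 7, -6, 8],
        [-3, 1, 2, 4]]
pooled = max_pool_2x2(relu(fmap))  # -> [[3, 5], [7, 8]]
```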
The compute grid is managed by a programmable control unit that can map the models in various ways across the grid. The exact way networks are mapped is determined statically at compile time. Additionally, the control unit can perform various other memory and processing operations.
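Static mapping means all placement decisions are made before the network runs. A minimal sketch of that idea, assuming a simple proportional policy (the layer names, MAC counts, and the policy itself are hypothetical, not Spring Hill's actual compiler behavior):

```python
# Sketch of compile-time mapping: partition the grid's columns among layers
# ahead of time, so no mapping decisions happen at run time. All specifics
# here (proportional-to-MACs policy, layer names) are assumptions.

GRID_COLUMNS = 32

def map_layers(layers):
    """Assign grid columns to layers proportionally to their MAC counts."""
    total = sum(macs for _, macs in layers)
    mapping, start = {}, 0
    for name, macs in layers:
        cols = max(1, round(GRID_COLUMNS * macs / total))
        mapping[name] = range(start, min(start + cols, GRID_COLUMNS))
        start += cols
    return mapping

# The plan is fixed before execution begins.
plan = map_layers([("conv1", 200), ("conv2", 500), ("fc", 100)])
```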
− | |||
− | |||
− | |||
− | |||
− | The compute grid is | ||
=== Programmable vector processor ===
Facts about "Spring Hill - Microarchitectures - Intel"
* codename: Spring Hill
* core count: 2
* designer: Intel
* first launched: May 2019
* instance of: microarchitecture
* manufacturer: Intel
* name: Spring Hill
* process: 10 nm
* processing element count: 10, 12, and 8