DL Boost
DL Boost (Deep Learning Boost) is a name used by Intel to encompass a number of x86 technologies designed to accelerate AI workloads, specifically inference.
Overview
DL Boost is a term used by Intel to describe a set of features on its microprocessors designed to accelerate AI workloads. The term was first introduced with Cascade Lake. DL Boost includes the following features:
- AVX-512 Vector Neural Network Instructions (AVX512VNNI), first introduced with Cascade Lake (server) and Ice Lake (client); a VNNI sketch follows this list
- Brain floating-point format (bfloat16), first introduced with Cooper Lake; a bfloat16 sketch follows this list
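
To make the VNNI item concrete, here is a minimal sketch in C of an int8 dot product built around the _mm512_dpbusd_epi32 intrinsic, the core AVX512VNNI operation. The array contents, sizes, and build flags are illustrative assumptions, not from the article; it assumes a CPU with AVX512VNNI (e.g., Cascade Lake) and a compiler such as GCC or Clang invoked with -march=cascadelake.

```c
/* Minimal sketch (assumed setup, not from the article): an int8 dot
 * product using the AVX512VNNI instruction VPDPBUSD. Build with e.g.
 * gcc -march=cascadelake vnni.c && ./a.out on VNNI-capable hardware. */
#include <immintrin.h>
#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint8_t a[64]; /* unsigned 8-bit activations (illustrative data) */
    int8_t  b[64]; /* signed 8-bit weights (illustrative data) */
    for (int i = 0; i < 64; i++) { a[i] = 1; b[i] = 2; }

    __m512i va  = _mm512_loadu_si512(a);
    __m512i vb  = _mm512_loadu_si512(b);
    __m512i acc = _mm512_setzero_si512();

    /* One instruction does 64 u8*s8 multiplies and sums them in groups
     * of 4 into 16 int32 accumulators, replacing the older three-step
     * vpmaddubsw + vpmaddwd + vpaddd sequence. */
    acc = _mm512_dpbusd_epi32(acc, va, vb);

    /* Horizontal sum of the 16 int32 lanes: 64 * (1*2) = 128. */
    printf("dot = %d\n", _mm512_reduce_add_epi32(acc));
    return 0;
}
```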
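Likewise, bfloat16 is the upper 16 bits of an IEEE-754 float32 (1 sign bit, 8 exponent bits, 7 mantissa bits), so it keeps float32's dynamic range while giving up mantissa precision. The conversion sketch below uses plain truncation for clarity (hardware typically rounds to nearest even); the helper names are illustrative, not part of any Intel API.

```c
/* Minimal sketch of the bfloat16 format: a float32 with the low 16
 * mantissa bits dropped. Truncation shown for simplicity; real
 * hardware usually applies round-to-nearest-even. */
#include <stdint.h>
#include <string.h>
#include <stdio.h>

static uint16_t f32_to_bf16(float f) {
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);
    return (uint16_t)(bits >> 16); /* keep sign, exponent, top 7 mantissa bits */
}

static float bf16_to_f32(uint16_t h) {
    uint32_t bits = (uint32_t)h << 16; /* low mantissa bits become zero */
    float f;
    memcpy(&f, &bits, sizeof f);
    return f;
}

int main(void) {
    float x = 3.14159265f;
    uint16_t h = f32_to_bf16(x);
    /* Round trip loses precision but not range: prints ~3.140625. */
    printf("%f -> 0x%04x -> %f\n", x, h, bf16_to_f32(h));
    return 0;
}
```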