From WikiChip
Difference between revisions of "intel/dl boost"
'''DL Boost''' is a term used by [[Intel]] to describe a set of features on their microprocessors designed to accelerate AI workloads. The term was first introduced with {{intel|Cascade Lake|l=arch}}. DL Boost includes the following features:

* {{x86|AVX512_VNNI|AVX-512 Vector Neural Network Instructions}} (AVX512_VNNI), first introduced with {{intel|Cascade Lake|l=arch}} (server) and {{intel|Ice Lake (Client)|Ice Lake|l=arch}} (client)
* {{x86|AVX512_BF16|AVX-512 BFloat16 Instructions}} (AVX512_BF16), first introduced with {{intel|Cooper Lake|l=arch}}. This extension offers instructions for converting to [[bfloat16]] and then performing multiply-accumulate on two bfloat16 values.
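To make the VNNI part of the list concrete, here is a scalar Python sketch of the documented per-lane behavior of VPDPBUSD, the core AVX512_VNNI instruction. This is a model, not real vector code: the instruction fuses into one operation the unsigned-byte-by-signed-byte multiply-accumulate that previously took a three-instruction sequence (VPMADDUBSW, VPMADDWD, VPADDD).

```python
def vpdpbusd_lane(acc: int, a_bytes: list, b_bytes: list) -> int:
    """Model one 32-bit lane of VPDPBUSD: acc += sum of four u8*s8 products.

    The a operand is zero-extended (unsigned), the b operand sign-extended
    (signed). 32-bit overflow handling is omitted for clarity; the actual
    instruction wraps, and the VPDPBUSDS variant saturates instead.
    """
    assert len(a_bytes) == len(b_bytes) == 4
    for u8, s8 in zip(a_bytes, b_bytes):
        assert 0 <= u8 <= 255      # unsigned 8-bit source
        assert -128 <= s8 <= 127   # signed 8-bit source
        acc += u8 * s8
    return acc

# Example: one lane of an int8 quantized dot product.
acc = vpdpbusd_lane(0, [1, 2, 3, 4], [10, -20, 30, -40])
# 1*10 + 2*(-20) + 3*30 + 4*(-40) = -100
```

A 512-bit VPDPBUSD performs this for sixteen 32-bit lanes at once, which is why it suits int8 inference kernels.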
  
== Implementations ==
{| class="wikitable"
|-
! Microarchitecture !! {{x86|AVX512_VNNI}} !! {{x86|AVX512_BF16}}
|-
! colspan="3" | Client
|-
| {{intel|Ice Lake (Client)|l=arch}} || {{tchk|yes}} || {{tchk|no}}
|-
! colspan="3" | Server
|-
| {{intel|Cascade Lake|l=arch}} || {{tchk|yes}} || {{tchk|no}}
|-
| {{intel|Cooper Lake|l=arch}} || {{tchk|yes}} || {{tchk|yes}}
|-
| {{intel|Ice Lake (Server)|l=arch}} || {{tchk|yes}} || {{tchk|no}}
|-
| {{intel|Sapphire Rapids|l=arch}} || {{tchk|yes}} || {{tchk|yes}}
|-
| {{intel|Granite Rapids|l=arch}} || {{tchk|some|TBD}} || {{tchk|some|TBD}}
|}
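The table above can be checked against a live machine. A hedged, Linux-specific sketch (not an Intel tool; it relies on the kernel exposing the extensions as <code>avx512_vnni</code> and <code>avx512_bf16</code> in the <code>flags</code> line of <code>/proc/cpuinfo</code>):

```python
def dlboost_features(flags_line: str) -> dict:
    """Map each DL Boost extension to whether it appears in a cpuinfo flags line."""
    present = set(flags_line.split())
    return {
        "AVX512_VNNI": "avx512_vnni" in present,
        "AVX512_BF16": "avx512_bf16" in present,
    }

def read_cpu_flags(path="/proc/cpuinfo"):
    """Return the first 'flags' line from /proc/cpuinfo (empty string if absent)."""
    with open(path) as f:
        for line in f:
            if line.startswith("flags"):
                return line.split(":", 1)[1]
    return ""

# Synthetic flags line standing in for a Cascade Lake part (VNNI, no BF16):
demo = "fpu sse2 avx2 avx512f avx512_vnni"
print(dlboost_features(demo))  # {'AVX512_VNNI': True, 'AVX512_BF16': False}
```

On a real system, pass <code>read_cpu_flags()</code> instead of the synthetic line.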
== See also ==
* [[bfloat16]]
* [[Neural processors]]
* [[Acceleration]]
  
 
[[category:intel]]

Revision as of 23:42, 21 June 2020

'''DL Boost''' (deep learning boost) is a name used by [[Intel]] to encompass a number of x86 technologies designed to accelerate AI workloads, inference in particular.
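The bfloat16 side of DL Boost can also be sketched in scalar Python. This is a hedged model, not Intel's implementation: the conversion below simply truncates a float32 to its top 16 bits, whereas the actual VCVTNE2PS2BF16 instruction rounds to nearest-even, and <code>dpbf16ps_lane</code> models one float32 lane of VDPBF16PS, which accumulates products of bfloat16 pairs into float32.

```python
import struct

def float_to_bf16_bits(x: float) -> int:
    """Truncate an IEEE-754 float32 to its top 16 bits: bfloat16 keeps the
    sign and full 8-bit exponent of float32 but only 7 mantissa bits."""
    bits32 = struct.unpack("<I", struct.pack("<f", x))[0]
    return bits32 >> 16

def bf16_bits_to_float(bits16: int) -> float:
    """Widen bfloat16 bits back to float32 by appending 16 zero bits."""
    return struct.unpack("<f", struct.pack("<I", bits16 << 16))[0]

def dpbf16ps_lane(acc: float, a: list, b: list) -> float:
    """Model one float32 lane of VDPBF16PS: acc += a0*b0 + a1*b1, with the
    inputs first reduced to bfloat16 precision."""
    to_bf16 = lambda v: bf16_bits_to_float(float_to_bf16_bits(v))
    return acc + sum(to_bf16(x) * to_bf16(y) for x, y in zip(a, b))

print(bf16_bits_to_float(float_to_bf16_bits(3.140625)))  # -> 3.140625, exact in bfloat16
```

Because bfloat16 keeps float32's exponent range, the conversion mostly costs mantissa precision, which is why it works well for inference while staying cheap to convert.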
