{{intel title|DL Boost}}

'''DL Boost''' ('''deep learning boost''') is a name used by [[Intel]] to describe a set of [[x86]] technologies designed for the [[acceleration]] of AI workloads, including both inference and training.
  
 
== Overview ==
'''DL Boost''' is a term used by [[Intel]] to describe a set of features on their microprocessors designed to accelerate AI workloads. The term was first introduced with {{intel|Cascade Lake|l=arch}} but has since been extended further with more capabilities in newer microarchitectures.
  
DL Boost includes the following features:
* {{x86|AVX512_VNNI|AVX-512 Vector Neural Network Instructions}} (AVX512_VNNI) - an instruction set extension that introduces reduced-precision (8-bit and 16-bit) multiply-accumulate instructions for the acceleration of inference. VNNI was first introduced with {{intel|Cascade Lake|l=arch}} (server) and {{intel|Ice Lake (Client)|Ice Lake|l=arch}} (client).
* {{x86|AVX512_BF16|AVX-512 BFloat16 Instructions}} (AVX512_BF16) - an instruction set extension for converting to [[bfloat16]] and then performing multiply-accumulate on such values for the acceleration of both inference and training. BF16 was first introduced with {{intel|Cooper Lake|l=arch}}.
  
 
Revision as of 23:36, 24 June 2020

== Implementations ==


{| class="wikitable"
! Microarchitecture !! AVX512_VNNI !! AVX512_BF16
|-
! colspan="3" | Client
|-
| {{intel|Ice Lake (Client)|Ice Lake|l=arch}} || Yes || No
|-
! colspan="3" | Server
|-
| {{intel|Cascade Lake|l=arch}} || Yes || No
|-
| {{intel|Cooper Lake|l=arch}} || Yes || Yes
|-
| {{intel|Ice Lake (Server)|Ice Lake|l=arch}} || Yes || No
|-
| {{intel|Sapphire Rapids|l=arch}} || Yes || Yes
|-
| {{intel|Granite Rapids|l=arch}} || TBD || TBD
|}

== See also ==