From WikiChip: intel/dl boost
Revision as of 22:26, 6 November 2018
DL Boost is a marketing term used by Intel that encompasses a number of x86 technologies designed to accelerate AI workloads.
Overview
DL Boost is a term used by Intel to describe a set of features on its microprocessors designed to accelerate AI workloads. The term was first introduced with Cascade Lake. DL Boost includes the following features:
- AVX-512 Vector Neural Network Instructions (AVX512VNNI), first introduced with Cascade Lake
- Brain floating-point format (bfloat16), first introduced with Cooper Lake
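The two features above can be illustrated with a short scalar sketch. This is not Intel's code or API: the function names are invented for illustration. The first function models what a single 32-bit lane of the AVX512VNNI instruction VPDPBUSD computes (four unsigned 8-bit by signed 8-bit products summed into a 32-bit accumulator, replacing a longer multi-instruction int8 dot-product sequence). The second shows the simplest (truncating) way to view a bfloat16 value: the sign, 8-bit exponent, and top 7 mantissa bits of an IEEE-754 single; real hardware conversion typically rounds to nearest even rather than truncating.

```c
#include <stdint.h>
#include <string.h>

/* Scalar model of one 32-bit lane of VPDPBUSD (AVX512VNNI):
   multiply four unsigned 8-bit values by four signed 8-bit values,
   sum the products, and accumulate into a signed 32-bit value.
   (Illustrative sketch; the real instruction does this for all
   sixteen lanes of a 512-bit register at once.) */
int32_t vpdpbusd_lane(int32_t acc, const uint8_t a[4], const int8_t b[4]) {
    for (int i = 0; i < 4; i++)
        acc += (int32_t)a[i] * (int32_t)b[i];
    return acc;
}

/* Truncation-based float32 -> bfloat16 conversion: bfloat16 keeps
   the sign bit, all 8 exponent bits, and the top 7 mantissa bits,
   so float32 and bfloat16 share the same dynamic range.
   (Sketch only; hardware converts with round-to-nearest-even.) */
uint16_t float_to_bf16(float f) {
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);
    return (uint16_t)(bits >> 16);
}
```

Because bfloat16 preserves the float32 exponent field, converting back is just placing the 16 bits in the upper half of a zeroed 32-bit word, which is why it is attractive for deep-learning training and inference.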