DL Boost (deep learning boost) is a marketing term used by Intel that encompasses a number of x86 technologies designed to accelerate AI workloads.

Overview

DL Boost is a term used by Intel to describe a set of features on their microprocessors designed to accelerate AI workloads. The term was first introduced with Cascade Lake. DL Boost includes the following features:

* AVX-512 Vector Neural Network Instructions (AVX512VNNI), first introduced with Cascade Lake (server) and Ice Lake (client); see the dot-product sketch below
* Brain floating-point format (bfloat16), first introduced with Cooper Lake; see the conversion sketch below
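
To make concrete what AVX512VNNI accelerates, below is a minimal C sketch (an illustration, not taken from this article) of an int8 dot-product kernel built on the vpdpbusd instruction, exposed through the _mm512_dpbusd_epi32 intrinsic in immintrin.h. The helper name dot_u8s8 is purely illustrative, and the sketch assumes a VNNI-capable CPU and a toolchain flag such as -march=cascadelake.

#include <immintrin.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Dot product of n bytes (n a multiple of 64): unsigned 8-bit activations
 * against signed 8-bit weights, accumulated in 32-bit lanes. On VNNI-capable
 * parts the inner step is a single vpdpbusd instead of the older
 * vpmaddubsw / vpmaddwd / vpaddd three-instruction sequence. */
static int32_t dot_u8s8(const uint8_t *a, const int8_t *w, size_t n)
{
    __m512i acc = _mm512_setzero_si512();
    for (size_t i = 0; i < n; i += 64) {
        __m512i va = _mm512_loadu_si512((const void *)(a + i));
        __m512i vw = _mm512_loadu_si512((const void *)(w + i));
        acc = _mm512_dpbusd_epi32(acc, va, vw);   /* vpdpbusd */
    }
    return _mm512_reduce_add_epi32(acc);          /* sum the 16 partial sums */
}

int main(void)
{
    uint8_t a[64]; int8_t w[64]; int32_t ref = 0;
    for (int i = 0; i < 64; i++) {
        a[i] = (uint8_t)i;
        w[i] = (int8_t)(i - 32);
        ref += (int32_t)a[i] * (int32_t)w[i];     /* scalar reference */
    }
    printf("vnni=%d scalar=%d\n", dot_u8s8(a, w, 64), ref);
    return 0;
}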
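
bfloat16 keeps the sign bit and the full 8-bit exponent of an IEEE-754 binary32 value and truncates the mantissa to 7 bits, so a software conversion is essentially a 16-bit shift of the float's bit pattern. The following standalone C sketch illustrates the format only (it is not Intel's hardware conversion path, the helper names are invented, and NaN handling is omitted), rounding to nearest-even on the way down.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

typedef uint16_t bf16;   /* 1 sign bit, 8 exponent bits, 7 mantissa bits */

/* float -> bfloat16 with round-to-nearest-even (NaN payloads not handled). */
static bf16 f32_to_bf16(float f)
{
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);
    uint32_t rounding = 0x7FFFu + ((bits >> 16) & 1u);
    return (bf16)((bits + rounding) >> 16);
}

/* bfloat16 -> float is exact: pad the discarded mantissa bits with zeros. */
static float bf16_to_f32(bf16 b)
{
    uint32_t bits = (uint32_t)b << 16;
    float f;
    memcpy(&f, &bits, sizeof f);
    return f;
}

int main(void)
{
    float x = 3.14159265f;
    bf16 b = f32_to_bf16(x);
    printf("f32 %.8f -> bf16 0x%04X -> f32 %.8f\n", x, (unsigned)b, bf16_to_f32(b));
    return 0;
}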