From WikiChip
Revision as of 01:24, 8 May 2019
DL Boost (deep learning boost) is a name used by Intel to encompass a number of x86 technologies designed to accelerate AI workloads, specifically inference.
Overview
DL Boost is a term used by Intel to describe a set of features on its microprocessors designed to accelerate AI workloads. The term was first introduced with Cascade Lake. DL Boost includes the following features:
- AVX-512 Vector Neural Network Instructions (AVX512VNNI), first introduced with Cascade Lake (server) and Ice Lake (client)
- Brain floating-point format (bfloat16), first introduced with Cooper Lake and Ice Lake (client)
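The two features above can be illustrated at the arithmetic level. The sketch below is a minimal software model, not Intel's implementation: one function mimics the per-lane behavior of the VNNI VPDPBUSD instruction (unsigned-8-bit times signed-8-bit products accumulated in groups of four into a 32-bit accumulator), and the other converts a float32 value to bfloat16 by truncating to the top 16 bits of the encoding (actual hardware typically uses round-to-nearest-even rather than truncation; the function names are illustrative).

```python
import struct

def vpdpbusd_lane(acc, a_u8, b_s8):
    # Model of one 32-bit lane of VPDPBUSD: four unsigned 8-bit
    # activations (0..255) times four signed 8-bit weights (-128..127),
    # products summed and added to the 32-bit accumulator.
    return acc + sum(a * b for a, b in zip(a_u8, b_s8))

def to_bfloat16(x):
    # bfloat16 keeps float32's sign and 8-bit exponent but only the
    # top 7 mantissa bits; truncating the low 16 bits of the float32
    # encoding models this (real rounding modes may differ).
    bits = struct.unpack('<I', struct.pack('<f', x))[0]
    return struct.unpack('<f', struct.pack('<I', bits & 0xFFFF0000))[0]
```

This shows why VNNI helps inference: the multiply and widening accumulate that previously took three AVX-512 instructions collapse into one, and bfloat16 preserves float32's dynamic range while halving storage.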