{{intel title|DL Boost}}
'''DL Boost''' ('''deep learning boost''') is a name used by [[Intel]] to encompass a number of [[x86]] technologies designed to [[accelerate]] AI workloads, specifically [[inference]].

== Overview ==
'''DL Boost''' is a term used by [[Intel]] to describe a set of features on their microprocessors designed to accelerate AI workloads. The term was first introduced with {{intel|Cascade Lake|l=arch}}. DL Boost includes the following features (a short intrinsics sketch follows the list):
* {{x86|AVX512_VNNI|AVX-512 Vector Neural Network Instructions}} (AVX512_VNNI), first introduced with {{intel|Cascade Lake|l=arch}} (server) and {{intel|Ice Lake (Client)|Ice Lake|l=arch}} (client)
* {{x86|AVX512_BF16|AVX-512 BFloat16 Instructions}} (AVX512_BF16), first introduced with {{intel|Cooper Lake|l=arch}}. This extension offers instructions for converting to [[bfloat16]] and then performing a multiply-accumulate on pairs of bfloat16 values.
== Implementations ==
{| class="wikitable"
|-
! Microarchitecture !! {{x86|AVX512_VNNI}} !! {{x86|AVX512_BF16}}
|-
! colspan="3" | Client
|-
| {{intel|Ice Lake (Client)|l=arch}} || {{tchk|yes}} || {{tchk|no}}
|-
! colspan="3" | Server
|-
| {{intel|Cascade Lake|l=arch}} || {{tchk|yes}} || {{tchk|no}}
|-
| {{intel|Cooper Lake|l=arch}} || {{tchk|yes}} || {{tchk|yes}}
|-
| {{intel|Ice Lake (Server)|l=arch}} || {{tchk|yes}} || {{tchk|no}}
|-
| {{intel|Sapphire Rapids|l=arch}} || {{tchk|yes}} || {{tchk|yes}}
|-
| {{intel|Granite Rapids|l=arch}} || {{tchk|some|TBD}} || {{tchk|some|TBD}}
|}