{{intel title|Skylake|arch}}
{{microarchitecture
| atype = CPU
| name = Skylake
| designer = Intel
| manufacturer = Intel
| introduction = August 5, 2015
| phase-out = 
| process = 14 nm
| cores = 2
| cores 2 = 4
| cores 3 = 6
| cores 4 = 8
| cores 5 = 10

| pipeline = Yes
| type = Superscalar
| OoOE = Yes
| speculative = Yes
| renaming = Yes
| isa = IA-32
| isa 2 = x86-64
| stages min = 14
| stages max = 19
| issues = 5

| inst = Yes
| feature = 
| extension = MOVBE
| extension 2 = MMX
| extension 3 = SSE
| extension 4 = SSE2
| extension 5 = SSE3
| extension 6 = SSSE3
| extension 7 = SSE4.1
| extension 8 = SSE4.2
| extension 9 = POPCNT
| extension 10 = AVX
| extension 11 = AVX2
| extension 12 = AES
| extension 13 = PCLMUL
| extension 14 = FSGSBASE
| extension 15 = RDRND
| extension 16 = FMA3
| extension 17 = F16C
| extension 18 = BMI
| extension 19 = BMI2
| extension 20 = VT-x
| extension 21 = VT-d
| extension 22 = TXT
| extension 23 = TSX
| extension 24 = RDSEED
| extension 25 = ADCX
| extension 26 = PREFETCHW
| extension 27 = CLFLUSHOPT
| extension 28 = XSAVE
| extension 29 = SGX
| extension 30 = MPX
| extension 31 = AVX-512

| cache = Yes
| l1i = 32 KiB
| l1i per = core
| l1i desc = 8-way set associative
| l1d = 32 KiB
| l1d per = core
| l1d desc = 8-way set associative
| l2 = 256 KiB
| l2 per = core
| l2 desc = 4-way set associative
| l3 = 2 MiB
| l3 per = core
| l3 desc = Up to 16-way set associative
| l4 = 128 MiB
| l4 per = package
| l4 desc = on Iris Pro GPUs only

| core names = Yes
| core name = Skylake Y
| core name 2 = Skylake U
| core name 3 = Skylake H
| core name 4 = Skylake S
| core name 5 = Skylake X
| core name 6 = Skylake W

| succession = Yes
| predecessor = Broadwell
| predecessor link = intel/microarchitectures/broadwell
| successor = Kaby Lake
| successor link = intel/microarchitectures/kaby lake
}}
'''Skylake''' ('''SKL''') is [[Intel]]'s successor to {{\\|Broadwell}}, a [[14 nm process]] [[microarchitecture]] for mainstream desktops, servers, and mobile devices. Skylake succeeded the short-lived {{\\|Broadwell}}, which experienced severe delays. Skylake is the "Architecture" phase of Intel's {{intel|PAO}} model. The microarchitecture was developed by Intel's R&D center in [[wikipedia:Haifa, Israel|Haifa, Israel]].
For desktop and mobile, Skylake is branded as 6th Generation Intel {{intel|Core i3}}, {{intel|Core i5}}, and {{intel|Core i7}} processors. For workstations it is branded as {{intel|Xeon E3|Xeon E3 v5}}. For scalable server-class processors, Intel branded it as {{intel|Xeon Bronze}}, {{intel|Xeon Silver}}, {{intel|Xeon Gold}}, and {{intel|Xeon Platinum}}.
== Codenames ==
{| class="wikitable"
|-
! Core !! Abbrev !! Target
|-
| Skylake Y || SKL-Y || 2-in-1s detachable, tablets, and computer sticks
|-
| Skylake U || SKL-U || Light notebooks, portable All-in-Ones (AiOs), Minis, and conference rooms
|-
| Skylake H || SKL-H || Ultimate mobile performance, mobile workstations
|-
| Skylake S || SKL-S || Desktop performance to value, AiOs, and minis
|-
| Skylake X || SKL-X || High-end desktops & enthusiasts market
|-
| Skylake W || SKL-W || Workstations
|}
== Process Technology ==
{{main|intel/microarchitectures/broadwell#Process_Technology|l1=Broadwell § Process Technology}}
Skylake uses the same [[14 nm process]] used for the Broadwell microarchitecture.
== Compatibility ==
{| class="wikitable"
! Vendor !! OS !! Version !! Notes
|-
| rowspan="4" | Microsoft || rowspan="4" | Windows || style="background-color: #ffdad6;" | Windows Vista || No Support
|-
| style="background-color: #d6ffd8;" | Windows 7 || rowspan="2" | Support ends July 2017
|-
| style="background-color: #d6ffd8;" | Windows 8.1
|-
| [[Visual Studio]] || <code>/arch:AVX2</code> || <code>/tune:skylake</code>
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
|}
=== Key changes from {{\\|Broadwell}} ===
[[File:skylake buff window.png|right|500px]]
* 8x performance/watt over {{\\|Nehalem}} (Up from 3.5x in {{\\|Haswell}})
* Mainstream chipset
*** {{intel|Direct Media Interface|DMI 3.0}} (from 2.0)
**** Skylake S and Skylake H cores, connected by 4-lane DMI 3.0
**** Skylake Y and Skylake U cores have the chipset in the same package (simplified {{intel|on Package I/O|OPIO}})
**** Increase in transfer rate from 5.0 GT/s to 8.0 GT/s (~3.93 GB/s, up from 2 GB/s) per lane
**** Limits motherboard trace design to 7 inches max (down from 8) from the CPU to the chipset
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
* [[System Agent]]
** New Image Processing Unit (IPU)
*** Incorporates an [[image signal processor]] (ISP)
*** Mobile client models only
− | |||
* Core
** Front End
*** Larger legacy pipeline delivery (5 µOPs, up from 4)
**** Another simple decoder has been added.
*** Allocation Queue (IDQ)
**** Larger delivery (6 µOPs, up from 4)
**** 2.28x larger buffer (64/thread, up from 56)
*** Larger [[re-order buffer]] (224 entries, up from 192)
*** Larger scheduler (97 entries, up from 64)
**** Larger Integer Register File (180 entries, up from 160)
**** Larger Retire (''WikiChip Speculation''; undisclosed by Intel)
** Memory Subsystem
*** Larger store buffer (56 entries, up from 42)
*** [[L2$]] was changed from 8-way to 4-way set associative
*** Page split load penalty reduced 20-fold
− | |||
* Memory
* Testability
** New support for {{intel|Direct Connect Interface}} (DCI), a new debugging transport protocol designed to allow debugging of closed cases (e.g. laptops, embedded) by accessing things such as [[JTAG]] through any [[USB 3]] port.
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
==== CPU changes ====
* Most ALU operations have 4 op/cycle throughput for 8 and 32-bit registers. 64-bit ops are still limited to 3 op/cycle. (16-bit throughput varies per op; it can be 4, 3.5, or 2 op/cycle.)
* MOVSX and MOVZX have 4 op/cycle throughput for 16->32 and 32->64 forms, in addition to Haswell's 8->32, 8->64 and 16->64 bit forms.
* ADC and SBB have throughput of 1 op/cycle, same as Haswell.
* Vector moves have throughput of 4 op/cycle (move elimination).
* Not only zeroing vector vpXORxx and vpSUBxx ops, but also vPCMPxxx on the same register, have throughput of 4 op/cycle.
* Vector ALU ops are often "standardized" to a latency of 4; for example, vADDPS and vMULPS used to have latencies of 3 and 5, now both are 4.
* Fused multiply-add ops have latency of 4 and throughput of 0.5 op/cycle.
* Throughput of vADDps, vSUBps, vCMPps, vMAXps, their scalar and double analogs is increased to 2 op/cycle.
* Throughput of vPSLxx and vPSRxx with immediate (i.e. fixed vector shifts) is increased to 2 op/cycle.
* Throughput of vANDps, vANDNps, vORps, vXORps, their scalar and double analogs, vPADDx, vPSUBx is increased to 3 op/cycle.
* vDIVPD, vSQRTPD have approximately twice as good throughput: from 8 to 4 and from 28 to 12 cycles/op.
* Throughput of some MMX ALU ops (such as PAND mm1, mm2) is decreased to 2 or 1 op/cycle (users are expected to use wider SSE/AVX registers instead).
+ | |||
+ | ===== New GPU Features & Changes ===== | ||
+ | * Adaptive scalable texture compression (ASTC) | ||
+ | * 16x multi-sample anti-aliasing (MSAA) | ||
+ | * Post depth test coverage mask | ||
+ | * Floating point atomics (min/max/cmpexch) | ||
+ | * Min/max texture filtering | ||
+ | * Multi-plane overlays | ||
+ | |||
+ | ==== Graphics ==== | ||
+ | * Improved underlying implementation of the memory QoS for higher resolution displays and the integrated [[image signal processor]] (ISP) | ||
+ | ** Allow for higher concurrent bandwidth | ||
+ | * Skylake retires VGA support, multi-monitor support for up to 3 displays via HDMI 1.4, DP 1.2, and eDP 1.3 interfaces. | ||
+ | * Direct X 12 | ||
+ | * OpenCL 2.0 | ||
+ | * OpenGL 4.4 | ||
+ | * Up to 24 EUs GT2 (same as {{\\|Haswell}}); 48 EUs for GT3, and up to 72 EUs on {{intel|Iris Pro Graphics}} | ||
+ | ** 1,152 GFLOPS | ||
+ | |||
+ | :{| class="wikitable" | ||
+ | |- | ||
+ | ! [[integrated graphics processor|IGP]] !! Execution Units !! GT !! eDRAM !! Series (Y/U/H/S) | ||
+ | |- | ||
+ | | {{intel|HD Graphics}} || 12 || 2+1 || - || Y | ||
+ | |- | ||
+ | | {{intel|HD Graphics 510}} || 12 || 2+2 || - || U/S | ||
+ | |- | ||
+ | | {{intel|HD Graphics 515}} || 24 || 2+2 || - || Y | ||
+ | |- | ||
+ | | {{intel|HD Graphics 520}} || 24 || 4+2<br>2+2 || - || U | ||
+ | |- | ||
+ | | {{intel|HD Graphics 530}} || 24 || 4+2<br>2+2 || - || H/S | ||
+ | |- | ||
+ | | {{intel|HD Graphics P530}} || 24 || 4+2 || - || H | ||
+ | |- | ||
+ | | {{intel|Iris Graphics 540}} || 48 || 2+3e || 64 MiB || U | ||
+ | |- | ||
+ | | {{intel|Iris Graphics 550}} || 48 || 2+3e || 64 MiB || U | ||
+ | |- | ||
+ | | {{intel|Iris Pro Graphics 580}} || 72 || 4+4e || 128 MiB || H | ||
+ | |} | ||
==== New instructions ====
{{main|#Added instructions|l1=See §Added instructions for the complete list}}
Skylake introduced a number of new instructions:
* {{x86|SGX|<code>SGX</code>}} - Software Guard Extensions
* {{x86|MPX|<code>MPX</code>}} - Memory Protection Extensions
* {{x86|AVX-512|<code>AVX-512</code>}} - Advanced Vector Extensions 512 (only on high-end {{intel|Xeon}} models (SKX))
− | |||
− | |||
=== Block Diagram ===
==== Client SoC ====
====== Entire SoC Overview (dual) ======
[[File:skylake soc block diagram (dual).svg|900px]]

====== Entire SoC Overview (quad) ======
[[File:skylake soc block diagram.svg|900px]]

====== Individual Core ======
[[File:skylake block diagram.svg]]

====== Gen9 ======
See {{intel|Gen9#Gen9|l=arch}}.

==== Server MPUs ====
{{future information}}

Intel has not disclosed the details of the Skylake server configuration.
=== Memory Hierarchy ===
Other than a few organizational changes (e.g. L2$ went from 8-way to 4-way set associative), the overall memory structure is identical to {{\\|Broadwell}}/{{\\|Haswell}}.
− | |||
− | |||
* Cache
** L0 µOP cache:
** L1I Cache:
*** 32 [[KiB]], 8-way set associative
**** 64 B line size
**** shared by the two threads, per core
** L1D Cache:
*** 32 KiB, 8-way set associative
*** 64 B line size
*** shared by the two threads, per core
*** 4 cycles for fastest load-to-use (simple pointer accesses)
*** Write-back policy
** L2 Cache:
*** unified, 256 KiB, 4-way set associative
*** 64 B line size
*** 12 cycles for fastest load-to-use
*** 64 B/cycle bandwidth to L1$
**** fixed partition
*** 1G page translations:
**** 4 entries; fully associative
**** fixed partition
** STLB
**** 16 entries; 4-way set associative
**** fixed partition
− | |||
− | |||
− | |||
− | |||
== Overview ==
The Skylake [[system on a chip]] consists of five major components: CPU core, [[last level cache|LLC]], ring interconnect, system agent, and the [[integrated graphics]]. The image shown on the right, presented by Intel at the Intel Developer Forum in 2015, represents a hypothetical model incorporating all available features Skylake has to offer (i.e. a [[superset]] of features). Skylake features an improved core (see [[#Pipeline|§ Pipeline]]) with higher performance per watt and higher performance per clock. The number of cores depends on the model, but mainstream mobile models are typically [[dual-core]] while mainstream desktop models are typically [[quad-core]], with dual-core desktop models still offered for value models (e.g. {{intel|Celeron}}).

Accompanying the cores is the LLC ([[last level cache]], or [[L3$]] as seen from the CPU perspective). On mainstream parts the LLC consists of 2 MiB for each core, with lower amounts for value models. Connecting the cores together is the ring interconnect. The ring extends to the GPU and the system agent as well. Intel further optimized the ring in Skylake for low power and higher bandwidth.

The cores are also accompanied by the {{\\|Gen9}} [[integrated graphics]] unit, which comes in a number of different tiers ranging from just 12 execution units (used in the ultra-low-power models) all the way to GT4 ({{\\|gen9#Scalability|Gen9 § Pipeline}}) with 72 execution units, boasting a peak performance of up to 2,534.4 GFLOPS (HF) / 1,267.2 GFLOPS (SP) on the highest-end workstation model. The two highest-tier models are also accompanied by dedicated [[eDRAM]] ranging from 64 MiB to 128 MiB in capacity. The eDRAM is packaged along with the SoC in the same package.
On the other side is the {{intel|System Agent}} (SA) which houses the various functionality that's not directly related to the cores or graphics. Skylake features an upgraded [[integrated memory controller]] (IMC) with most mainstream models supporting faster memory and dual-channel [[DDR4]]. The SA in Skylake also includes the [[Display Controller]] which now supports higher resolution displays with up to three displays for all mainstream models.
Intel has been experiencing a growing divergence in functionality over the last number of iterations of [[intel/microarchitectures|their microarchitecture]] between their mainstream consumer products and their high-end HPC/server models. Traditionally, Intel has been using the same exact core design for everything from their lowest end value models (e.g. {{intel|Celeron}}) all the way up to the highest-performance enterprise models (e.g. {{intel|Xeon E7}}). While the two have fundamentally different chip architectures, they use the same exact CPU core architecture as the building block.
This design philosophy has changed with Skylake. In order to better accommodate the different functionalities of each segment without sacrificing features or making unnecessary compromises, Intel went with a configurable core. The Skylake core is a single development project, making up a master superset core. The project resulted in two derivatives: one for servers and one for clients. All mainstream models (from {{intel|Celeron}}/{{intel|Pentium (2009)|Pentium}} all the way up to {{intel|Core i7}}/{{intel|Xeon E3}}) use the client core configuration. Server models (e.g. {{intel|Xeon E5}}/{{intel|Xeon E7}}) will be using the new server configuration.
The exact server core details have not been disclosed yet; however, it's expected to feature [[Advanced Vector Extensions 512]] (AVX-512).
=== Pipeline ===
Some µOPs deal with memory access (e.g. [[instruction load|load]] & [[instruction store|store]]). Those will be sent on dedicated scheduler ports that can perform those memory operations. Store operations go to the store buffer which is also capable of performing forwarding when needed. Likewise, Load operations come from the load buffer. Skylake features a dedicated 32 KiB level 1 data cache and a dedicated 32 KiB level 1 instruction cache. It also features a core-private 256 KiB L2 cache that is shared by both of the L1 caches.
Each core enjoys a slice of a third level of cache that is shared by all the cores. In the client configuration for Skylake, there are either [[two cores]] or [[four cores]] connected, while in the server configuration, up to [[28 cores]] may be hooked together on a single chip.
{{clear}}
==== Front-end ====
The front-end is tasked with the challenge of fetching the complex [[x86]] instructions from memory, decoding them, and delivering them to the execution units. In other words, the front end needs to be able to consistently deliver enough [[µOPs]] from the instruction code stream to keep the back-end busy. When the back-end is not being fully utilized, the core is not reaching its full performance. A poorly or under-performing front-end will translate directly to a poorly performing core. This challenge is further complicated by various redirection such as branches and the complex nature of the [[x86]] instructions themselves.
===== Fetch & pre-decoding =====
On their first pass, instructions should have already been prefetched from the [[L2 cache]] and into the [[L1 cache]]. The L1 is a 32 [[KiB]], 8-way set associative cache, identical in size and organization to {{intel|microarchitectures|previous generations}}. Skylake fetching is done on a 16-byte fetch window, a window size that has not changed in a number of generations. Up to 16 bytes of code can be fetched each cycle. At this point they are still [[macro-ops]] (i.e. variable-length [[x86]] architectural instructions). Instructions are brought into the pre-decode buffer for initial preparation.
[[File:skylake fetch.svg|left|300px]]
[[x86]] instructions are complex, variable length, have inconsistent encoding, and may contain multiple operations. At the pre-decode buffer the instruction boundaries get detected and marked. This is a fairly difficult task because each instruction can vary from a single byte all the way up to fifteen. Moreover, determining the length requires inspecting a couple of bytes of the instruction. In addition to boundary marking, prefixes are also decoded and checked for various properties such as branches. As with previous microarchitectures, the pre-decoder has a [[throughput]] of 6 [[macro-ops]] per cycle or until all 16 bytes are consumed, whichever happens first. Note that the pre-decoder will not load a new 16-byte block until the previous block has been fully exhausted. For example, suppose a new chunk was loaded, resulting in 7 instructions. In the first cycle, 6 instructions will be processed and a whole second cycle will be wasted for that last instruction. This will produce the much lower throughput of 3.5 instructions per cycle which is considerably less than optimal. Likewise, if the 16-byte block resulted in just 4 instructions with 1 byte of the 5th instruction received, the first 4 instructions will be processed in the first cycle and a second cycle will be required for the last instruction. This will produce an average throughput of 2.5 instructions per cycle. Note that there is a special case for {{x86|length-changing prefix|length-changing prefixes}} (LCPs) which will incur additional pre-decoding costs. Real code often averages less than 4 bytes per instruction, which usually results in a good rate.
All of this works along with the branch prediction unit which attempts to guess the flow of instructions. In Skylake, the [[branch predictor]] has also been improved. The branch predictor now has reduced penalty (i.e. lower latency) for wrong direct jump target prediction. Additionally, the predictor in Skylake can inspect further in the byte stream than in previous architectures. The intimate improvements done in the branch predictor were not further disclosed by Intel. | All of this works along with the branch prediction unit which attempts to guess the flow of instructions. In Skylake, the [[branch predictor]] has also been improved. The branch predictor now has reduced penalty (i.e. lower latency) for wrong direct jump target prediction. Additionally, the predictor in Skylake can inspect further in the byte stream than in previous architectures. The intimate improvements done in the branch predictor were not further disclosed by Intel. | ||
| <pre>cmpjne eax, [mem], loop</pre>
|}
{{see also|Macro-Operation Fusion}}
The pre-decoded instructions are delivered to the Instruction Queue (IQ). In {{\\|Broadwell}}, the Instruction Queue was increased to 25 entries duplicated for each thread (i.e., 50 total entries). It's unclear if that has changed with Skylake. One key optimization the instruction queue performs is [[macro-op fusion]]. Skylake can fuse two [[macro-ops]] into a single complex one in a number of cases. When a {{x86|test}} or {{x86|compare}} instruction with a subsequent conditional jump is detected, it is converted into a single compare-and-branch instruction. Those fused instructions remain fused throughout the entire pipeline and get executed as a single operation by the branch unit, thereby saving bandwidth everywhere. Only one such fusion can be performed each cycle.
===== Decoding =====
[[File:skylake decode.svg|right|425px]]
Up to five pre-decoded instructions are sent to the decoders each cycle. Decoders read in [[macro-operations]] and emit regular, fixed-length [[µOPs]]. Skylake represents a significant change from the last couple of microarchitectures: its pipeline is wider than its predecessors'; Skylake adds another [[simple decoder]]. The five decoders are asymmetric; the first one, Decoder 0, is a [[complex decoder]] while the other four are [[simple decoders]]. A simple decoder is capable of translating instructions that emit a single fused-[[µOP]]. By contrast, a [[complex decoder]] can decode anywhere from one to four fused-µOPs. Skylake is now capable of decoding 5 macro-ops per cycle, 25% more than {{\\|Broadwell}}; however, this does not translate directly into an IPC uplift due to various other, more restrictive points in the pipeline. Intel chose not to increase the number of complex decoders because it's much harder to extract additional parallelism from the µOPs emitted by a complex instruction. Overall, up to 5 simple instructions can be decoded each cycle, with lesser amounts if the complex decoder needs to emit additional µOPs; i.e., for each additional µOP the complex decoder emits, one less simple decoder can operate.
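The decode-slot rule stated above can be summarized with a toy model (a sketch of the described behavior, not the actual hardware):

```python
def decoders_active(complex_uops):
    """Model of Skylake's 1 complex + 4 simple decoder arrangement: each
    uop the complex decoder (Decoder 0) emits beyond the first disables
    one simple decoder. Returns (instructions decoded, fused uops emitted)."""
    assert 1 <= complex_uops <= 4, "5+ uop instructions detour through the MS-ROM"
    simple = 4 - (complex_uops - 1)   # remaining usable simple decoders
    return 1 + simple, complex_uops + simple
```

Note that in this model the fused-µOP output is 5 per cycle regardless of the mix; only the number of instructions consumed per cycle drops as the complex decoder emits more µOPs.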
====== MSROM & Stack Engine ======
There are more complex instructions that are not trivial to decode even by the complex decoder. For instructions that transform into more than four µOPs, the instruction detours through the [[microcode sequencer]] (MS) ROM. When that happens, up to 4 µOPs/cycle are emitted until the microcode sequencer is done. During that time, the decoders are disabled.
[[x86]] has dedicated [[stack machine]] operations. Instructions such as <code>{{x86|PUSH}}</code>, <code>{{x86|POP}}</code>, as well as <code>{{x86|CALL}}</code>, and <code>{{x86|RET}}</code> all operate on the [[stack pointer]] (<code>{{x86|ESP}}</code>). Without any specialized hardware, such operations would need to be sent to the back-end for execution using the general-purpose ALUs, using up some of the bandwidth and utilizing scheduler and execution unit resources. Since {{\\|Pentium M}}, Intel has been making use of a [[Stack Engine]].

The Stack Engine has a set of three dedicated adders it uses to perform and eliminate the stack-updating µOPs (i.e., it is capable of handling three additions per cycle). Instructions such as <code>{{x86|PUSH}}</code> are translated into a store and a subtraction of 4 from <code>{{x86|ESP}}</code>. The subtraction in this case will be done by the Stack Engine. The Stack Engine sits after the [[instruction decode|decoders]] and monitors the µOPs stream as it passes by. Incoming stack-modifying operations are caught by the Stack Engine. This relieves the pipeline of stack pointer-modifying µOPs. In other words, it's cheaper and faster to calculate stack pointer targets at the Stack Engine than it is to send those operations down the pipeline to be done by the execution units (i.e., the general-purpose ALUs).
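The effect can be illustrated with a toy model (a sketch only; the real Stack Engine also inserts a synchronization µOP when the true ESP value is needed, which is omitted here, and PUSH additionally emits a store µOP that this model does not count):

```python
def stack_engine(uop_stream):
    """Accumulate PUSH/POP stack-pointer deltas with a dedicated adder
    instead of dispatching them to the general-purpose ALUs.
    Returns (net ESP delta, uops still sent down the pipeline)."""
    delta, alu_uops = 0, 0
    for op in uop_stream:
        if op == "push":
            delta -= 4        # 32-bit mode: ESP -= 4 handled locally
        elif op == "pop":
            delta += 4        # ESP += 4 handled locally
        else:
            alu_uops += 1     # everything else goes to the back-end
    return delta, alu_uops
```

For a stream of two pushes, an ALU op, and a pop, only the ALU op reaches the execution units; the stack adjustments collapse into a single tracked offset.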
===== µOP cache & x86 tax =====
[[File:skylake ucache.svg|right|400px]]
Decoding the variable-length, inconsistent, and complex [[x86]] instructions is a nontrivial task. It's also expensive in terms of performance and power. Therefore, the best way for the pipeline to avoid those costs is to simply not decode the instructions. This is the job of the [[µOP cache]] or the Decoded Stream Buffer (DSB). Skylake's µOP cache is organized similarly to previous generations such as {{\\|Sandy Bridge}}; however, both the bandwidth and the tracking window were increased. The cache is organized into 32 sets of 8 cache lines, with each line holding up to 6 µOPs, for a total of 1,536 µOPs. Whereas previously (e.g., {{\\|Haswell}}) the µOP cache operated on 32-byte windows, in Skylake the window size has been doubled to 64 bytes. The micro-operation cache is competitively shared between the two threads and can also hold pointers to the microcode. The µOP cache has an average hit rate of 80%.
A hit in the µOP cache allows up to 6 µOPs (i.e., an entire line) per cycle to be sent directly to the Instruction Decode Queue (IDQ), bypassing all the pre-decoding and decoding that would otherwise have to be done. Whereas the legacy decode path works in 16-byte instruction fetch windows, the µOP cache has no such restriction and can deliver 6 µOPs/cycle corresponding to the much bigger 64-byte window. Previously (e.g., {{\\|Broadwell}}), the bandwidth was lower at 4 µOPs per cycle. The 1.5x bandwidth increase greatly improves the number of µOPs that the back-end can take advantage of in the [[out-of-order]] part of the machine.
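The geometry and bandwidth figures quoted above can be sanity-checked with a few lines of arithmetic (purely illustrative; the constants mirror the numbers in the text):

```python
# Geometry of Skylake's uop cache (DSB) as described above
SETS, WAYS, UOPS_PER_LINE = 32, 8, 6
capacity = SETS * WAYS * UOPS_PER_LINE      # total uops the DSB can hold
dsb_rate, legacy_rate = UOPS_PER_LINE, 4    # uops/cycle: DSB hit vs Broadwell
speedup = dsb_rate / legacy_rate            # the quoted 1.5x delivery increase
```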
===== Allocation Queue =====

====== µOP-Fusion & LSD ======
The IDQ performs a number of additional optimizations as it queues instructions. The Loop Stream Detector (LSD) is a mechanism inside the IDQ capable of detecting loops that fit in the IDQ and locking them down. That is, the LSD can stream the same sequence of µOPs directly from the IDQ continuously without any additional [[instruction fetch|fetching]], [[instruction decode|decoding]], or utilizing additional caches or resources. Streaming continues indefinitely until reaching a branch [[mis-prediction]].

The LSD in Skylake can take advantage of the considerably larger IDQ, being capable of detecting loops up to 64 µOPs per thread. The LSD is particularly effective for common constructs found in many programs (e.g., tight loops, calculation-intensive loops, searches, etc.).
− | |||
− | |||
==== Execution engine ==== | ==== Execution engine ==== | ||
[[File:skylake rob.svg|right|450px]] | [[File:skylake rob.svg|right|450px]] | ||
Skylake's back-end or execution engine deals with the execution of [[out-of-order]] operations. Much of the design is inherited from previous architectures such as {{\\|Haswell}} but has been widened to explore more [[instruction-level parallelism]] opportunities. From the allocation queue, instructions are sent to the [[Reorder Buffer]] (ROB) at a rate of 6 µOPs each cycle. Skylake's throughput is up by 2 µOPs per cycle from {{\\|Broadwell}} in order to accommodate the wider front-end.
===== Renaming & Allocation =====
Like the front-end, the [[Reorder Buffer]] has been increased to 224 entries, 32 entries more than {{\\|Broadwell}}. It is at this stage that [[architectural registers]] are mapped onto the underlying [[physical registers]]. Other additional bookkeeping tasks are also done at this point, such as allocating resources for stores and loads and determining all possible scheduler ports. Register renaming is controlled by the [[Register Alias Table]] (RAT), which marks where the data an instruction depends on is coming from (that value, too, may have come from an instruction that was previously renamed). In {{intel|microarchitectures|previous microarchitectures}}, the RAT could handle 4 µOPs each cycle. Intel has not disclosed whether that has changed in Skylake, but it's possible. If it has not changed, Skylake can rename any four registers per cycle. This includes the same register renamed four times in a single cycle. If the rename rate has not increased in Skylake, some aspects of the improvements made in the prefetch/decode stages are effectively lost. Note that the ROB still operates on fused µOPs, therefore 4 µOPs can effectively be as high as 8 µOPs.
It should be noted that there are no special costs involved in splitting up fused µOPs before execution or [[retirement]], and the two fused µOPs only occupy a single entry in the ROB.

===== Optimizations =====
Skylake has a number of optimizations it performs prior to entering the out-of-order and renaming part. Three of those optimizations are [[Move Elimination]], [[Zeroing Idioms]], and [[Ones Idioms]]. Move Elimination is capable of eliminating register-to-register moves (including chained moves) prior to bookkeeping at the ROB, removing those µOPs entirely and saving resources. Eliminated moves are zero latency and are entirely removed from the pipeline. This optimization does not always succeed; when it fails, the operands were simply not ready. On average, this optimization is almost always successful (upward of 85% in most cases). Move elimination works on all 32- and 64-bit GP integer registers as well as all 128- and 256-bit vector registers.
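Conceptually, move elimination amounts to pointer bookkeeping in the RAT rather than execution. A minimal sketch (hypothetical register names; the failure cases and exceptions are ignored here):

```python
def rename_moves(moves):
    """Sketch of move elimination at rename: 'mov dst, src' simply copies
    the RAT entry for src into dst, so the move never executes and chained
    moves collapse to the original physical source."""
    rat = {}
    for dst, src in moves:
        rat[dst] = rat.get(src, src)   # point dst at src's current value
    return rat

# mov ebx, eax ; mov ecx, ebx  ->  both end up reading eax's physical register
```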
{| style="border: 1px solid gray; float: right; margin: 10px; padding: 5px; width: 350px;"
| [[Zeroing Idiom]] Example:
|-
| <pre>xor eax, eax</pre>
|-
| Not only does this instruction get eliminated at the ROB, but it's actually encoded as just 2 bytes (<code>31 C0</code>) versus the 5 bytes for <code>{{x86|mov}} {{x86|eax}}, 0x0</code>, which is encoded as <code>b8 00 00 00 00</code> and is not eliminated.
|}

There are some exceptions that Skylake will not optimize, most dealing with [[signedness]]. [[sign extension|Sign-extended]] moves cannot be eliminated, and neither can moves zero-extended from 16-bit to 32/64-bit registers (note that 8-bit to 32/64-bit works). Likewise, in the other direction, no moves to 8/16-bit registers can be eliminated. A move of a register to itself is never eliminated.
When instructions use registers that are independent of their prior values, another optimization opportunity can be exploited. A second common optimization performed in Skylake around the same time is [[Zeroing Idioms]] elimination. A number of common zeroing idioms are recognized and consequently eliminated in much the same way as the move eliminations. Skylake recognizes instructions such as <code>{{x86|XOR}}</code>, <code>{{x86|PXOR}}</code>, and <code>{{x86|XORPS}}</code> as zeroing idioms when the [[source operand|source]] and [[destination operand|destination]] operands are the same. Those optimizations are performed during renaming at the same rate (4 µOPs per cycle), and the register is simply set to zero.
The [[ones idioms]] is another dependency-breaking idiom that can be optimized. The various {{x86|PCMPEQ|PCMPEQx}} instructions, which perform a packed comparison of a register with itself, always set all bits to one. In those cases, while the µOP still has to be executed, the instruction may be scheduled as soon as possible because all the dependencies are resolved.
===== Scheduler =====
[[File:skylake scheduler.svg|right|500px]]
The scheduler itself was increased by 50%, with up to 97 entries (from 64 in {{\\|Broadwell}}) being competitively shared between the two threads. Skylake continues with a unified design; this is in contrast to designs such as [[AMD]]'s {{amd|Zen|l=arch}}, which uses a split design with each scheduler holding different types of µOPs. The scheduler includes the two register files for integers and vectors. It's in those [[register files]] that output operand data is stored. In Skylake, the [[integer]] [[register file]] was also slightly increased from 160 entries to 180.
At this point µOPs are no longer fused and will be dispatched to the execution units independently. The scheduler holds the µOPs while they wait to be executed. A µOP could be waiting on an operand that has not yet arrived (e.g., fetched from memory or currently being calculated by another µOP) or because the execution unit it needs is busy. Once a µOP is ready, it is dispatched through its designated port. The scheduler will send the oldest ready µOP to be executed on each of the eight ports each cycle.
The scheduler had its ports rearranged to better balance various instructions. For example, the latency and throughput of divide and [[sqrt]] instructions were improved. The latency and throughput of [[floating point]] ADD, MUL, and FMA were made uniform at 4 cycles of latency with a throughput of 2 µOPs/clock. Likewise, the latency of {{x86|AES|AES instructions}} was significantly reduced from 7 cycles down to 4.
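The uniform 4-cycle latency / 2-per-clock throughput figures imply a simple performance bound for FP code: a single dependency chain is latency-limited, while enough independent chains become port-limited. A rough steady-state estimate (illustrative model only, ignoring scheduling and memory effects):

```python
def fma_cycles(n_ops, chains, latency=4, ports=2):
    """Steady-state cycle estimate for n_ops FMAs split across `chains`
    independent dependency chains on two FMA ports with 4-cycle latency."""
    longest = -(-n_ops // chains)        # ceil: ops in the longest chain
    return max(longest * latency,        # latency-bound (dependent ops)
               -(-n_ops // ports))       # throughput-bound (2 per clock)

# one chain of 8 dependent FMAs: 32 cycles; 8 independent chains: 4 cycles
```

This is why unrolling a reduction into several independent accumulators is so effective on this microarchitecture: it moves the code from the latency bound toward the throughput bound.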
====== Scheduler Ports & Execution Units ======

===== Retirement =====
Once a µOP executes, or in the case of fused µOPs once both µOPs have executed, it can be [[retired]]. {{\\|Haswell}} is able to commit up to four fused µOPs each cycle; Skylake has likely increased this, however no information was disclosed by Intel. Retirement happens [[in-order]] and releases any used resources, such as those used for tracking in the [[reorder buffer]].
==== Memory subsystem ====
[[File:skylake mem subsystem.svg|right|300px]]
Skylake's memory subsystem is in charge of load and store requests and their ordering. Since {{\\|Haswell}}, it's possible to sustain two memory reads (on ports 2 and 3) and one memory write (on port 4) each cycle. Each memory operation can be of any register size up to 256 bits. Skylake's memory subsystem has been improved. The store buffer has been increased by 14 entries over {{\\|Broadwell}} to 56, for a total of 128 simultaneous memory operations in flight, or roughly 60% of all µOPs. Special care was taken to reduce the penalty for page-split loads; previously, scenarios involving page-split loads were thought to be rarer than they actually are. This was addressed in Skylake, where page-split loads are now treated the same as other split loads: the page-split load penalty drops to 5 cycles from 100 cycles in {{\\|Broadwell}}. The average latency to forward a load to a store has also been improved, and stores that miss in the L1$ generate requests to the next-level L2$ much earlier in Skylake than before.
The L2 to L1 bandwidth in Skylake is the same as {{\\|Haswell}} at 64 bytes per cycle in either direction. Note that only one such operation can be done each cycle; i.e., the L1 can either receive data from the L2 or send data to the Load/Store buffers each cycle, but not both. The transfer rate from the L2$ to the L3$ has also been improved from one line every 4 cycles to one line every 2 cycles. The bandwidth from the level 2 cache to the shared level 3 is 32 bytes per cycle.
=== eDRAM architectural changes ===

The new eDRAM changes mean it's no longer architectural: it is capable of caching any data (including "unreachable memory", display engines, and effectively any memory transfer not bound by software restrictions) and is entirely invisible to software (with one exception noted later) in terms of coherency (note that no flushing is thus necessary to maintain coherency), ordering, or other organizational details. For optimal graphics performance, the graphics driver may decide to limit certain memory accesses to only the eDRAM, only the LLC, or both.
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
== Power ==

Speed Shift effectively eliminates the need for the OS to manage the P-states, though the OS does have the final say (unless special exceptions occur, such as thermal throttling). Intel calls this "autonomous P-state", allowing Speed Shift to kick in in a matter of just ~1 millisecond (whereas operating system-based P-state control can be as slow as 30 ms). Speed Shift reduces the time to reach peak frequency to around ~30 ms from over 100 ms with the previous OS-based implementation. While Speed Shift is capable of full-range shifts by default, the operating system can set the minimum QoS, maximum frequency, and power/performance hints when desired. The final result should be higher performance and especially higher responsiveness in power-constrained form factors.
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
==== Power of System (Psys) ====

** Idle power is reduced further
** C1 state power reduction (improved dynamic capacitance C<sub>dyn</sub>)

Overall, Skylake enjoys better performance per watt per core, achieving 8x the performance/watt of {{\\|Nehalem}}.
− | |||
− | |||
− | |||
== Clock domains == | == Clock domains == | ||
Line 686: | Line 643: | ||
</table> | </table> | ||
− | Note that core ratio has been increased to a [theoretical] x83 multiplier and the coarse-grain ratio was dropped from Skylake allowing a BCLK ratio to have granularity of 1 MHz increments with BCLK frequency of over 200 readily achievable. The | + | Note that core ratio has been increased to a [theoretical] x83 multiplier and the coarse-grain ratio was dropped from Skylake allowing a BCLK ratio to have granularity of 1 MHz increments with BCLK frequency of over 200 readily achievable. The FIVER was removed and the voltage control was given back to the motherboard manufacturers; i.e., voltage supplies can be entirely motherboard-controlled. Skylake also bumped the DDR ratio up to 4133 MT/s. |
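With the coarse straps gone, any fine-grained BCLK/multiplier pair can be dialed in. A small sketch enumerating exact settings for a target frequency (illustrative only; the 80-400 MHz BCLK window is an assumed plausible range, not an Intel specification):

```python
def oc_settings(target_mhz, max_mult=83):
    """List (bclk, multiplier) pairs that hit target_mhz exactly, using
    Skylake's 1 MHz BCLK granularity and theoretical x83 multiplier cap."""
    pairs = []
    for mult in range(8, max_mult + 1):
        if target_mhz % mult == 0:
            bclk = target_mhz // mult
            if 80 <= bclk <= 400:     # assumed plausible BCLK window
                pairs.append((bclk, mult))
    return pairs
```

For a 4500 MHz target this yields, among others, the stock-style 100 MHz x45 setting as well as BCLK-driven alternatives like 125 MHz x36.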
[[File:skylake bclk.png|left|300px]]

{{clear}}
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
== Die ==
=== Client Die ===
Skylake desktop and mobile processors come in [[2 cores|2]]- and [[4 cores|4]]-core configurations. Each variant has its own die. One of the most noticeable changes on die is the amount of die space allocated to the [[GPU]]. The major components of the die are:
* Memory Controller

==== System Agent ====
The System Agent (SA) contains the Image Processing Unit (IPU), the Display Engine (DE), the I/O bus, and various other shared functionality. Note that the mainstream desktop (i.e., [[quad-core]] die) does not have an IPU (the memory controller actually occupies a portion of where it would otherwise be).
{{clear}}

==== Core ====
Skylake client models come in either a 2-core or a 4-core setup.
: [[File:skylake core die (annotated).png|450px]]

==== Core Group ====
Client models come in groups of 2 or 4 cores. (Die sizes include the dark silicon space where the L3 ends.)
* 2-core group:
** ~8.91 mm x ~2.845 mm
** ~25.347 mm²
: [[File:skylake 2x core complex die.png|500px]]
* 4-core group:
** ~8.844 mm x ~5.694 mm
** ~50.354 mm²
: [[File:skylake 4x core complex die.png|500px]]
==== Integrated Graphics ====
The [[integrated graphics]] takes up the largest portion of the die. The normal [[dual-core]] and [[quad-core]] dies come with a 24 EU {{\\|Gen9}} GPU (with 12 units disabled on the low-end models).
{{clear}}

==== Dual-core ====
Die shot of the [[dual-core]] {{\\|Gen9|GT2}} Skylake processors. Those are found in mobile models, and entry-level/budget processors: | Die shot of the [[dual-core]] {{\\|Gen9|GT2}} Skylake processors. Those are found in mobile models, and entry-level/budget processors: | ||
Line 865: | Line 740: | ||
* 11 metal layers | * 11 metal layers | ||
* ~1,750,000,000 transistors | * ~1,750,000,000 transistors | ||
− | * ~9. | + | * ~9.57 mm x ~10.3 mm |
− | * ~ | + | * ~98.57 mm² die size |
* 2 CPU cores + 24 GPU EUs | * 2 CPU cores + 24 GPU EUs | ||
Line 874: | Line 749: | ||
: [[File:skylake (dual core) (annotated).png|650px]] | : [[File:skylake (dual core) (annotated).png|650px]] | ||
==== Quad-core ====
Die shot of the [[quad-core]] {{\\|Gen9|GT2}} Skylake processors. These are found in almost all mainstream desktop processors.
* [[14 nm process]]
* 11 metal layers
* ~122 mm² die size
* 4 CPU cores + 24 GPU EUs
: [[File:skylake (quad-core).png|650px]]
: [[File:skylake (quad-core) (annotated).png|650px]]
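As a quick sanity check, the die areas quoted above can be reproduced from the quoted width and height figures. This is a minimal sketch using only the approximate measurements listed in this section, so small rounding differences (well under 0.1 mm²) are expected:

```python
# Sanity-check the quoted die areas against the quoted dimensions.
# All numbers are the approximate figures from this article; minor
# mismatches come from rounding of the published measurements.
dies = {
    "2-core group": (8.91, 2.845, 25.347),
    "4-core group": (8.844, 5.694, 50.354),
    "dual-core die": (9.57, 10.3, 98.57),
}

for name, (width_mm, height_mm, quoted_mm2) in dies.items():
    computed = width_mm * height_mm
    # Allow ~0.1 mm² of slack for rounded inputs.
    assert abs(computed - quoted_mm2) < 0.1, (name, computed, quoted_mm2)
    print(f"{name}: {width_mm} mm x {height_mm} mm = "
          f"{computed:.3f} mm² (quoted ~{quoted_mm2} mm²)")
```

The quad-core figure (~122 mm²) is omitted because the article quotes only its area, not its dimensions.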

=== Server Die ===
Skylake Server class models consist of 3 different dies: Low Core Count (LCC), Medium Core Count (MCC), and High Core Count (HCC).

==== High Core Count (HCC) ====
* [[14 nm process]]
* [[28 cores]]
: [[File:skylake-ep-hcc die shot.png|650px]]
== Added instructions ==
'''{{x86|SGX}}''' - Software Guard Extensions

{| class="wikitable collapsible collapsed"
! Full list
|-
|
{{collist
| count = 4
| width = 650px
|
* {{x86|AEX}}
* {{x86|EACCEPT}}
* {{x86|EACCEPTCOPY}}
* {{x86|EADD}}
* {{x86|EAUG}}
* {{x86|EBLOCK}}
* {{x86|ECREATE}}
* {{x86|EDBGRD}}
* {{x86|EDBGWR}}
* {{x86|EENTER}}
* {{x86|EEXIT}}
* {{x86|EEXTEND}}
* {{x86|EGETKEY}}
* {{x86|EINIT}}
* {{x86|ELDB}}
* {{x86|ELDU}}
* {{x86|EMODPE}}
* {{x86|EMODPR}}
* {{x86|EMODT}}
* {{x86|EPA}}
* {{x86|EREMOVE}}
* {{x86|EREPORT}}
* {{x86|ERESUME}}
* {{x86|ETRACK}}
* {{x86|EWB}}
}}
|}

'''{{x86|MPX}}''' - Memory Protection Extensions

{| class="wikitable collapsible collapsed"
! Full list
|-
|
{{collist
| count = 4
| width = 650px
|
* {{x86|BNDCL}}
* {{x86|BNDCN}}
* {{x86|BNDCU}}
* {{x86|BNDLDX}}
* {{x86|BNDMK}}
* {{x86|BNDMOV}}
* {{x86|BNDSTX}}
}}
|}

'''{{x86|AVX-512}}''' - Advanced Vector Extensions 512. These instructions are only available on select high-end {{intel|Xeon}} models (codename '''SKX''').

{| class="wikitable collapsible collapsed"
! Full list
|-
|
{{collist
| count = 5
| width = 850px
|
* {{x86|VADDPD}}
* {{x86|VADDPS}}
* {{x86|VADDSD}}
* {{x86|VADDSS}}
* {{x86|VALIGND}}
* {{x86|VALIGNQ}}
* {{x86|VANDNPD}}
* {{x86|VANDNPS}}
* {{x86|VANDPD}}
* {{x86|VANDPS}}
* {{x86|VBLENDMPD}}
* {{x86|VBLENDMPS}}
* {{x86|VBROADCASTF32X2}}
* {{x86|VBROADCASTF32X4}}
* {{x86|VBROADCASTF32X8}}
* {{x86|VBROADCASTF64X2}}
* {{x86|VBROADCASTF64X4}}
* {{x86|VBROADCASTI32X2}}
* {{x86|VBROADCASTI32X4}}
* {{x86|VBROADCASTI32X8}}
* {{x86|VBROADCASTI64X2}}
* {{x86|VBROADCASTI64X4}}
* {{x86|VBROADCASTSD}}
* {{x86|VBROADCASTSS}}
* {{x86|VCMPPD}}
* {{x86|VCMPPS}}
* {{x86|VCMPSD}}
* {{x86|VCMPSS}}
* {{x86|VCOMISD}}
* {{x86|VCOMISS}}
* {{x86|VCOMPRESSPD}}
* {{x86|VCOMPRESSPS}}
* {{x86|VCVTDQ2PD}}
* {{x86|VCVTDQ2PS}}
* {{x86|VCVTPD2DQ}}
* {{x86|VCVTPD2PS}}
* {{x86|VCVTPD2QQ}}
* {{x86|VCVTPD2UDQ}}
* {{x86|VCVTPD2UQQ}}
* {{x86|VCVTPH2PS}}
* {{x86|VCVTPS2DQ}}
* {{x86|VCVTPS2PD}}
* {{x86|VCVTPS2PH}}
* {{x86|VCVTPS2QQ}}
* {{x86|VCVTPS2UDQ}}
* {{x86|VCVTPS2UQQ}}
* {{x86|VCVTQQ2PD}}
* {{x86|VCVTQQ2PS}}
* {{x86|VCVTSD2SI}}
* {{x86|VCVTSD2SS}}
* {{x86|VCVTSD2USI}}
* {{x86|VCVTSI2SD}}
* {{x86|VCVTSI2SS}}
* {{x86|VCVTSS2SD}}
* {{x86|VCVTSS2SI}}
* {{x86|VCVTSS2USI}}
* {{x86|VCVTTPD2DQ}}
* {{x86|VCVTTPD2QQ}}
* {{x86|VCVTTPD2UDQ}}
* {{x86|VCVTTPD2UQQ}}
* {{x86|VCVTTPS2DQ}}
* {{x86|VCVTTPS2QQ}}
* {{x86|VCVTTPS2UDQ}}
* {{x86|VCVTTPS2UQQ}}
* {{x86|VCVTTSD2SI}}
* {{x86|VCVTTSD2USI}}
* {{x86|VCVTTSS2SI}}
* {{x86|VCVTTSS2USI}}
* {{x86|VCVTUDQ2PD}}
* {{x86|VCVTUDQ2PS}}
* {{x86|VCVTUQQ2PD}}
* {{x86|VCVTUQQ2PS}}
* {{x86|VCVTUSI2SD}}
* {{x86|VCVTUSI2SS}}
* {{x86|VDBPSADBW}}
* {{x86|VDIVPD}}
* {{x86|VDIVPS}}
* {{x86|VDIVSD}}
* {{x86|VDIVSS}}
* {{x86|VEXP2PD}}
* {{x86|VEXP2PS}}
* {{x86|VEXPANDPD}}
* {{x86|VEXPANDPS}}
* {{x86|VEXTRACTF32X4}}
* {{x86|VEXTRACTF32X8}}
* {{x86|VEXTRACTF64X2}}
* {{x86|VEXTRACTF64X4}}
* {{x86|VEXTRACTI32X4}}
* {{x86|VEXTRACTI32X8}}
* {{x86|VEXTRACTI64X2}}
* {{x86|VEXTRACTI64X4}}
* {{x86|VEXTRACTPS}}
* {{x86|VFIXUPIMMPD}}
* {{x86|VFIXUPIMMPS}}
* {{x86|VFIXUPIMMSD}}
* {{x86|VFIXUPIMMSS}}
* {{x86|VFMADD132PD}}
* {{x86|VFMADD132PS}}
* {{x86|VFMADD132SD}}
* {{x86|VFMADD132SS}}
* {{x86|VFMADD213PD}}
* {{x86|VFMADD213PS}}
* {{x86|VFMADD213SD}}
* {{x86|VFMADD213SS}}
* {{x86|VFMADD231PD}}
* {{x86|VFMADD231PS}}
* {{x86|VFMADD231SD}}
* {{x86|VFMADD231SS}}
* {{x86|VFMADDSUB132PD}}
* {{x86|VFMADDSUB132PS}}
* {{x86|VFMADDSUB213PD}}
* {{x86|VFMADDSUB213PS}}
* {{x86|VFMADDSUB231PD}}
* {{x86|VFMADDSUB231PS}}
* {{x86|VFMSUB132PD}}
* {{x86|VFMSUB132PS}}
* {{x86|VFMSUB132SD}}
* {{x86|VFMSUB132SS}}
* {{x86|VFMSUB213PD}}
* {{x86|VFMSUB213PS}}
* {{x86|VFMSUB213SD}}
* {{x86|VFMSUB213SS}}
* {{x86|VFMSUB231PD}}
* {{x86|VFMSUB231PS}}
* {{x86|VFMSUB231SD}}
* {{x86|VFMSUB231SS}}
* {{x86|VFMSUBADD132PD}}
* {{x86|VFMSUBADD132PS}}
* {{x86|VFMSUBADD213PD}}
* {{x86|VFMSUBADD213PS}}
* {{x86|VFMSUBADD231PD}}
* {{x86|VFMSUBADD231PS}}
* {{x86|VFNMADD132PD}}
* {{x86|VFNMADD132PS}}
* {{x86|VFNMADD132SD}}
* {{x86|VFNMADD132SS}}
* {{x86|VFNMADD213PD}}
* {{x86|VFNMADD213PS}}
* {{x86|VFNMADD213SD}}
* {{x86|VFNMADD213SS}}
* {{x86|VFNMADD231PD}}
* {{x86|VFNMADD231PS}}
* {{x86|VFNMADD231SD}}
* {{x86|VFNMADD231SS}}
* {{x86|VFNMSUB132PD}}
* {{x86|VFNMSUB132PS}}
* {{x86|VFNMSUB132SD}}
* {{x86|VFNMSUB132SS}}
* {{x86|VFNMSUB213PD}}
* {{x86|VFNMSUB213PS}}
* {{x86|VFNMSUB213SD}}
* {{x86|VFNMSUB213SS}}
* {{x86|VFNMSUB231PD}}
* {{x86|VFNMSUB231PS}}
* {{x86|VFNMSUB231SD}}
* {{x86|VFNMSUB231SS}}
* {{x86|VFPCLASSPD}}
* {{x86|VFPCLASSPS}}
* {{x86|VFPCLASSSD}}
* {{x86|VFPCLASSSS}}
* {{x86|VGATHERDPD}}
* {{x86|VGATHERDPS}}
* {{x86|VGATHERPF0DPD}}
* {{x86|VGATHERPF0DPS}}
* {{x86|VGATHERPF0QPD}}
* {{x86|VGATHERPF0QPS}}
* {{x86|VGATHERPF1DPD}}
* {{x86|VGATHERPF1DPS}}
* {{x86|VGATHERPF1QPD}}
* {{x86|VGATHERPF1QPS}}
* {{x86|VGATHERQPD}}
* {{x86|VGATHERQPS}}
* {{x86|VGETEXPPD}}
* {{x86|VGETEXPPS}}
* {{x86|VGETEXPSD}}
* {{x86|VGETEXPSS}}
* {{x86|VGETMANTPD}}
* {{x86|VGETMANTPS}}
* {{x86|VGETMANTSD}}
* {{x86|VGETMANTSS}}
* {{x86|VINSERTF32X4}}
* {{x86|VINSERTF32X8}}
* {{x86|VINSERTF64X2}}
* {{x86|VINSERTF64X4}}
* {{x86|VINSERTI32X4}}
* {{x86|VINSERTI32X8}}
* {{x86|VINSERTI64X2}}
* {{x86|VINSERTI64X4}}
* {{x86|VINSERTPS}}
* {{x86|VMAXPD}}
* {{x86|VMAXPS}}
* {{x86|VMAXSD}}
* {{x86|VMAXSS}}
* {{x86|VMINPD}}
* {{x86|VMINPS}}
* {{x86|VMINSD}}
* {{x86|VMINSS}}
* {{x86|VMOVAPD}}
* {{x86|VMOVAPS}}
* {{x86|VMOVD}}
* {{x86|VMOVDDUP}}
* {{x86|VMOVDQA32}}
* {{x86|VMOVDQA64}}
* {{x86|VMOVDQU16}}
* {{x86|VMOVDQU32}}
* {{x86|VMOVDQU64}}
* {{x86|VMOVDQU8}}
* {{x86|VMOVHLPS}}
* {{x86|VMOVHPD}}
* {{x86|VMOVHPS}}
* {{x86|VMOVLHPS}}
* {{x86|VMOVLPD}}
* {{x86|VMOVLPS}}
* {{x86|VMOVNTDQ}}
* {{x86|VMOVNTDQA}}
* {{x86|VMOVNTPD}}
* {{x86|VMOVNTPS}}
* {{x86|VMOVQ}}
* {{x86|VMOVSD}}
* {{x86|VMOVSHDUP}}
* {{x86|VMOVSLDUP}}
* {{x86|VMOVSS}}
* {{x86|VMOVUPD}}
* {{x86|VMOVUPS}}
* {{x86|VMULPD}}
* {{x86|VMULPS}}
* {{x86|VMULSD}}
* {{x86|VMULSS}}
* {{x86|VORPD}}
* {{x86|VORPS}}
* {{x86|VPABSB}}
* {{x86|VPABSD}}
* {{x86|VPABSQ}}
* {{x86|VPABSW}}
* {{x86|VPACKSSDW}}
* {{x86|VPACKSSWB}}
* {{x86|VPACKUSDW}}
* {{x86|VPACKUSWB}}
* {{x86|VPADDB}}
* {{x86|VPADDD}}
* {{x86|VPADDQ}}
* {{x86|VPADDSB}}
* {{x86|VPADDSW}}
* {{x86|VPADDUSB}}
* {{x86|VPADDUSW}}
* {{x86|VPADDW}}
* {{x86|VPALIGNR}}
* {{x86|VPANDD}}
* {{x86|VPANDND}}
* {{x86|VPANDNQ}}
* {{x86|VPANDQ}}
* {{x86|VPAVGB}}
* {{x86|VPAVGW}}
* {{x86|VPBLENDMB}}
* {{x86|VPBLENDMD}}
* {{x86|VPBLENDMQ}}
* {{x86|VPBLENDMW}}
* {{x86|VPBROADCASTB}}
* {{x86|VPBROADCASTD}}
* {{x86|VPBROADCASTMB2Q}}
* {{x86|VPBROADCASTMW2D}}
* {{x86|VPBROADCASTQ}}
* {{x86|VPBROADCASTW}}
* {{x86|VPCMPB}}
* {{x86|VPCMPD}}
* {{x86|VPCMPEQB}}
* {{x86|VPCMPEQD}}
* {{x86|VPCMPEQQ}}
* {{x86|VPCMPEQW}}
* {{x86|VPCMPGTB}}
* {{x86|VPCMPGTD}}
* {{x86|VPCMPGTQ}}
* {{x86|VPCMPGTW}}
* {{x86|VPCMPQ}}
* {{x86|VPCMPUB}}
* {{x86|VPCMPUD}}
* {{x86|VPCMPUQ}}
* {{x86|VPCMPUW}}
* {{x86|VPCMPW}}
* {{x86|VPCOMPRESSD}}
* {{x86|VPCOMPRESSQ}}
* {{x86|VPCONFLICTD}}
* {{x86|VPCONFLICTQ}}
* {{x86|VPERMB}}
* {{x86|VPERMD}}
* {{x86|VPERMI2B}}
* {{x86|VPERMI2D}}
* {{x86|VPERMI2PD}}
* {{x86|VPERMI2PS}}
* {{x86|VPERMI2Q}}
* {{x86|VPERMI2W}}
* {{x86|VPERMILPD}}
* {{x86|VPERMILPS}}
* {{x86|VPERMPD}}
* {{x86|VPERMPS}}
* {{x86|VPERMQ}}
* {{x86|VPERMT2B}}
* {{x86|VPERMT2D}}
* {{x86|VPERMT2PD}}
* {{x86|VPERMT2PS}}
* {{x86|VPERMT2Q}}
* {{x86|VPERMT2W}}
* {{x86|VPERMW}}
* {{x86|VPEXPANDD}}
* {{x86|VPEXPANDQ}}
* {{x86|VPEXTRB}}
* {{x86|VPEXTRD}}
* {{x86|VPEXTRQ}}
* {{x86|VPEXTRW}}
* {{x86|VPGATHERDD}}
* {{x86|VPGATHERDQ}}
* {{x86|VPGATHERQD}}
* {{x86|VPGATHERQQ}}
* {{x86|VPINSRB}}
* {{x86|VPINSRD}}
* {{x86|VPINSRQ}}
* {{x86|VPINSRW}}
* {{x86|VPLZCNTD}}
* {{x86|VPLZCNTQ}}
* {{x86|VPMADD52HUQ}}
* {{x86|VPMADD52LUQ}}
* {{x86|VPMADDUBSW}}
* {{x86|VPMADDWD}}
* {{x86|VPMAXSB}}
* {{x86|VPMAXSD}}
* {{x86|VPMAXSQ}}
* {{x86|VPMAXSW}}
* {{x86|VPMAXUB}}
* {{x86|VPMAXUD}}
* {{x86|VPMAXUQ}}
* {{x86|VPMAXUW}}
* {{x86|VPMINSB}}
* {{x86|VPMINSD}}
* {{x86|VPMINSQ}}
* {{x86|VPMINSW}}
* {{x86|VPMINUB}}
* {{x86|VPMINUD}}
* {{x86|VPMINUQ}}
* {{x86|VPMINUW}}
* {{x86|VPMOVB2M}}
* {{x86|VPMOVD2M}}
* {{x86|VPMOVDB}}
* {{x86|VPMOVDW}}
* {{x86|VPMOVM2B}}
* {{x86|VPMOVM2D}}
* {{x86|VPMOVM2Q}}
* {{x86|VPMOVM2W}}
* {{x86|VPMOVQ2M}}
* {{x86|VPMOVQB}}
* {{x86|VPMOVQD}}
* {{x86|VPMOVQW}}
* {{x86|VPMOVSDB}}
* {{x86|VPMOVSDW}}
* {{x86|VPMOVSQB}}
* {{x86|VPMOVSQD}}
* {{x86|VPMOVSQW}}
* {{x86|VPMOVSWB}}
* {{x86|VPMOVSXBD}}
* {{x86|VPMOVSXBQ}}
* {{x86|VPMOVSXBW}}
* {{x86|VPMOVSXDQ}}
* {{x86|VPMOVSXWD}}
* {{x86|VPMOVSXWQ}}
* {{x86|VPMOVUSDB}}
* {{x86|VPMOVUSDW}}
* {{x86|VPMOVUSQB}}
* {{x86|VPMOVUSQD}}
* {{x86|VPMOVUSQW}}
* {{x86|VPMOVUSWB}}
* {{x86|VPMOVW2M}}
* {{x86|VPMOVWB}}
* {{x86|VPMOVZXBD}}
* {{x86|VPMOVZXBQ}}
* {{x86|VPMOVZXBW}}
* {{x86|VPMOVZXDQ}}
* {{x86|VPMOVZXWD}}
* {{x86|VPMOVZXWQ}}
* {{x86|VPMULDQ}}
* {{x86|VPMULHRSW}}
* {{x86|VPMULHUW}}
* {{x86|VPMULHW}}
* {{x86|VPMULLD}}
* {{x86|VPMULLQ}}
* {{x86|VPMULLW}}
* {{x86|VPMULTISHIFTQB}}
* {{x86|VPMULUDQ}}
* {{x86|VPORD}}
* {{x86|VPORQ}}
* {{x86|VPROLD}}
* {{x86|VPROLQ}}
* {{x86|VPROLVD}}
* {{x86|VPROLVQ}}
* {{x86|VPRORD}}
* {{x86|VPRORQ}}
* {{x86|VPRORVD}}
* {{x86|VPRORVQ}}
* {{x86|VPSADBW}}
* {{x86|VPSCATTERDD}}
* {{x86|VPSCATTERDQ}}
* {{x86|VPSCATTERQD}}
* {{x86|VPSCATTERQQ}}
* {{x86|VPSHUFB}}
* {{x86|VPSHUFD}}
* {{x86|VPSHUFHW}}
* {{x86|VPSHUFLW}}
* {{x86|VPSLLD}}
* {{x86|VPSLLDQ}}
* {{x86|VPSLLQ}}
* {{x86|VPSLLVD}}
* {{x86|VPSLLVQ}}
* {{x86|VPSLLVW}}
* {{x86|VPSLLW}}
* {{x86|VPSRAD}}
* {{x86|VPSRAQ}}
* {{x86|VPSRAVD}}
* {{x86|VPSRAVQ}}
* {{x86|VPSRAVW}}
* {{x86|VPSRAW}}
* {{x86|VPSRLD}}
* {{x86|VPSRLDQ}}
* {{x86|VPSRLQ}}
* {{x86|VPSRLVD}}
* {{x86|VPSRLVQ}}
* {{x86|VPSRLVW}}
* {{x86|VPSRLW}}
* {{x86|VPSUBB}}
* {{x86|VPSUBD}}
* {{x86|VPSUBQ}}
* {{x86|VPSUBSB}}
* {{x86|VPSUBSW}}
* {{x86|VPSUBUSB}}
* {{x86|VPSUBUSW}}
* {{x86|VPSUBW}}
* {{x86|VPTERNLOGD}}
* {{x86|VPTERNLOGQ}}
* {{x86|VPTESTMB}}
* {{x86|VPTESTMD}}
* {{x86|VPTESTMQ}}
* {{x86|VPTESTMW}}
* {{x86|VPTESTNMB}}
* {{x86|VPTESTNMD}}
* {{x86|VPTESTNMQ}}
* {{x86|VPTESTNMW}}
* {{x86|VPUNPCKHBW}}
* {{x86|VPUNPCKHDQ}}
* {{x86|VPUNPCKHQDQ}}
* {{x86|VPUNPCKHWD}}
* {{x86|VPUNPCKLBW}}
* {{x86|VPUNPCKLDQ}}
* {{x86|VPUNPCKLQDQ}}
* {{x86|VPUNPCKLWD}}
* {{x86|VPXORD}}
* {{x86|VPXORQ}}
* {{x86|VRANGEPD}}
* {{x86|VRANGEPS}}
* {{x86|VRANGESD}}
* {{x86|VRANGESS}}
* {{x86|VRCP14PD}}
* {{x86|VRCP14PS}}
* {{x86|VRCP14SD}}
* {{x86|VRCP14SS}}
* {{x86|VRCP28PD}}
* {{x86|VRCP28PS}}
* {{x86|VRCP28SD}}
* {{x86|VRCP28SS}}
* {{x86|VREDUCEPD}}
* {{x86|VREDUCEPS}}
* {{x86|VREDUCESD}}
* {{x86|VREDUCESS}}
* {{x86|VRNDSCALEPD}}
* {{x86|VRNDSCALEPS}}
* {{x86|VRNDSCALESD}}
* {{x86|VRNDSCALESS}}
* {{x86|VRSQRT14PD}}
* {{x86|VRSQRT14PS}}
* {{x86|VRSQRT14SD}}
* {{x86|VRSQRT14SS}}
* {{x86|VRSQRT28PD}}
* {{x86|VRSQRT28PS}}
* {{x86|VRSQRT28SD}}
* {{x86|VRSQRT28SS}}
* {{x86|VSCALEFPD}}
* {{x86|VSCALEFPS}}
* {{x86|VSCALEFSD}}
* {{x86|VSCALEFSS}}
* {{x86|VSCATTERDPD}}
* {{x86|VSCATTERDPS}}
* {{x86|VSCATTERPF0DPD}}
* {{x86|VSCATTERPF0DPS}}
* {{x86|VSCATTERPF0QPD}}
* {{x86|VSCATTERPF0QPS}}
* {{x86|VSCATTERPF1DPD}}
* {{x86|VSCATTERPF1DPS}}
* {{x86|VSCATTERPF1QPD}}
* {{x86|VSCATTERPF1QPS}}
* {{x86|VSCATTERQPD}}
* {{x86|VSCATTERQPS}}
* {{x86|VSHUFF32X4}}
* {{x86|VSHUFF64X2}}
* {{x86|VSHUFI32X4}}
* {{x86|VSHUFI64X2}}
* {{x86|VSHUFPD}}
* {{x86|VSHUFPS}}
* {{x86|VSQRTPD}}
* {{x86|VSQRTPS}}
* {{x86|VSQRTSD}}
* {{x86|VSQRTSS}}
* {{x86|VSUBPD}}
* {{x86|VSUBPS}}
* {{x86|VSUBSD}}
* {{x86|VSUBSS}}
* {{x86|VUCOMISD}}
* {{x86|VUCOMISS}}
* {{x86|VUNPCKHPD}}
* {{x86|VUNPCKHPS}}
* {{x86|VUNPCKLPD}}
* {{x86|VUNPCKLPS}}
* {{x86|VXORPD}}
* {{x86|VXORPS}}
}}
|}

== Cores ==
{{empty section}}
== All Skylake Chips ==
created and tagged accordingly.
Missing a chip? Please dump its name here: http://en.wikichip.org/wiki/WikiChip:wanted_chips
-->
<table class="wikitable sortable">
<tr><th colspan="12" style="background:#D6D6FF;">Skylake Chips</th></tr>
<tr><th colspan="9">Main processor</th><th colspan="3">IGP</th></tr>
<tr><th>Model</th><th>µarch</th><th>Platform</th><th>Core</th><th>Launched</th><th>SDP</th><th>TDP</th><th>Freq</th><th>Max Mem</th><th>Name</th><th>Freq</th><th>Max Freq</th></tr>
{{#ask: [[Category:microprocessor models by intel]] [[instance of::microprocessor]] [[microarchitecture::Skylake]]
|?full page name
|?model number
|?microarchitecture
|?platform
|?core name
|?first launched
|?sdp
|?tdp
|?base frequency#GHz
|?max memory#GB
|?integrated gpu
|?integrated gpu base frequency
|?integrated gpu max frequency
|format=template
|template=proc table 2
|searchlabel=
|userparam=13
|mainlabel=-
}}
<tr><th colspan="12">Count: {{#ask:[[Category:microprocessor models by intel]][[instance of::microprocessor]][[microarchitecture::Skylake]]|format=count}}</th></tr>
</table>
== References ==
* [[:File:Overclocking 6th Generation Intel® Core™ Processors.pdf|Overclocking 6th Generation Intel® Core™ Processors]]
== See also ==
* AMD {{amd|Zen}}