|stages max=19
|isa=x86-64
|extension=MOVBE
|extension 2=MMX
|extension 3=SSE
|extension 22=TXT
|extension 23=TSX
|extension 24=RDSEED
|extension 25=ADCX
|extension 26=PREFETCHW
|extension 27=CLFLUSHOPT
|extension 28=XSAVE
|extension 29=SGX
|extension 30=MPX
|l1i=32 KiB
|l1i per=core
 
{| class="wikitable"
 
{| class="wikitable"
 
|-
 
|-
! Core !! Abbrev !! Platform !! Target
+
! Core !! Abbrev !! Target
 
|-
 
|-
| {{intel|Skylake Y|l=core}} || SKL-Y || || 2-in-1s detachable, tablets, and computer sticks
+
| {{intel|Skylake Y|l=core}} || SKL-Y || 2-in-1s detachable, tablets, and computer sticks
 
|-
 
|-
| {{intel|Skylake U|l=core}} || SKL-U || || Light notebooks, portable All-in-Ones (AiOs), Minis, and conference room
+
| {{intel|Skylake U|l=core}} || SKL-U || Light notebooks, portable All-in-Ones (AiOs), Minis, and conference room
 
|-
 
|-
| {{intel|Skylake H|l=core}} || SKL-H || || Ultimate mobile performance, mobile workstations
+
| {{intel|Skylake H|l=core}} || SKL-H || Ultimate mobile performance, mobile workstations
 
|-
 
|-
| {{intel|Skylake S|l=core}} || SKL-S || || Desktop performance to value, AiOs, and minis
+
| {{intel|Skylake S|l=core}} || SKL-S || Desktop performance to value, AiOs, and minis
 
|-
 
|-
| {{intel|Skylake DT|l=core}} || SKL-DT || {{intel|Greenlow|l=platform}} || Workstations & entry-level servers
+
| {{intel|Skylake DT|l=core}} || SKL-DT || Workstations & entry-level servers
 
|}
 
|}
  
 
| rowspan="4" | [[Microsoft]] || rowspan="4" | Windows || style="background-color: #ffdad6;" | Windows Vista || No Support
 
| rowspan="4" | [[Microsoft]] || rowspan="4" | Windows || style="background-color: #ffdad6;" | Windows Vista || No Support
 
|-
 
|-
| style="background-color: #d6ffd8;" | Windows 7 || rowspan="2" | Support ends July 2018
+
| style="background-color: #d6ffd8;" | Windows 7 || rowspan="2" | Support ends July 2017
 
|-
 
|-
 
| style="background-color: #d6ffd8;" | Windows 8.1  
 
| style="background-color: #d6ffd8;" | Windows 8.1  
 
**** {{intel|Skylake Y|l=core}} and Skylake U cores have the chipset in the same package (simplified {{intel|on Package I/O|OPIO}})
**** Increase in transfer rate from 5.0 GT/s to 8.0 GT/s per lane (~3.93 GB/s, up from 2 GB/s)
**** Limits motherboard trace design to 7 inches max from the CPU to the chipset (down from 8)
** PCIe & DMI upgraded to Gen3
** More I/O (configurable as PCIe/SATA/USB3)
** Lower-power I/O (eMMC, UFS, SDXC)
** CSI-2 for the integrated IPU (mobile SKUs)
** Intel Sensor Solution Hub integration
** Larger Line Fill Buffer?

* [[System Agent]]
 
* Core
** Front End
*** Larger legacy pipeline delivery (5 µOPs, up from 4)
**** Another simple decoder has been added.
*** Allocation Queue (IDQ)
**** Larger delivery (6 µOPs, up from 4)
**** 2.28x larger buffer (64 entries/thread, up from 56 shared between two threads)
 
** DirectX 12, OpenCL 2.0, OpenGL 4.4
** Up to 24 EUs for GT2 (same as {{\\|Haswell}}); 48 EUs for GT3, and up to 72 EUs on {{intel|Iris Pro Graphics}}
*** 384 GFLOPS @ 1 GHz (GT2)
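
The peak-rate figure above can be reproduced from the Gen9 EU organization: each EU has two 4-wide SIMD FPUs, and each lane can retire a fused multiply-add (2 FLOPs) per cycle. A minimal sketch of that arithmetic; the 1.1 GHz clock in the last line is an illustrative assumption chosen because it reproduces the 1,267.2 GFLOPS (SP) quoted later for the highest-end part:

<syntaxhighlight lang="python">
# Peak single-precision FLOPS for a Gen9 configuration:
# 2 SIMD-4 FPUs per EU, each lane doing a fused multiply-add
# (2 FLOPs) per cycle -> 16 SP FLOPs per EU per clock.
SP_FLOPS_PER_EU_PER_CLOCK = 2 * 4 * 2

def gen9_peak_gflops(eus, clock_ghz):
    """Peak single-precision GFLOPS; half-precision is double this."""
    return eus * SP_FLOPS_PER_EU_PER_CLOCK * clock_ghz

print(gen9_peak_gflops(24, 1.0))            # GT2 @ 1 GHz    ->  384.0
print(gen9_peak_gflops(72, 1.0))            # GT4 @ 1 GHz    -> 1152.0
print(round(gen9_peak_gflops(72, 1.1), 1))  # GT4e @ 1.1 GHz -> 1267.2 (SP)
</syntaxhighlight>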
  
 
==== CPU changes ====
* Like Haswell, most general-purpose ALU operations execute at up to 4 ops/cycle for 8-, 32- and 64-bit registers (16-bit throughput varies per op and can be 4, 3.5, or 2 ops/cycle).
* MOVSX and MOVZX have a throughput of 4 ops/cycle for the 16→32 and 32→64 forms, in addition to Haswell's 8→32, 8→64 and 16→64 bit forms.
* ADC and SBB are a single µOP (like Broadwell), down from 2 in Haswell, with a throughput of 1 op/cycle, or 2 ops/cycle if not bottlenecked by one long dependency chain, same as Haswell.
* Vector moves have a throughput of 4 ops/cycle (improved move elimination for nothing-but-move microbenchmarks).
* vPCMPGTx on the same register is recognized as a zeroing idiom (4 ops/cycle, no execution unit) like vPXORxx and vPSUBx zeroing.
* Vector ALU ops are often "standardized" to a latency of 4; for example, vADDPS and vMULPS had latencies of 3 and 5 in Haswell (both 3 in Broadwell), and both are now 4.
* Fused multiply-add ops have a latency of 4 cycles and a throughput of 0.5 cycles/op, improved from a 5-cycle latency.
* Throughput of vADDps, vSUBps, vCMPps, vMAXps, and their scalar and double analogs is increased to 2 ops/cycle. The lower-latency SIMD FP add unit on port 1 was removed in favour of running all FP math on the FMA units.
* Throughput of vPSLxx and vPSRxx with immediate (i.e. fixed vector shifts) is increased to 2 ops/cycle, along with the VPSxxVx variable shifts.
 
* Throughput of vANDps, vANDNps, vORps, vXORps, their scalar and double analogs, vPADDx, and vPSUBx is increased to 3 ops/cycle.
* vDIVPD and vSQRTPD have approximately twice the throughput: from 8 to 4 and from 28 to 12 cycles/op, respectively.
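
One practical consequence of these latency/throughput pairs: to keep both FMA units busy in a reduction (e.g. a dot product), a loop needs latency ÷ issue-interval independent accumulators. A rough sketch of that reasoning, not tied to any particular compiler or library:

<syntaxhighlight lang="python">
import math

def accumulators_needed(latency_cycles, cycles_per_op):
    """Independent dependency chains required to issue one op every
    `cycles_per_op` cycles when each op takes `latency_cycles` to finish."""
    return math.ceil(latency_cycles / cycles_per_op)

# Skylake FMA: latency 4, throughput 0.5 cycles/op (two ports) -> 8 chains
print(accumulators_needed(4, 0.5))  # 8
# Haswell FMA: latency 5, same throughput -> 10 chains
print(accumulators_needed(5, 0.5))  # 10
</syntaxhighlight>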
 
**** fixed partition
*** 1G page translations:
**** 4 entries; 4-way set associative
 
**** fixed partition
** STLB
 
The Skylake [[system on a chip]] consists of five major components: CPU cores, [[last level cache|LLC]], ring interconnect, system agent, and [[integrated graphics]]. The image shown on the right, presented by Intel at the Intel Developer Forum in 2015, represents a hypothetical model incorporating all the features Skylake has to offer (i.e. a [[superset]] of features). Skylake features an improved core (see [[#Pipeline|§ Pipeline]]) with higher performance per watt and higher performance per clock. The number of cores depends on the model, but mainstream mobile models are typically [[dual-core]] while mainstream desktop models are typically [[quad-core]], with dual-core desktop models still offered at the value end (e.g. {{intel|Celeron}}). Accompanying the cores is the LLC ([[last level cache]], or [[L3$]] from the CPU's perspective). On mainstream parts the LLC consists of 2 MiB per core, with lower amounts for value models. Connecting the cores together is the ring interconnect, which extends to the GPU and the system agent as well. Intel further optimized the ring in Skylake for lower power and higher bandwidth.
  
Accompanying the cores is the {{\\|Gen9}} [[integrated graphics]] unit, which comes in a number of different tiers ranging from just 12 execution units (used in the ultra-low-power models) all the way to GT4 ({{\\|gen9#Scalability|Gen9 § Pipeline}}) with 72 execution units, boasting a peak performance of up to 2,534.4 GFLOPS (HF) / 1,267.2 GFLOPS (SP) on the highest-end workstation model. The two highest-tier models are also accompanied by dedicated [[eDRAM]] ranging from 64 to 128 MiB in capacity. The eDRAM is packaged along with the SoC in the same package.
  
 
On the other side is the {{intel|System Agent}} (SA) which houses the various functionality that's not directly related to the cores or graphics. Skylake features an upgraded [[integrated memory controller]] (IMC) with most mainstream models supporting faster memory and dual-channel [[DDR4]]. The SA in Skylake also includes the [[Display Controller]] which now supports higher resolution displays with up to three displays for all mainstream models.
==== Front-end ====
The front-end is tasked with the challenge of fetching the complex [[x86]] instructions from memory, decoding them, and delivering them to the execution units. In other words, the front-end needs to be able to consistently deliver enough [[µOPs]] from the instruction code stream to keep the back-end busy. When the back-end is not being fully utilized, the core is not reaching its full performance. A poorly performing front-end translates directly into a poorly performing core. This challenge is further complicated by various redirections such as branches and by the complex nature of the [[x86]] instructions themselves.
  
 
===== Fetch & pre-decoding =====
On their first pass, instructions should have already been prefetched from the [[L2 cache]] and into the [[L1 cache]]. The L1 is a 32 [[KiB]], 8-way set associative cache, identical in size and organization to {{intel|microarchitectures|previous generations}}. Skylake fetches instructions through a 16-byte fetch window, a window size that has not changed in a number of generations; up to 16 bytes of code can be fetched each cycle. Note that the fetcher is shared evenly between the two threads, so that each thread gets every other cycle. At this point the instructions are still [[macro-ops]] (i.e. variable-length [[x86]] architectural instructions). Instructions are brought into the pre-decode buffer for initial preparation.
  
 
[[File:skylake fetch.svg|left|300px]]
  
[[x86]] instructions are complex, variable length, have inconsistent encoding, and may contain multiple operations. At the pre-decode buffer, the instruction boundaries get detected and marked. This is a fairly difficult task because each instruction can vary from a single byte all the way up to fifteen. Moreover, determining the length requires inspecting a couple of bytes of the instruction. In addition to boundary marking, prefixes are also decoded and checked for various properties such as branches. As with previous microarchitectures, the pre-decoder has a [[throughput]] of 6 [[macro-ops]] per cycle or until all 16 bytes are consumed, whichever happens first. Note that the pre-decoder will not load a new 16-byte block until the previous block has been fully exhausted. For example, suppose a new block was loaded, resulting in 7 instructions. In the first cycle, 6 instructions will be processed and a whole second cycle will be spent on that last instruction, producing a much lower throughput of 3.5 instructions per cycle, which is considerably less than optimal. Likewise, if the 16-byte block contained just 4 instructions plus 1 byte of a 5th instruction, the first 4 instructions will be processed in the first cycle and a second cycle will be required for the last instruction, producing an average throughput of 2.5 instructions per cycle. Note that there is a special case for {{x86|length-changing prefix|length-changing prefixes}} (LCPs) which incur additional pre-decoding costs. Real-world instructions are often less than 4 bytes long, which usually results in a good rate.
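
The 3.5 and 2.5 instructions-per-cycle figures above follow from two rules: at most 6 instructions are marked per cycle, and an instruction straddling into the next 16-byte block costs an extra cycle. A toy model of just those two rules (a deliberate simplification; real pre-decode has further corner cases such as LCPs):

<syntaxhighlight lang="python">
import math

def block_cycles(whole_instrs, straddler=False, width=6):
    """Cycles to drain one 16-byte block: up to `width` instruction
    boundaries are marked per cycle; an instruction spilling into the
    next block completes one cycle later."""
    return math.ceil(whole_instrs / width) + (1 if straddler else 0)

# 7 instructions wholly inside a block: 6 + 1 -> 2 cycles -> 3.5/cycle
print(7 / block_cycles(7))                    # 3.5
# 4 whole instructions plus 1 byte of a 5th -> 2 cycles -> 2.5/cycle
print(5 / block_cycles(4, straddler=True))    # 2.5
</syntaxhighlight>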
  
 
All of this works along with the branch prediction unit, which attempts to guess the flow of instructions. In Skylake, the [[branch predictor]] has also been improved. The branch predictor now has a reduced penalty (i.e. lower latency) for wrong direct jump target predictions. Additionally, the predictor in Skylake can inspect further along the byte stream than in previous architectures. The specific improvements made to the branch predictor were not further disclosed by Intel.
 
|}
{{see also|macro-operation fusion|l1=Macro-Operation Fusion}}
The pre-decoded instructions are delivered to the Instruction Queue (IQ). In {{\\|Broadwell}}, the Instruction Queue was increased to 25 entries duplicated for each thread (i.e. 50 total entries); it's unclear whether that has changed with Skylake. One key optimization the instruction queue performs is [[macro-op fusion]]. Skylake can fuse two [[macro-ops]] into a single complex one in a number of cases. When a {{x86|test}} or {{x86|compare}} instruction with a subsequent conditional jump is detected, it is converted into a single compare-and-branch instruction. Those fused instructions remain fused throughout the entire pipeline and get executed as a single operation by the branch unit, thereby saving bandwidth everywhere. Only one such fusion can be performed during each cycle.
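
As a rough illustration of the pairing rule (a hypothetical sketch, not WikiChip's or Intel's description of the actual hardware logic, and looser than the real fusion rules), the snippet below fuses a TEST/CMP with an immediately following conditional jump, at most once per scanned group:

<syntaxhighlight lang="python">
# Toy model of macro-op fusion: a TEST or CMP immediately followed by a
# conditional jump leaves the queue as one fused compare-and-branch op.
# Only the first eligible pair is fused, mirroring the one-fusion-per-
# cycle limit; real eligibility rules (operand forms, which Jcc qualify)
# are more restrictive than this sketch.
FLAG_SETTERS = {"cmp", "test"}

def fuse_one(window):
    out, i, fused = [], 0, False
    while i < len(window):
        cur = window[i]
        nxt = window[i + 1] if i + 1 < len(window) else None
        if not fused and cur in FLAG_SETTERS and nxt and nxt.startswith("j"):
            out.append(cur + "+" + nxt)   # single fused macro-op
            fused = True
            i += 2
        else:
            out.append(cur)
            i += 1
    return out

print(fuse_one(["test", "jz", "mov", "cmp", "jne"]))
# ['test+jz', 'mov', 'cmp', 'jne'] -- the second pair waits a cycle
</syntaxhighlight>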
  
 
===== Decoding =====
[[File:skylake decode.svg|right|425px]]
Up to five pre-decoded instructions (3 + 2 fused, or up to 4 unfused) are sent to the decoders each cycle. Like the fetchers, the decoders alternate between the two threads each cycle. Decoders read in [[macro-operations]] and emit regular, fixed-length [[µOPs]]. Skylake represents a big generational change from the last couple of microarchitectures: its pipeline is wider than its predecessors', adding another [[simple decoder]]. The five decoders are asymmetric; the first one, Decoder 0, is a [[complex decoder]] while the other four are [[simple decoders]]. A simple decoder is capable of translating instructions that emit a single fused-[[µOP]]. By contrast, a [[complex decoder]] can decode anywhere from one to four fused-µOPs. Skylake is thus capable of decoding 5 macro-ops per cycle, 25% more than {{\\|Broadwell}}, although this does not translate directly into an IPC uplift because of other, more restrictive points in the pipeline. Intel chose not to increase the number of complex decoders because it is much harder to extract additional parallelism from the µOPs emitted by a complex instruction. Overall, up to 5 simple instructions can be decoded each cycle, with fewer when the complex decoder needs to emit additional µOPs: for each additional µOP the complex decoder emits, one less simple decoder can operate.
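
The "one complex plus four simple" arrangement can be written down as a small model: Decoder 0 takes the first instruction (1 to 4 fused µOPs), and each µOP beyond the first costs one simple-decoder slot. A sketch under idealized assumptions (an already-delivered, in-order window; real delivery from the IQ has its own limits):

<syntaxhighlight lang="python">
def decode_cycle(uop_counts):
    """Toy model of 1 complex + 4 simple decoders.
    `uop_counts`: fused-µOP count per waiting instruction, in order.
    Returns (instructions decoded, fused µOPs emitted) this cycle."""
    if not uop_counts:
        return (0, 0)
    first = min(uop_counts[0], 4)          # complex decoder: 1-4 fused µOPs
    simple_slots = 4 - (first - 1)         # extra µOPs disable simple slots
    decoded, uops = 1, first
    for n in uop_counts[1:]:
        if simple_slots <= 0 or n != 1:    # simple decoders: 1-µOP instrs only
            break
        decoded, uops, simple_slots = decoded + 1, uops + 1, simple_slots - 1
    return (decoded, uops)

print(decode_cycle([1, 1, 1, 1, 1]))  # (5, 5) -- full 5-wide decode
print(decode_cycle([3, 1, 1, 1, 1]))  # (3, 5) -- complex op ate 2 slots
</syntaxhighlight>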
  
 
====== MSROM & Stack Engine ======
  
 
===== Renaming & Allocation =====
Like the front-end, the [[Reorder Buffer]] has been increased to 224 entries, 32 entries more than {{\\|Broadwell}}. Since each ROB entry holds complete fused µOPs, in practice 224 entries might be equivalent to as many as 350 µOPs, depending on the code being executed (e.g. fused loads/stores). It is at this stage that [[architectural registers]] are mapped onto the underlying [[physical registers]]. Additional bookkeeping tasks are also done at this point, such as allocating resources for stores and loads and determining all possible scheduler ports. Register renaming is also controlled by the [[Register Alias Table]] (RAT), which is used to mark where the data we depend on is coming from (after that value, too, came from an instruction that has previously been renamed). In {{intel|microarchitectures|previous microarchitectures}}, the RAT could handle 4 µOPs each cycle. Intel has not disclosed whether that has changed in Skylake. If unchanged, Skylake can rename any four registers per cycle, including the same register renamed four times in a single cycle. If the rename width has not increased in Skylake, some of the improvements made in the fetch/decode stages are effectively lost. Note that the ROB still operates on fused µOPs, so 4 µOPs can effectively be as many as 8 µOPs.
  
 
It should be noted that there is no special cost involved in splitting up fused µOPs before execution or [[retirement]], and two fused µOPs occupy only a single entry in the ROB.
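
Because a fused µOP occupies one ROB entry, the window measured in unfused µOPs depends on how much of the code fuses. A back-of-the-envelope sketch; the 56% fusion share is an illustrative assumption chosen to reproduce the ~350 figure above:

<syntaxhighlight lang="python">
def effective_rob_uops(entries, fused_fraction):
    """Unfused-µOP capacity of a ROB whose entries may each hold a fused
    pair; `fused_fraction` is the share of entries holding 2 µOPs."""
    return entries * (1 + fused_fraction)

print(effective_rob_uops(224, 0.0))     # 224.0 -- no fusion at all
print(effective_rob_uops(224, 0.5625))  # 350.0 -- load/store-heavy code
</syntaxhighlight>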
 
Skylake's memory subsystem is in charge of load and store requests and their ordering. Since {{\\|Haswell}}, it has been possible to sustain two memory reads (on ports 2 and 3) and one memory write (on port 4) each cycle, with each memory operation being of any register size up to 256 bits. Skylake's memory subsystem has been improved: the store buffer has been increased from 42 entries in {{\\|Broadwell}} to 56, for a total of 128 simultaneous memory operations in flight, or roughly 60% of all µOPs. Special care was taken to reduce the penalty for page-split loads; scenarios involving page-split loads were previously thought to be rarer than they actually are. This was addressed in Skylake by making page-split loads equal to other split loads, bringing the page-split load penalty down to 5 cycles from 100 cycles in {{\\|Broadwell}}. The average latency to forward a load to a store has also been improved, and stores that miss in the L1$ generate requests to the next-level cache much earlier in Skylake than before.
  
The L2-to-L1 bandwidth in Skylake is the same as {{\\|Haswell}} at 64 bytes per cycle in either direction. Note that only one operation can be done each cycle; i.e., the L1 can either receive data from the L2 or send data to the load/store buffers each cycle, but not both. Latency from the L2$ to the L3$ has also been decreased from 4 cycles/line to 2 cycles/line. The bandwidth from the level 2 cache to the shared level 3 is 32 bytes per cycle.
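
Two of the figures above can be cross-checked. The 128 in-flight operations imply a 72-entry load buffer next to the 56-entry store buffer (the 72 is derived here, not quoted by the article), and the sustained two-loads-plus-one-store rate bounds per-core L1D traffic. A sketch, with a 4 GHz core clock as an illustrative assumption:

<syntaxhighlight lang="python">
# In-flight memory operations: 56 store-buffer entries out of 128 total
# leaves 72 entries for loads (derived, not an Intel-quoted figure).
STORE_BUFFER, TOTAL_IN_FLIGHT = 56, 128
print(TOTAL_IN_FLIGHT - STORE_BUFFER)     # 72 load-buffer entries

# Peak L1D traffic: two 256-bit (32 B) loads + one 256-bit store per cycle.
bytes_per_cycle = 2 * 32 + 1 * 32         # 96 B/cycle
core_clock_ghz = 4.0                      # illustrative assumption
print(bytes_per_cycle * core_clock_ghz)   # 384.0 GB/s per core
</syntaxhighlight>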
  
 
=== eDRAM architectural changes ===
  
 
Skylake features a highly configurable design; using the same [[macro cells]], Intel can meet different market segment requirements. The Skylake family consists of 5 different actual dies, which can be further segmented by disabling different features; e.g., GT1 graphics are based on GT2 graphics with half the execution units disabled.
 
   
 
   
 
<gallery widths=300px heights=150px caption="Physical Layout Breakdown" style="float:right">
 
Prior to Skylake, SpeedStep had three major domains: [[Cores]], [[Integrated Graphics]], and the Coherent Fabric. With Skylake, SpeedStep has been extended to a number of new domains, including the [[System Agent]], memory, and the [[eDRAM]] I/O. Depending on bandwidth consumption, SpeedStep can now save energy by reducing the frequency of the new domains.
  
Information from the new domains, including additional skin-temperature control information, is now supplied to OEMs.
  
 
==== Power of System (Psys) ====
 
</table>
  
Note that the core ratio has been increased to a [theoretical] x83 multiplier, and the coarse-grain ratio was dropped from Skylake, allowing the BCLK to be adjusted in 1 MHz increments, with BCLK frequencies of over 200 MHz readily achievable. The FIVR was removed and voltage control was given back to the motherboard manufacturers; i.e., voltage supplies can be entirely motherboard-controlled. Skylake also bumped the maximum DDR ratio up to 4133 MT/s.
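
Since the BCLK is now decoupled and moves in 1 MHz steps, the resulting core clock is simply BCLK × ratio. A small sketch of the arithmetic; the example values are illustrative, not validated overclocking configurations:

<syntaxhighlight lang="python">
def core_clock_mhz(bclk_mhz, ratio):
    """Core frequency = BCLK x core ratio (Skylake BCLK steps in 1 MHz)."""
    return bclk_mhz * ratio

print(core_clock_mhz(100, 83))  # 8300 MHz -- the theoretical x83 ceiling
print(core_clock_mhz(205, 24))  # 4920 MHz -- high-BCLK, low-ratio route
</syntaxhighlight>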
  
 
[[File:skylake bclk.png|left|300px]]
 
The IPU hardware supports:
  
* Up to 4 cameras
** 13 [[megapixel|MP]] zero [[shutter lag]] 1080p60/2160p30 video capture and imaging, and a large array of standardized image-processing capabilities
* Face detection and recognition (smile/blink/group setting)
* Full-resolution still capture during video capture
 
| Windows || Linux || Windows || Linux || [[High Level Shading Language|HLSL]] || Windows || Linux || Windows || Linux
|-
| {{intel|HD Graphics 510}} || 12 || GT1 || {{intel|Skylake U|U|l=core}}, {{intel|Skylake S|S|l=core}} ||  - || rowspan="10" colspan="2" style="text-align: center;" | '''1.0''' || rowspan="10" style="text-align: center;" | '''12''' || rowspan="10" style="text-align: center;" | '''N/A''' || rowspan="10" style="text-align: center;" | '''5.1''' || rowspan="10" style="text-align: center;" | '''4.5''' || rowspan="10" style="text-align: center;" | '''4.5''' || rowspan="10" style="text-align: center;" colspan="2" | '''2.0'''
 
|-
| {{intel|HD Graphics 515}} || 24 || GT2 || {{intel|Skylake Y|Y|l=core}} || -
 
| [[File:skylake u (back; standard).png|100px|link=intel/cores/skylake_u]] || {{intel|Skylake U|l=core}} || {{intel|BGA-1356}} || Yes || 1-chip
|-
| [[File:skylake h (back).png|100px|link=intel/cores/skylake_h]] || {{intel|Skylake H|l=core}} || {{intel|BGA-1440}} || Yes || 2-chip || rowspan="2" | {{intel|Sunrise Point}} || rowspan="3" | [[DMI 3.0]]
 
|-
| rowspan="2" | [[File:skylake s (back).png|100px|link=intel/cores/skylake_s]] || {{intel|Skylake S|l=core}} || {{intel|LGA-1151}} || No || 2-chip
  
 
=== Core Group ===
Client models come in groups of 2 or 4 cores (die sizes include the [[dark silicon]] space where the L3 ends).
  
 
* 2-core group:
** ~8.91 mm × ~2.845 mm
** ~25.347 mm² die area

: [[File:skylake 2x core complex die.png|500px]]
  
 
* 4-core group:
** ~8.844 mm × ~5.694 mm
** ~50.354 mm² die area

: [[File:skylake 4x core complex die.png|500px]]
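
The quoted areas follow directly from the published dimensions, which makes for an easy sanity check (small mismatches are rounding in the published figures):

<syntaxhighlight lang="python">
def area_mm2(width_mm, height_mm):
    """Block area from its published die dimensions."""
    return round(width_mm * height_mm, 3)

print(area_mm2(8.910, 2.845))  # 25.349 -- 2-core group (~25.347 quoted)
print(area_mm2(8.844, 5.694))  # 50.358 -- 4-core group (~50.354 quoted)
</syntaxhighlight>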
=== Integrated Graphics ===
 
* 11 metal layers
* ~1,750,000,000 transistors
* ~9.19 mm × ~11.08 mm
* ~101.83 mm² die size
 
* 2 CPU cores + 24 GPU EUs
 
* 4 CPU cores + 24 GPU EUs
  
: [[File:skylake (quad-core).png|class=wikichip_ogimage|650px]]
  
  
