|cores 2=6
|cores 3=8
|cores 4=32
|type=Superscalar
|oooe=Yes
|stages=19
|decode=4-way
|isa=x86-16
|isa 2=x86-32
|isa 3=x86-64
|extension=MOVBE
|extension 2=MMX
|l3 per=core
|l3 desc=16-way set associative
|core name=Raven Ridge
|core name 2=Summit Ridge
|core name 3=Snowy Owl
|core name 4=Naples
|predecessor=Excavator
|predecessor link=amd/microarchitectures/excavator
|predecessor 2=Puma
|predecessor 2 link=amd/microarchitectures/puma
|successor=Zen 2
|successor link=amd/microarchitectures/zen 2
|pipeline=Yes
|issues=4
|inst=Yes
|cache=Yes
|core names=Yes
|succession=Yes
}}
'''Zen''' ('''family 17h''') is the [[microarchitecture]] developed by [[AMD]] as a successor to both {{\\|Excavator}} and {{\\|Puma}}. Zen is an entirely new design, built from the ground up for an optimal balance of performance and power, capable of covering the entire computing spectrum from fanless notebooks to high-performance desktop computers. Zen was officially launched on March 2, [[2017]]. Zen is set to be eventually replaced by {{\\|Zen 2}}.

For performance desktop and mobile computing, Zen is branded as {{amd|Ryzen 3}}, {{amd|Ryzen 5}}, and {{amd|Ryzen 7}} processors. For servers, Zen is branded as {{amd|EPYC}}.
== Etymology ==

== Codenames ==
{{future information}}

{| class="wikitable"
|-
| {{amd|Naples|l=core}} || Up to 32/64 || High-end server [[multiprocessors]]
|-
| {{amd|Snowy Owl|l=core}} || 16/32 || Mid-range server processors
|-
| {{amd|Summit Ridge|l=core}} || Up to 8/16 || Mainstream to high-end desktops & enthusiasts market processors
|-
| {{amd|Raven Ridge|l=core}} || 4/8 || Mainstream desktop & mobile processors with GPU
|}
! Cores !! Unlocked !! {{x86|AVX2}} !! [[SMT]] !! {{amd|XFR}} !! [[IGP]] !! [[ECC]] !! [[Multiprocessing|MP]]
|-
| [[File:amd ryzen 3 logo.png|75px|link=Ryzen 3]] || {{amd|Ryzen 3}} || Low-end Performance || [[quad-core|Quad]] || style="background-color: #d6ffd8;" | ✔ || style="background-color: #d6ffd8;" | ✔ || style="background-color: #ffdad6;" | ✘ || style="background-color: #f9dcb3;" | ✔/✘ || style="background-color: #ffdad6;" | ✘ || style="background-color: #d6ffd8;" | ✔ || style="background-color: #ffdad6;" | ✘
|-
| rowspan="2" | [[File:amd ryzen 5 logo.png|75px|link=Ryzen 5]] || rowspan="2" | {{amd|Ryzen 5}} || rowspan="2" | Mid-range Performance || [[quad-core|Quad]] || style="background-color: #d6ffd8;" | ✔ || style="background-color: #d6ffd8;" | ✔ || style="background-color: #d6ffd8;" | ✔ || style="background-color: #f9dcb3;" | ✔/✘ || style="background-color: #ffdad6;" | ✘ || style="background-color: #d6ffd8;" | ✔ || style="background-color: #ffdad6;" | ✘
|-
| [[hexa-core|Hexa]] || style="background-color: #d6ffd8;" | ✔ || style="background-color: #d6ffd8;" | ✔ || style="background-color: #d6ffd8;" | ✔ || style="background-color: #f9dcb3;" | ✔/✘ || style="background-color: #ffdad6;" | ✘ || style="background-color: #d6ffd8;" | ✔ || style="background-color: #ffdad6;" | ✘
|-
| [[File:amd ryzen 7 logo.png|75px|link=Ryzen 7]] || {{amd|Ryzen 7}} || High-end Performance / Enthusiasts || [[octa-core|Octa]] || style="background-color: #d6ffd8;" | ✔ || style="background-color: #d6ffd8;" | ✔ || style="background-color: #d6ffd8;" | ✔ || style="background-color: #f9dcb3;" | ✔/✘ || style="background-color: #ffdad6;" | ✘ || style="background-color: #d6ffd8;" | ✔ || style="background-color: #ffdad6;" | ✘
|-
| [[File:amd epyc logo.png|75px|link=EPYC]] || {{amd|EPYC}} || High-performance Server Processor || [[24 cores|24]]-[[32 cores|32]] || style="background-color: #d6ffd8;" | ✔ || style="background-color: #d6ffd8;" | ✔ || style="background-color: #d6ffd8;" | ✔ || || style="background-color: #ffdad6;" | ✘ || style="background-color: #d6ffd8;" | ✔ || style="background-color: #d6ffd8;" | ✔
|}
* '''Note:''' While a model may have an unlocked multiplier, not all chipsets support overclocking. (see [[#Sockets/Platform|§Sockets]])
* '''Note:''' 'X' models enjoy "Full XFR", providing an additional +100 MHz (200 MHz for the {{amd|1500X}}) when sufficient thermal/electrical requirements are met. Non-X models are limited to just +50 MHz.
=== Identification ===
| ex 7 = X
| ex 2 1 = Ryzen
| ex 2 2 = 3
| ex 2 3 =
| ex 2 4 = 1
| ex 2 5 = 2
| ex 2 6 = 00
| ex 2 7 = M
| desc 1 = '''Brand Name'''<br><table><tr><td style="width: 50px;">'''{{amd|Ryzen}}'''</td><td></td></tr></table>
| desc 2 = '''Market segment'''<br><table><tr><td style="width: 50px;">'''3'''</td><td>Low-end performance</td></tr><tr><td>'''5'''</td><td>Mid-range performance</td></tr><tr><td>'''7'''</td><td>Enthusiast / High-end performance</td></tr></table>
| desc 3 =
| desc 4 = '''Generation'''<br><table><tr><td style="width: 50px;">'''1'''</td><td>First generation Zen (2017)</td></tr></table>
| desc 5 = '''Performance Level'''<br><table><tr><td style="width: 50px;">'''8'''</td><td>Highest</td></tr><tr><td>'''6-7'''</td><td>High</td></tr><tr><td>'''4-5'''</td><td>Mid</td></tr><tr><td>'''1-3'''</td><td>Low</td></tr></table>
| desc 6 = '''Model Number'''<br>Reserved for future speed bump/differentiator. Currently all models are "00".
| desc 7 = '''Power Segment'''<br><table><tr><td style="width: 50px;">'''(none)'''</td><td>Standard Desktop</td></tr><tr><td>'''U'''</td><td>Standard Mobile</td></tr><tr><td>'''X'''</td><td>High Performance, with XFR</td></tr><tr><td>'''G'''</td><td>Desktop + [[IGP]]</td></tr><tr><td>'''T'''</td><td>Low-power Desktop</td></tr><tr><td>'''S'''</td><td>Low-power Desktop + [[IGP]]</td></tr><tr><td>'''M'''</td><td>Low-power Mobile</td></tr><tr><td>'''H'''</td><td>High-performance Mobile</td></tr></table>
}}
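
As a worked example of the scheme above, the digits of a model such as the {{amd|Ryzen 7}} 1800X decode to generation 1, performance level 8, model number 00, and power segment X. The small sketch below (an illustration only, not an official parser; the string <code>"1800X"</code> is simply hard-coded) splits the fields accordingly:

<syntaxhighlight lang="c">
#include <stdio.h>
#include <string.h>

int main(void)
{
    const char *model = "1800X";   /* e.g. the Ryzen 7 1800X */

    printf("generation:        %c\n", model[0]);     /* 1  = first-generation Zen (2017) */
    printf("performance level: %c\n", model[1]);     /* 8  = highest                     */
    printf("model number:      %.2s\n", model + 2);  /* 00 = no speed bump yet           */
    printf("power segment:     %s\n",
           strlen(model) > 4 ? model + 4 : "(none) - standard desktop");
    return 0;
}
</syntaxhighlight>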
== Release Dates ==
[[File:ryzen threadripper.png|right|thumb|First 16-core HEDT market CPU]]
The first set of processors, as part of the {{amd|Ryzen 7}} family, were introduced at an AMD event on February 22, 2017, ahead of the Game Developers Conference (GDC). However, initial models did not ship until March 2. {{amd|Ryzen 5}} [[hexa-core]] and [[quad-core]] variants were released on April 11, 2017. Server processors are set to be released by the end of Q2 2017. Mobile processors are expected to be released by the end of 2017.

== Process Technology ==
{{see also|14 nm process}}
Zen is planned to be manufactured on [[Global Foundries]]' [[14 nm process]], the same process used by [[IBM]] for their {{ibm|POWER9|l=arch}}. AMD's previous microarchitectures were based on [[32 nm|32]] and [[28 nm|28]] nanometer processes. The jump to 14 nm is part of AMD's attempt to remain competitive against Intel (both {{intel|SkyLake}} and {{intel|Kaby Lake}} are also manufactured on 14 nm, although by late 2017 Intel plans to move on to {{intel|Cannonlake}} and a [[10 nm process]]). The move to 14 nm brings the related benefits of a smaller node, such as reduced heat, reduced power consumption, and higher density for identical designs.

== Compatibility ==
[[Linux]] added initial support for Zen starting with Linux Kernel 4.10. Microsoft only supports Zen on Windows 10.
{| class="wikitable"
! Vendor !! OS !! Version !! Notes
|-
| rowspan="3" | Microsoft || rowspan="3" | Windows || style="background-color: #ffdad6;" | Windows 7 || No Support
|-
| style="background-color: #ffdad6;" | Windows 8 || No Support
|-
| style="background-color: #d6ffd8;" | Windows 10 || Support
|-
| Linux || Linux || style="background-color: #d6ffd8;" | Kernel 4.10 || Initial Support
|}
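
For software that needs to detect a Zen processor at runtime, the family number can be read with the CPUID instruction. The following is a minimal sketch (an illustration, not AMD reference code) assuming GCC or Clang on x86-64 with the <code>&lt;cpuid.h&gt;</code> helper; CPUID leaf 1 returns the base family in EAX[11:8] and the extended family in EAX[27:20], which sum to 17h on Zen:

<syntaxhighlight lang="c">
#include <cpuid.h>   /* __get_cpuid(), GCC/Clang built-in header */
#include <stdio.h>

int main(void)
{
    unsigned eax, ebx, ecx, edx;

    if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx))
        return 1;                         /* CPUID leaf 1 unavailable */

    unsigned base = (eax >> 8) & 0xF;     /* base family,     EAX[11:8]  */
    unsigned ext  = (eax >> 20) & 0xFF;   /* extended family, EAX[27:20] */
    unsigned family = (base == 0xF) ? base + ext : base;

    /* (vendor check via CPUID leaf 0 omitted for brevity) */
    printf("family %#x -> %s\n", family,
           family == 0x17 ? "Zen (family 17h)" : "not family 17h");
    return 0;
}
</syntaxhighlight>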
! Compiler !! Arch-Specific !! Arch-Favorable
|-
| [[AOCC]] || <code>-march=znver1</code> || <code>-mtune=znver1</code>
|-
| [[GCC]] || <code>-march=znver1</code> || <code>-mtune=znver1</code>
|-
| [[Visual Studio]] || <code>/arch:AVX2</code> || ?
|}
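
As a usage illustration of the flags above (the file and function names are made up for the example), a trivially vectorizable kernel could be built either with Zen-specific code generation, e.g. <code>gcc -O3 -march=znver1 saxpy.c</code>, or with generic x86-64 code tuned for Zen, e.g. <code>gcc -O3 -mtune=znver1 saxpy.c</code>, assuming a compiler version that already recognizes <code>znver1</code>:

<syntaxhighlight lang="c">
#include <stddef.h>

/* saxpy.c - illustrative kernel only.
 * -march=znver1 lets the compiler emit Zen-specific code (AVX2, FMA3, BMI2, ...);
 * -mtune=znver1 keeps generic x86-64 code but schedules it for Zen. */
void saxpy(float a, const float *restrict x, float *restrict y, size_t n)
{
    for (size_t i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];   /* candidate for FMA/AVX auto-vectorization at -O3 */
}
</syntaxhighlight>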
=== Key changes from {{\\|Excavator}} ===
* Zen was designed to succeed ''both'' {{\\|Excavator}} (high-performance) and {{\\|Puma}} (low-power), covering the entire range with one architecture
** Cover the entire spectrum from fanless notebooks to high-performance desktops
** More aggressive clock gating with multi-level regions
*** >15% switching capacitance (C<sub>AC</sub>) improvement
* Utilizes [[14 nm process]] (from [[28 nm]])
* 52% improvement in IPC per core for a single thread (from Excavator)
** Based on the industry-standardized SPEC CPU2006 test suite
* Up to 3.7x performance/watt improvement
* Return to conventional high-performance x86 design
** Traditional design for cores without shared blocks (e.g. shared SIMD units)
** Larger, beefier core design
* Core engine
** Simultaneous Multithreading (SMT) support, 2 threads/core (see [[#Simultaneous_MultiThreading_.28SMT.29|§ Simultaneous MultiThreading]] for details)
** Branch Predictor
*** Improved branch misprediction rate
**** Lower miss latency penalty
*** BP is now decoupled from fetch stage
** Large Op cache (2K instructions)
** Wider μop dispatch (6, up from 4)
** Larger instruction scheduler
*** 64 KiB (double the previous capacity of 32 KiB)
*** Write-back L1 cache eviction policy (from write-through)
*** 2x the bandwidth
** L2
*** 2x the bandwidth
*** Faster L2 cache
** Faster L3 cache
** Large Op cache
** Better L1$ and L2$ data prefetcher
** 5x L3 bandwidth
** Move elimination block added
** Page Table Entry (PTE) Coalescing
While not new, Zen also supports {{x86|AVX}}, {{x86|AVX2}}, {{x86|FMA3}}, {{x86|BMI1}}, {{x86|BMI2}}, {{x86|AES}}, {{x86|RdRand}}, and {{x86|SMEP}}. Note that with Zen, AMD dropped support for {{x86|XOP}}, {{x86|TBM}}, and {{x86|LWP}}.

'''Note:''' WikiChip's testing shows {{x86|FMA4}} still works despite not being officially supported and not even being reported by [[CPUID]]. This has also been confirmed by [http://agner.org/optimize/blog/read.php?i=838 Agner here].

=== Block Diagram ===
==== Client Configuration ====
===== Entire SoC Overview =====
[[File:zen soc block.svg|900px]]
===== Individual Core =====
[[File:zen block diagram.svg]]
===== {{amd|Summit Ridge|l=core}} =====
[[File:AMD Summit Ridge SoC.svg]]
==== Server Configuration ====
<div style="display: inline-block">
<div style="float: left; margin: 15px;">'''32-core configuration:'''<br>[[File:zen soc block (32 cores).svg|450px]]</div>
<div style="float: left; margin: 15px;">'''24-core configuration:'''<br>[[File:zen soc block (24 cores).svg|450px]]</div>
<div style="float: left; margin: 15px;">'''16-core configuration:'''<br>[[File:zen soc block (16 cores).svg|450px]]</div>
<div style="float: left; margin: 15px;">'''8-core configuration:'''<br>[[File:zen soc block (8 cores).svg|450px]]</div>
</div>

===== {{amd|Naples|l=core}} =====
[[File:AMD Naples SoC.svg|900px]]
=== Memory Hierarchy ===
*** 2,048 µOPs, 8-way set associative
**** 32-sets, 8-µOP line size
** L1I Cache:
*** 64 KiB 4-way set associative
**** 256-sets, 64 B line size
**** shared by the two threads, per core
** L1D Cache:
*** 32 KiB 8-way set associative
**** 64-sets, 64 B line size
**** write-back policy
*** 4-5 cycles latency for Int
*** 7-8 cycles latency for FP
** L2 Cache:
*** 512 KiB 8-way set associative
*** 1,024-sets, 64 B line size
*** write-back policy
*** Inclusive of L1
*** 17 cycles latency
** L3 Cache:
*** Victim cache
*** 8 MiB/CCX, shared across all cores
*** 16-way set associative
**** 8,192-sets, 64 B line size
*** 40 cycles latency
** System DRAM:
*** 2 channels
*** Up to DDR4-2666
*** ECC support
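
The set counts listed above follow directly from capacity = sets × ways × line size. A small sanity-check sketch (illustrative only, values taken from the list above):

<syntaxhighlight lang="c">
#include <stdio.h>

/* sets = capacity / (ways * line size) */
static unsigned sets(unsigned cap_bytes, unsigned ways, unsigned line_bytes)
{
    return cap_bytes / (ways * line_bytes);
}

int main(void)
{
    printf("L1I: %u sets\n", sets( 64u << 10,  4, 64));   /* 256 sets   */
    printf("L1D: %u sets\n", sets( 32u << 10,  8, 64));   /* 64 sets    */
    printf("L2:  %u sets\n", sets(512u << 10,  8, 64));   /* 1,024 sets */
    printf("L3:  %u sets\n", sets(  8u << 20, 16, 64));   /* 8,192 sets */
    return 0;
}
</syntaxhighlight>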
Zen's TLB consists of a dedicated level 1 TLB for the instruction cache and another one for the data cache.

** ITLB
*** 64-entry L1 TLB, all page sizes
*** 512-entry L2 TLB, no 1G pages
** DTLB
*** 64-entry L1 TLB, all page sizes
*** 1,536-entry L2 TLB, no 1G pages

== Core ==
Unlike many of Intel's recent microarchitectures (such as {{intel|Skylake|l=arch}} and {{intel|Kaby Lake|l=arch}}) which make use of a unified scheduler, AMD continues to use a split pipeline design. µOPs are decoupled at the µOP Queue and are sent through the two distinct pipelines to either the Integer side or the FP side. The two sections are completely separate, each featuring separate schedulers, queues, and execution units. The Integer side splits up the µOPs via a set of individual schedulers that feed the various ALU units. On the floating point side, there is a different scheduler to handle the 128-bit FP operations. Zen supports all modern {{x86|extensions|x86 extensions}} including {{x86|AVX}}/{{x86|AVX2}}, {{x86|BMI1}}/{{x86|BMI2}}, and {{x86|AES}}. Zen also supports {{x86|SHA}}, secure hash implementation instructions that are currently only found in [[Intel]]'s ultra-low power microarchitectures (e.g. {{intel|Goldmont|l=arch}}) but not in their mainstream processors.

From the memory subsystem point of view, data is fed into the execution units from the [[L1D$]] through the load and store queues (both of which were almost doubled in capacity) via the two [[Address Generation Units]] (AGUs) at the rate of 2 loads and 1 store per cycle. Each core also has a 512 KiB level 2 cache. L2 feeds both the level 1 data and level 1 instruction caches at 32B per cycle (32B can be sent in either direction (bidirectional bus) each cycle). L2 is connected to the L3 cache which is shared across all cores. As with the L1 to L2 transfers, the L2 also transfers data to the L3 and vice versa at 32B per cycle (32B in either direction each cycle).
{{clear}}
* 512-entry L2 TLB, no 1G pages

===== Fetching =====
Instructions are fetched from the [[L2 cache]] at the rate of 32B/cycle. Zen features an asymmetric [[level 1 cache]] with a 64 [[KiB]] [[instruction cache]], double the size of the L1 data cache. Depending on the branch prediction decision, instructions may be fetched from the instruction cache or from the [[µOPs]] cache, which eliminates the need for performing the costly instruction decoding.
[[File:amd zen hc28 decode.png|left|300px]]
On the traditional side of decode, instructions are fetched from the L1$ at 32B aligned bytes per cycle and go to the instruction byte buffer and through the pick stage to the decode. Actual tests show the effective throughput is generally much lower (around 16-20 bytes). This is slightly higher than the fetch window in [[Intel]]'s {{intel|Skylake}} which has a 16-byte fetch window. The size of the instruction byte buffer was not given by AMD but it's expected to be larger than the 16-entry structure found in {{amd|microarchitectures|their previous}} architecture.
===== µOP cache & x86 tax =====
The [[µOP cache]] used in Zen is not a [[trace cache]] and much more closely resembles the one used by Intel in their microarchitectures since {{intel|Sandy Bridge|l=arch}}. The µOP cache is an independent unit, not part of the [[L1I$]], and is not necessarily a subset of the L1I cache either; i.e., there are instances where there could be a hit in the µOP cache but a miss in the L1$. This happens when an instruction that got stored in the µOP cache gets evicted from L1. During the fetch stage, probing must be done on both paths. Zen has a specific unit called 'Micro-Tags' which does the probing and determines whether the instruction should be accessed from the µOP cache or from the L1I$. The µOP cache itself has dedicated tags for accessing those µOPs.

===== Decode =====
[[File:amd fastpath single-double (zen).svg|right|450px]]
Having to execute [[x86]], there are instructions that actually include multiple operations. Some of those operations cannot be realized efficiently in an OoOE design and therefore must be converted into simpler operations. In the front-end, complex x86 instructions are broken down into simpler fixed-length operations called [[macro-operations]] or MOPs (sometimes also called complex OPs or COPs). Those are often mistaken for being "[[RISC]]ish" in nature but they retain their CISC characteristics. MOPs can perform both an arithmetic operation and a memory operation (e.g. you can read, modify, and write in a single MOP). MOPs can be further cracked into smaller, simpler, fixed-length operations called [[micro-operations]] (µOPs). A µOP is a fixed-length operation that performs just a single operation (i.e., only a single load, store, or arithmetic operation). Traditionally AMD used to distinguish between the two ops; however, with Zen AMD simply refers to everything as µOPs, although internally they are still two separate concepts.

Decoding is done by the 4 Zen decoders. The decode stage allows for four [[x86]] instructions to be decoded per cycle, which are in turn sent to the µOP Queue. Previously, in the {{\\|Bulldozer}}/{{\\|Jaguar}}-based designs, AMD had two paths: a FastPath Single which emitted a single MOP and a FastPath Double which emitted two MOPs which are in turn sent down the pipe to the schedulers. Michael Clark (Zen's lead architect) noted that Zen has significantly denser MOPs, meaning almost all instructions will be a FastPath Single (i.e., one-to-one transformations). What would normally get broken down into two MOPs in {{\\|Bulldozer}} is now translated into a single dense MOP. It's for those reasons that while up to 8 MOPs/cycle can be emitted, usually only 4 MOPs/cycle are emitted from the [[instruction decoder|decoders]].

Dispatch is capable of sending up to 6 µOPs to the [[Integer]] EX and an additional 4 µOPs to the [[Floating Point]] (FP) EX. Zen can dispatch to both at the same time (i.e. for a maximum of 10 µOPs per cycle).
====== MSROM ======
A number of optimization opportunities are exploited at this stage.

====== Stack Engine ======
At the decode stage, Zen incorporates the Stack Engine Memfile (SEM). Note that while AMD refers to the SEM as a new unit, they have had a Stack Engine in their designs since {{\\|K10}}. The Memfile sits between the queue and dispatch, monitoring the MOP traffic. The Memfile is capable of performing [[store-to-load forwarding]] right at dispatch for loads that trail behind known stores with physical addresses. Other things such as eliminating stack PUSH/POP operations are also done at this stage, making them effectively zero-latency instructions; subsequent instructions that rely on the stack pointer are not delayed. This is a fairly effective low-power solution that off-loads some of the work that would otherwise be done by the [[AGU]]s.

====== µOP-Fusion ======

==== Execution Engine ====
[[File:amd zen hc28 integer.png|350px|right]]
As mentioned earlier, Zen returns to a fully partitioned core design with a private L2 cache and private [[FP]]/[[SIMD]] units. Previously those units shared resources spanning two cores. Zen's Execution Engine (Back-End) is split into two major sections: [[integer]] & memory operations and [[floating point]] operations. The two sections are decoupled with independent [[register renaming|renaming]], [[schedulers]], [[queues]], and execution units. Both the Integer and FP sections have access to the [[Retire Queue]], which is 192 entries deep and can [[retire]] 8 instructions per cycle (independent of either Integer or FP). The wider-than-dispatch retire allows Zen to catch up and free the resources much quicker (previous architectures saw a bottleneck at this point in situations where an older op stalls, causing a reduction in performance due to retire needing to catch up to the front of the machine).
Because the two regions are entirely divided, a one-cycle latency penalty is incurred for operands that cross the boundary; for example, if an [[operand]] of an integer arithmetic µOP depends on the result of a floating point µOP. This applies both ways. This is similar to the inter-[[Common Data Bus]] exchanges in Intel's designs (e.g., {{intel|Skylake|l=arch}}) which incur a delay of 1 to 2 cycles when dependent operands cross domains.

The FP deals with all vector operations. The simple integer vector operations (e.g. shift, add) can all be done in one cycle, half the latency of AMD's previous architecture. Basic [[floating point]] math has a latency of three cycles including [[multiplication]] (one additional cycle for double precision). [[Fused multiply-add]] operations take five cycles.

The FP has a single pipe for 128-bit load operations. In fact, the entire FP side is optimized for 128-bit operations. Zen supports all the latest instructions such as SSE and {{x86|AVX1}}/{{x86|AVX2|2}}. 256-bit AVX was designed so that those instructions can be carried out as two independent 128-bit operations. Zen takes advantage of that by operating on those instructions as two operations; i.e., Zen splits up 256-bit operations into two µOPs so they are effectively half the throughput of their 128-bit counterparts. Likewise, stores are also done in 128-bit chunks, giving 256-bit stores an effective throughput of one store every two cycles. The pipes are fairly well balanced, therefore most operations will have at least two pipes they can be scheduled on, retaining a throughput of at least one such instruction each cycle. As implied, 256-bit operations will use up twice the resources to complete (i.e., 2x register, scheduler, and ports). This is a compromise [[AMD]] has taken which helps conserve die space and power. By contrast, [[Intel]]'s competing design, {{intel|Skylake}}, does have dedicated 256-bit circuitry.

Additionally, Zen also supports {{x86|SHA}} and {{x86|AES}}, with 2 AES units implemented in an attempt to improve encryption performance. Those units can be found on pipes 0 and 1 of the floating point scheduler.
==== Memory Subsystem ====
[[File:amd zen hc28 memory.png|300px|right]]
Loads and stores are conducted via the two AGUs, which can operate simultaneously. Zen has a much larger load queue capable of supporting 72 out-of-order loads (same as Intel's {{intel|Skylake|l=arch}}). There is also a 44-entry Store Queue. Zen employs a split TLB-data pipe design which allows TLB tag access to take place while the data cache is being fed in order to determine if the data is available and send its address to the L2 to start prefetching early on. Zen is capable of up to two loads per cycle (2x16B each) and up to one store per cycle (1x16B). The L1 TLB is 64-entry for all page sizes and the L2 TLB is 1,536-entry with no 1 GiB pages.

Zen incorporates a 64 KiB 4-way set associative L1 instruction cache and a 32 KiB 8-way set associative L1 data cache. Both the instruction cache and the data cache can fetch from the L2 cache at 32 bytes per cycle. The L2 cache is a 512 KiB 8-way set associative unified cache, inclusive, and private to the core. The L2 cache can fetch and write 32B/cycle into the L3 (32B in either direction each cycle, i.e. bidirectional bus).
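
To put the 32 B/cycle figures above in perspective, at the 3.6 GHz base clock of the {{amd|Ryzen 7}} 1800X discussed later in this article, each such interface works out to roughly 115 GB/s per direction. This is a back-of-the-envelope sketch, not an AMD-published figure:

<syntaxhighlight lang="c">
#include <stdio.h>

int main(void)
{
    const double bytes_per_cycle = 32.0;  /* L1<->L2 and L2<->L3, per direction     */
    const double clock_hz = 3.6e9;        /* illustrative clock: Ryzen 7 1800X base */

    printf("~%.0f GB/s per direction\n", bytes_per_cycle * clock_hz / 1e9);  /* ~115 GB/s */
    return 0;
}
</syntaxhighlight>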

== Infinity Fabric ==
[[File:zen soc clock domain.svg|650px]]

== Power ==
* '''VDDM''' - [[L2]]/[[L3]] [[SRAM]] supply
</div>
Zen presented AMD with a number of new challenges in the area of power, largely due to their decision to cover the entire spectrum of systems from ultra-low power to high performance. Previously AMD handled this by designing two independent architectures (i.e., {{\\|Excavator}} and {{\\|Puma}}). In Zen, the SoC voltage coming from the Voltage Regulator Module (VRM) is fed to RVDD, a package metal plane that distributes the highest VID request from all cores. Each core has a digital [[LDO regulator]] (low-dropout) and a [[digital frequency synthesizer]] (DFS) to vary frequency and voltage across power states on a per-core basis. The LDO regulates RVDD for each power domain and creates an optimal VDD per core using a system of sensors embedded across the entire chip; this is in addition to other properties such as countermeasures against droop. This is in contrast to some alternative solutions by [[Intel]], which attempted to integrate the voltage regulator (FIVR) on die in {{intel|Haswell|l=arch}} (and consequently removed it in {{intel|Skylake|l=arch}} due to a number of thermal restrictions it created). Zen's new voltage control is an attempt at much finer power tuning on a per-core level, based on the information collected about that core and the overall chip.
<div style="display: block;">
Zen implements over 1,300 sensors to monitor the state of the die over all [[critical paths]] including the CCX and external components such as the memory fabric. Additionally, the CCX also incorporates 48 high-speed power supply monitors, 20 [[thermal diodes]], and 9 high-speed droop detectors.
<div style="text-align: center;">[[File:zen pure power sensory.png|600px]]</div>

== Features ==
[[File:10682-icon-neural-net-prediction-140x140.png|50px|left]]
'''Neural Net Prediction''' - This appears to be largely a marketing term for Zen's much beefier and more finely tuned [[branch prediction]] unit. Zen uses a [[perceptron branch predictor|hashed perceptron system]] to intelligently anticipate future code flows, allowing warming up of cold blocks in order to avoid possible waits. Most of that functionality is already found on every modern high-end microprocessor (including AMD's own previous microarchitectures). Because AMD has not disclosed any more specific information about the branch predictor, it can only be speculated that no new groundbreaking logic was introduced in Zen.
{{clear}}
[[File:10682-icon-smart-prefetch-140x140.png|50px|left]]
{{clear}}
[[File:10682-icon-precision-boost-140x140.png|50px|left]]
'''Precision Boost''' - A feature that provides the ability to adjust the frequency of the processor on-the-fly given sufficient headroom (e.g. thermal limits based on the sensory data collected by a network of sensors across the chip), i.e. "Turbo Frequency". Precision Boost adjusts in 25 MHz increments, considerably more granular than Intel's {{intel|Turbo Boost}} which operates in 100 MHz bin increments. Having more granular boost increments could in theory allow it to clock slightly higher than competitors' products without reaching thermal limits (e.g., complex workloads involving {{x86|AVX2}}).
{{clear}}
[[File:amd zen xfr.jpg|300px|right]]
[[File:10682-icon-frequency-range-140x140.png|50px|left]]
'''Extended Frequency Range''' ('''XFR''') - This is a fully automated solution that attempts to allow a higher upper limit on the maximum frequency based on the cooling technique used (e.g. air, water, LN2). Whenever the chip senses that cooling is sufficient for a given frequency, it will attempt to increase that limit further. XFR is partially enabled on all models, providing an extra +50 MHz frequency boost whenever possible. For 'X' models, full XFR is enabled, providing twice the headroom of up to +100 MHz.
{{clear}}
The AMD presentation slide on the right depicts a normal use case for the {{amd|Ryzen 7}} {{amd|Ryzen 7/1800X|1800X}}. Under a normal workload, the processor will operate at around its base frequency of 3.6 GHz. When experiencing a heavier workload, Precision Boost will kick in and increment the frequency as necessary up to its maximum of 4 GHz. With adequate cooling, {{amd|XFR}} will bump it up an additional 100 MHz. Under light workloads, the processor will reduce its frequency. As Pure Power senses the workload and CPU state, it can also drastically downclock the CPU when appropriate (such as the mostly-idle period in the graph).
<div style="text-align: center;">[[File:ryzen-xfr-1800x example.jpg|700px]]</div>
{{clear}}
Each Zeppelin provides 32 Gen 3.0 [[PCIe]] lanes, for a total of 128 lanes. In a single-socket configuration, all 128 lanes may be used for general purpose I/O - for example, 6 GPUs over x16 links plus further x8 lanes for additional storage. This is considerably more than any comparable contemporary [[Intel]] model (either {{intel|Broadwell EP|l=core}} or {{intel|Skylake SP|l=core}}). {{amd|Naples|l=core}}-based processors scale all the way up to [[32 cores]] with 64 [[threads]] (for up to 64 cores and 128 threads per complete system). The caveat is that in 2-way MP mode, half of the lanes are lost to I/O: 64 of the 128 PCIe lanes get allocated for interchip communication via AMD's {{amd|Infinity Fabrics}} protocols, with the remaining 64 lanes left for the system. This setup still leaves the complete system with 128 PCIe lanes, but no more than in a single-socket configuration.

In addition to PCIe lanes, each Zeppelin provides a memory controller supporting dual-channel [[ECC]] DDR4 memory. With EPYC packing four such dies, each chip sports four memory controllers supporting up to 16 DIMMs and up to 2 [[TiB]] of octa-channel DDR4 ECC memory.
<div style="display: block;">
</div>

In addition to the large amount of memory supported by the four Zeppelin dies, all EPYC processors offer the full 64 MiB of L3 cache, a result of 8 MiB from each of the 8 CCXs. Binning for the various EPYC models is done by disabling 0, 1, 2, or 3 cores per CCX on each of the Zeppelin dies to form either [[8 cores|8]], [[16 cores|16]], [[24 cores|24]], or [[32 cores|32]] cores.
<div style="display: block;">
[[File:amd naples mcp.png|250px]]
[[File:amd epyc interconnect.png|650px]]
</div>

{{clear}}
=== Modules (Zeppelin) ===
Zen is composed of individual modules (i.e., dies) called '''Zeppelins''' that can be interconnected in a multi-chip module to form larger systems. Each module consists of:

* 2 Core Complexes (CCX)
** 2x Unified Memory Controllers (UMC) - one DRAM channel each; 64-bit data + [[ECC]] support, 2 DIMMs, DDR4 1333MT/s-3200MT/s
* PSP (MP0) and SMU (MP1) microcontrollers
** AMD Secure Processor technology as the Platform Security Processor (PSP)
* NBIO
** 2 SYSHUBs, 1 IOHUB with IOMMU v2.x
** 2x8 PCIe Gen1/Gen2/Gen3
* 6 x4 PHYs plus 5 x2 PHYs
** Supports PCIe, WAFL, xGMI, SATA, and Ethernet
*** Ethernet complex: Up to 4 lanes of 10/100/1000 SGMII, or 10GBASE-KR, or 1000BASE-KX Ethernet operation
* Southbridge
[[File:zen soc.png|900px]]
</div>

== Die ==
* 7 mm² area
* L2 512 KiB; 1.5 mm²/core
[[File:amd zen core die.png|400px]]

=== CCX ===
* L3 8 MiB; 16 mm²
* 1,400,000,000 transistors
[[File:amd zen ccx.png|450px]]

=== Zeppelin (Octa-Core Die) ===
* [[14 nm process]]
* 12 metal layers
* 2,000 meters of signals
* 4,800,000,000 transistors
* 22.01 mm x 8.87 mm
* ~195.228 mm² die size
[[File:amd zen octa-core die shot.png|950px]]

{{future information}}
[[File:amd zen octa-core die shot (annotated).png|950px]]

== Sockets/Platform ==
All Zen-based consumer microprocessors utilize AMD's {{amd|Socket AM4}}, a unified socket infrastructure. It's interesting to note that every {{amd|Ryzen 7}} processor is actually a complete [[system on a chip]], integrating the [[northbridge]] ([[memory controller]]) and the [[southbridge]], including 16 [[PCIe]] lanes for the [[GPU]] and 4 PCIe lanes for I/O, along with an [[NVMe controller]] as well as USB 3.0 and SATA controllers. Therefore, in theory, Ryzen 7 processors do not even require a [[chipset]]. The role of the chipsets for Zen is simply to provide a number of additional connections beyond those offered by the SoC.
{{amd socket am4 chipsets}}

== All Zen Chips ==
{{comp table start}}
<table class="comptable sortable tc13 tc14 tc15 tc16 tc17 tc18 tc19">
<tr class="comptable-header"><th> </th><th colspan="19">List of all Zen-based Processors</th></tr>
<tr class="comptable-header"><th> </th><th colspan="14">Processor</th><th colspan="3">Features</th></tr>
{{comp table header 1|cols=Price, Process, Launched, Family, Core, C, T, L3$, L2$, L1$, Freq, Turbo, TDP, Max Mem, SMT, AMD-V, XFR}}
<tr class="comptable-header comptable-header-sep"><th> </th><th colspan="25">[[Uniprocessors]]</th></tr>
{{#ask: [[Category:microprocessor models by amd]] [[instance of::microprocessor]] [[microarchitecture::Zen]] [[max cpu count::1]]
|?has amd amd-vi technology
|?has amd extended frequency range
|format=template
|template=proc table 3
|userparam=19:17
|mainlabel=-
|valuesep=,
}}
<tr class="comptable-header comptable-header-sep"><th> </th><th colspan="25">[[Multiprocessors]] (dual-socket)</th></tr>
|?has amd amd-vi technology
|?has amd extended frequency range
|format=template
|template=proc table 3
|userparam=19:17
|mainlabel=-
|valuesep=,
}}
{{comp table count|ask=[[Category:microprocessor models by amd]] [[instance of::microprocessor]] [[microarchitecture::Zen]]}}
{{comp table end}}

== References ==
* Michael Clark, AMD's senior fellow and lead architect, Hot Chips 28, 2016
* Lisa Su, AMD CEO, AMD: New Horizon Live Event
* Lisa Su, AMD CEO, AMD Annual Meeting of Shareholders Q4 2016
* Meet the AMD Experts - AMD Monthly Partner Training, January 2017
* Zen: A Next-Generation High-Performance x86 Core, ISSCC 2017
* AMD 'Tech Day', February 22, 2017
* AMD Infinity Fabric introduction by Mark Papermaster, April 6, 2017
* AMD Zen at GDC 2017, March 3, 2017
* Processor Programming Reference (PPR) for AMD Family 17h Model 01h, Revision B1 Processors
* AMD 2017 Financial Analyst Day, May 16, 2017
* AMD EPYC Tech Day, June 20, 2017

== Documents ==
** [[:File:amd epyc performance brief.pdf|AMD EPYC Performance Brief]], June 2017
** [[:File:amd epyc solution brief.pdf|AMD EPYC Solution Brief]], June 2017

== See also ==
* {{intel|Kaby Lake}}
* {{intel|Cannonlake}}