Sunny Cove µarch
General Info
Arch Type: CPU
Designer: Intel
Manufacturer: Intel
Introduction: 2019
Process: 10 nm
Core Configs: 2, 4
Pipeline
Type: Superscalar
OoOE: Yes
Speculative: Yes
Reg Renaming: Yes
Stages: 14-19
Instructions
ISA: x86-64
Extensions: MOVBE, MMX, SSE, SSE2, SSE3, SSSE3, SSE4.1, SSE4.2, POPCNT, AVX, AVX2, AES, PCLMUL, FSGSBASE, RDRND, FMA3, F16C, BMI, BMI2, VT-x, VT-d, TXT, TSX, RDSEED, ADCX, PREFETCHW, CLFLUSHOPT, XSAVE, SGX, MPX, AVX-512
Cache
L1I Cache: 32 KiB/core, 8-way set associative
L1D Cache: 48 KiB/core, 12-way set associative
L2 Cache: 512 KiB/core, 8-way set associative
L3 Cache: 2 MiB/core, 16-way set associative

Sunny Cove is a high-performance 10 nm x86 core microarchitecture designed by Intel for an array of server and client products, including Ice Lake (Client), Ice Lake (Server), Lakefield, and the Nervana NNP-I. It is the successor to Palm Cove. The microarchitecture was developed by Intel's R&D Center (IDC) in Haifa, Israel.

History

Intel Core roadmap

Sunny Cove was originally unveiled by Intel at their 2018 architecture day. Intel originally intended for Sunny Cove to succeed Palm Cove, which was slated for late 2017 as the first 10 nm-based core and the proper successor to Skylake. Prolonged delays and problems with their 10 nm process resulted in a number of improvised derivatives of Skylake, including Kaby Lake, Coffee Lake, and Comet Lake. For all practical purposes, Palm Cove has been skipped and Intel has gone directly to Sunny Cove. Sunny Cove is expected to debut in mid-2019.

14nm improv 10 delays.svg

Process Technology

Sunny Cove is designed to take advantage of Intel's 10 nm process.

Architecture

Key changes from Palm Cove/Skylake

Skylake to Sunny Cove changes
Sunny Cove enhancements
  • Front-end
    • Larger µOP cache (?, up from 1536)
  • Back-end
    • Wider allocation (5-way, up from 4-way)
    • Larger ROB (?, up from 224 entries)
    • Scheduler
      • Larger scheduler (?, up from 97 entries)
      • Larger dispatch (10-way, up from 8-way)
      • Execution ports rebalanced
      • New store data port
      • New store AGU port
  • Memory subsystem
    • LSU
      • Deeper load queue (?, up from 72 entries)
      • Deeper store queue (?, up from 42 entries)
    • Larger L1 data cache (48 KiB, up from 32 KiB)
    • Larger L2 cache (512 KiB, up from 256 KiB)
      • Larger STLBs
    • 5-Level Paging
      • Larger virtual address (57 bits, up from 48 bits)
      • Significantly larger virtual address space (128 PiB, up from 256 TiB; see the arithmetic sketch after this list)

This list is incomplete; you can help by expanding it.
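
To put the 5-level paging numbers in perspective, here is a minimal, illustrative C sketch (not taken from this article) that derives the quoted address-space sizes directly from the virtual-address widths:

    /* Illustrative arithmetic only (assumed example, not from the article):
     * derive the virtual address space sizes quoted above from the
     * virtual-address widths. */
    #include <stdio.h>

    int main(void)
    {
        unsigned long long va48 = 1ULL << 48;  /* 4-level paging: 48-bit virtual addresses */
        unsigned long long va57 = 1ULL << 57;  /* 5-level paging: 57-bit virtual addresses */

        printf("48-bit VA space: %llu TiB\n", va48 >> 40);  /* 256 TiB */
        printf("57-bit VA space: %llu PiB\n", va57 >> 50);  /* 128 PiB */
        printf("growth factor:   %llux\n", va57 / va48);    /* 512x */
        return 0;
    }

In other words, widening the virtual address from 48 to 57 bits grows the addressable space by a factor of 512.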

New instructions

Sunny Cove introduced a number of new instructions:

  • CLWB - Force cache line write-back without flush (see the usage sketch after this list)
  • RDPID - Read Processor ID
  • Additional AVX-512 extensions:
  • SSE_GFNI - SSE-based Galois Field New Instructions
  • AVX_GFNI - AVX-based Galois Field New Instructions
  • Split Lock Detection - detect and raise an exception for split locks
  • Fast Short REP MOV
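
As an illustration of how software might use two of these additions, the hedged C sketch below checks CPUID for the relevant feature bits and then exercises CLWB via the _mm_clwb intrinsic and RDPID via _rdpid_u32. The intrinsic names, the -mclwb/-mrdpid build flags, and the CPUID bit positions (CLWB in leaf 7 EBX bit 24, RDPID in leaf 7 ECX bit 22) follow common GCC/Clang conventions and are assumptions rather than details taken from this article:

    /* Minimal sketch under assumed GCC/Clang conventions: probe CPUID leaf 7
     * for CLWB and RDPID, then use the corresponding intrinsics.
     * Assumed build line: gcc -O2 -mclwb -mrdpid clwb_rdpid_demo.c */
    #include <stdio.h>
    #include <cpuid.h>      /* __get_cpuid_count */
    #include <immintrin.h>  /* _mm_clwb, _rdpid_u32 */

    static char line[64];

    int main(void)
    {
        unsigned int eax, ebx, ecx, edx;

        if (!__get_cpuid_count(7, 0, &eax, &ebx, &ecx, &edx)) {
            puts("CPUID leaf 7 not available");
            return 1;
        }

        if (ebx & (1u << 24)) {   /* CLWB feature flag (assumed bit position) */
            line[0] = 42;
            _mm_clwb(line);       /* write the cache line back without evicting it */
            puts("CLWB issued");
        }

        if (ecx & (1u << 22)) {   /* RDPID feature flag (assumed bit position) */
            /* RDPID returns the OS-programmed contents of IA32_TSC_AUX */
            printf("RDPID: %u\n", _rdpid_u32());
        }
        return 0;
    }

The runtime CPUID check matters because the intrinsics compile fine with the feature flags enabled but fault on older processors that lack the instructions.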

Overview

Sunny Cove is Intel's core microarchitecture for a series of client and server chips that succeed Palm Cove (and effectively the Skylake series of derivatives). Sunny Cove is just the core; it is implemented in numerous chips made by Intel, including Lakefield, Ice Lake (Client), Ice Lake (Server), and the Nervana NNP accelerator. Sunny Cove introduces a large set of enhancements that significantly improve the performance of legacy code and new code through the extraction of parallelism as well as through new features. Those include a significantly deeper out-of-order window, a wider execution back-end, higher load-store bandwidth, lower effective access latencies, and bigger caches.

Pipeline

Like its predecessors, Sunny Cove focuses on extracting performance and reducing power in a number of key ways. Intel builds Sunny Cove on its previous microarchitectures, descendants of Sandy Bridge. To increase the core's overall performance, Intel focused on extracting additional parallelism.

Broad Overview

At a 5,000-foot view, Sunny Cove represents the logical evolution of Skylake and Haswell. Therefore, despite some significant differences from the previous microarchitecture, the overall design is fundamentally the same and can be seen as an enhancement of Skylake rather than a complete change.

intel common arch post ucache.svg

The pipeline can be broken down into three areas: the front-end, the back-end or execution engine, and the memory subsystem. The goal of the front-end is to feed the back-end with a sufficient stream of operations, which it gets by decoding instructions coming from memory. The front-end has two major pathways: the µOP cache path and the legacy path. The legacy path is the traditional path whereby variable-length x86 instructions are fetched from the level 1 instruction cache, queued, and subsequently decoded into simpler, fixed-length µOPs. The alternative and much more desirable path is the µOP cache path, whereby a cache containing already-decoded µOPs receives a hit, allowing the µOPs to be sent directly to the decode queue.

Regardless of which path an instruction ends up taking, it will eventually arrive at the instruction decode queue (IDQ). The IDQ represents the end of the front-end (the in-order part of the machine) and the start of the execution engine, which operates out of order.

In the back-end, the micro-operations visit the reorder buffer, where register allocation, renaming, and retirement take place. A number of other optimizations are also done at this stage. From the reorder buffer, µOPs are sent to the unified scheduler. The scheduler has a number of exit ports, each wired to a set of different execution units. Some units can perform basic ALU operations, others can do multiplication and division, and some are capable of more complex operations such as various vector operations. The scheduler is effectively in charge of queuing the µOPs on the appropriate port so they can be executed by the appropriate unit.

Some µOPs deal with memory access (e.g. load and store). Those are sent to dedicated scheduler ports that can perform memory operations. Store operations go to the store buffer, which is also capable of performing store-to-load forwarding when needed. Likewise, load operations come from the load buffer. Sunny Cove features a dedicated 48 KiB level 1 data cache and a dedicated 32 KiB level 1 instruction cache. It also features a core-private 512 KiB L2 cache that is shared by both of the L1 caches.
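
As a quick sanity check on those capacities, the illustrative C sketch below computes the number of sets implied by the sizes and associativities listed in the infobox, assuming the customary 64-byte cache line (an assumption; the line size is not stated in this article):

    /* Illustrative sketch: sets = capacity / (ways * line size), assuming
     * 64-byte cache lines (not stated in the article). */
    #include <stdio.h>

    static unsigned sets(unsigned size_kib, unsigned ways, unsigned line_bytes)
    {
        return (size_kib * 1024u) / (ways * line_bytes);
    }

    int main(void)
    {
        printf("L1I: %u sets\n", sets(32, 8, 64));    /* 32 KiB,  8-way ->   64 sets */
        printf("L1D: %u sets\n", sets(48, 12, 64));   /* 48 KiB, 12-way ->   64 sets */
        printf("L2:  %u sets\n", sets(512, 8, 64));   /* 512 KiB, 8-way -> 1024 sets */
        return 0;
    }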

Each core enjoys a slice of a third level of cache that is shared by all the cores. For Ice Lake (Client), which incorporates Sunny Cove cores, either two or four cores are connected together on a single chip.

Die

Core


ice lake die core.png


ice lake die core (annotated).png

Core group


ice lake die core group.png


ice lake die core group (annotated).png

Bibliography

  • Intel Architecture Day 2018, December 11, 2018