Denver µarch

General Info
  Arch Type: CPU
  Designer: Nvidia
  Manufacturer: TSMC
  Introduction: 2014
  Process: 28 nm, 16 nm
  Core Configs: 2
Pipeline
  Type: Superscalar
  OoOE: No
  Decode: 2-way
Instructions
  ISA: ARMv8
Cache
  L1I Cache: 128 KiB/core, 4-way set associative
  L1D Cache: 64 KiB/core, 4-way set associative
  L2 Cache: 2 MiB/cluster, 16-way set associative

Denver is a CPU microarchitecture from Nvidia introduced in 2014. It executes ARMv8 code both natively and with the help of dynamic code optimization: the native ARM decoder can issue up to 2 instructions per cycle, while up to 7 micro-operations can be started per cycle when dynamic code translation is used.

Architecture

Denver is a 7-wide in-order superscalar design. Its ARMv8 hardware decoder (A32, T32, and A64 modes) can generate up to 2 micro-ops per cycle, while up to 7 micro-ops per cycle can be executed directly from the L1i cache. Denver has 7 execution units: 1 branch unit, 2 integer units (one of which has a hardware multiplier), 2 128-bit FP/NEON units, and 2 load/store units.
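
As an illustration of how the 7-wide issue limit interacts with the unit mix, the following toy model (not Nvidia's scheduler; the micro-op stream and the greedy packing rule are invented for this sketch) fills issue slots subject to the per-unit counts listed above:

  # Toy model of filling 7-wide in-order issue slots on Denver's unit mix.
  # This is an illustrative sketch, not Nvidia's actual scheduling logic.
  from collections import Counter

  # Units per the article: 1 branch, 2 integer, 2 FP/NEON, 2 load/store.
  UNITS = {"branch": 1, "int": 2, "fp": 2, "ls": 2}
  WIDTH = 7  # at most 7 micro-ops start per cycle

  def pack_slots(uops):
      """Greedily pack an in-order stream of micro-op kinds into issue slots."""
      slots, current, used = [], [], Counter()
      for kind in uops:
          # Close the current slot when the width or the needed unit runs out.
          if len(current) == WIDTH or used[kind] == UNITS[kind]:
              slots.append(current)
              current, used = [], Counter()
          current.append(kind)
          used[kind] += 1
      if current:
          slots.append(current)
      return slots

  # Hypothetical micro-op mix for one loop iteration.
  stream = ["ls", "ls", "fp", "fp", "int", "int", "branch",
            "ls", "int", "fp", "branch"]
  for i, slot in enumerate(pack_slots(stream)):
      print(f"cycle {i}: {slot}")

In this model a slot closes as soon as the width or a needed unit is exhausted; the real machine must additionally respect data dependencies and in-order constraints.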

Denver 2 has dynamic branch prediction with a Branch Target Buffer and a Global History Buffer (the conditional direction predictor is a gshare-agree design). It also has a Return Stack Buffer, an indirect target predictor, and a static predictor.
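
The exact predictor parameters are not public. As a rough sketch of the gshare idea behind the conditional direction predictor (plain gshare without the "agree" mechanism, and with an arbitrary table size), a branch is predicted by 2-bit counters indexed with the branch address XORed against a global history register:

  # Minimal gshare conditional-branch predictor sketch (illustrative sizes,
  # not Denver's configuration; the "agree" mechanism is omitted).
  HIST_BITS = 12
  TABLE_SIZE = 1 << HIST_BITS

  class Gshare:
      def __init__(self):
          self.history = 0                      # global branch history register
          self.counters = [2] * TABLE_SIZE      # 2-bit counters, start weakly taken

      def _index(self, pc):
          return ((pc >> 2) ^ self.history) & (TABLE_SIZE - 1)

      def predict(self, pc):
          return self.counters[self._index(pc)] >= 2   # True = predict taken

      def update(self, pc, taken):
          i = self._index(pc)
          if taken:
              self.counters[i] = min(3, self.counters[i] + 1)
          else:
              self.counters[i] = max(0, self.counters[i] - 1)
          # shift the outcome into the global history
          self.history = ((self.history << 1) | int(taken)) & (TABLE_SIZE - 1)

  # Example: a loop branch taken 7 times and then not taken.
  bp, hits = Gshare(), 0
  outcomes = [True] * 7 + [False]
  for taken in outcomes:
      hits += bp.predict(0x4000) == taken
      bp.update(0x4000, taken)
  print(f"{hits}/{len(outcomes)} correct")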

The Denver 1 pipeline has 15 stages; the branch mispredict penalty is 13 cycles.

The stages and their actions are: IP1 (ITLB), IC2 (I$ Rd), IW3 (Way Sel), IN4 (Decode), IN5 (Fetch Q), SB1 (Pick), SB2 (Sched), EB0 (RF Rd), EB1 (Bypass), EA2 (Ld Addr), ED3 (D$ Read), EL4 (Bypass), EE5 (ALU/Execute), ES6 (St Addr), EW7 (RF wr).
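
As a back-of-the-envelope example (the branch frequency and prediction accuracy below are assumptions, not measured Denver figures), the 13-cycle penalty contributes to cycles per instruction as follows:

  # Rough CPI impact of the 13-cycle mispredict penalty (assumed workload numbers).
  penalty = 13          # cycles lost per mispredicted branch (from the article)
  branch_freq = 0.20    # assumed: 1 in 5 instructions is a branch
  accuracy = 0.95       # assumed predictor hit rate
  extra_cpi = branch_freq * (1 - accuracy) * penalty
  print(f"added CPI from mispredicts ~ {extra_cpi:.3f}")   # ~0.13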

Dynamic Code Optimization

When frequently executed code is detected, an optimization micro-interrupt can be generated and a firmware-based optimizer is started. Using "Dynamic Profile Information", the optimizer translates ARMv8 instructions into an optimized microcode sequence and saves it into the Optimization Cache (part of a 128 MiB microcode carve-out). Denver then executes this code directly from the Optimization Cache without using the hardware ARMv8 decoder. Several microcode sequences may be chained.
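
Details of the firmware are not public, but the flow described above resembles a profile-directed translation cache. The sketch below is purely schematic: the threshold, names, and data structures are invented for illustration.

  # Schematic sketch of profile-triggered translation caching, loosely modeling
  # the flow described above. Thresholds and structures are illustrative only.
  HOT_THRESHOLD = 50                 # invented value; the real trigger is firmware-defined

  exec_counts = {}                   # stand-in for "Dynamic Profile Information"
  optimization_cache = {}            # entry PC -> optimized (translated) region

  def optimize(region):
      # Placeholder for the firmware optimizer (unrolling, reordering, ...).
      return f"optimized({region})"

  def execute(pc, region):
      if pc in optimization_cache:
          return optimization_cache[pc]          # bypass the ARMv8 decoder path
      exec_counts[pc] = exec_counts.get(pc, 0) + 1
      if exec_counts[pc] >= HOT_THRESHOLD:       # the "micro-interrupt" fires
          optimization_cache[pc] = optimize(region)
      return region                              # slow path through hardware decode

  for _ in range(60):
      result = execute(0x1000, "loop_body")
  print(result)                                  # now served from the cache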

In 2014 Nvidia listed several optimizations performed by the dynamic code translation (two of them are sketched below the list):

  • Unrolls loops
  • Renames registers
  • Reorders loads and stores
  • Improves control flow
  • Removes unused computation
  • Hoists redundant computation
  • Sinks uncommonly executed computation
  • Improves scheduling
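
To make two of these concrete, here is a hand-written before/after sketch of hoisting a redundant computation and unrolling a loop by four (shown at source level in Python for readability; the real optimizer works on micro-ops and its output is not public):

  # Before: the scale factor is recomputed every iteration and the loop handles
  # one element at a time.
  def scale_sum(xs, a, b):
      total = 0.0
      for x in xs:
          total += x * (a / b)       # redundant computation inside the loop
      return total

  # After (hand-sketched): hoist a/b out of the loop and unroll by 4 to expose
  # more independent work per issue slot.
  def scale_sum_opt(xs, a, b):
      k = a / b                      # hoisted loop-invariant computation
      total = 0.0
      n4 = len(xs) - len(xs) % 4
      for i in range(0, n4, 4):      # unrolled body: 4 elements per iteration
          total += xs[i] * k + xs[i+1] * k + xs[i+2] * k + xs[i+3] * k
      for x in xs[n4:]:              # remainder loop
          total += x * k
      return total

  data = [1.0, 2.0, 3.0, 4.0, 5.0]
  assert abs(scale_sum(data, 3.0, 2.0) - scale_sum_opt(data, 3.0, 2.0)) < 1e-9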

Cache

For the two Denver cores combined, the total cache sizes are:

L1$   384 KiB total (L1I$ + L1D$)
  L1I$  256 KiB (2 × 128 KiB), 4-way set associative
  L1D$  128 KiB (2 × 64 KiB), 4-way set associative
L2$   2 MiB (1 × 2 MiB), 16-way set associative
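
For orientation, the set count of each cache follows from size, associativity, and line size. The 64-byte line size used below is an assumption (it is not stated above), so the resulting set counts are illustrative:

  # Sets = size / (ways * line_size). A 64 B line size is assumed, not sourced.
  LINE = 64
  for name, size_kib, ways in [("L1I (per core)", 128, 4),
                               ("L1D (per core)", 64, 4),
                               ("L2 (per cluster)", 2048, 16)]:
      sets = size_kib * 1024 // (ways * LINE)
      print(f"{name}: {sets} sets of {ways} x {LINE} B lines")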

The L1i TLB has 128 entries for 4 KiB pages and is 4-way set-associative. The L1d TLB has 280 entries and supports 4 KiB, 64 KiB, 1 MiB, and 2 MiB pages. TLB walks are accelerated by a 2048-entry, 4-way set-associative L2 TLB.
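
A quick reach calculation with 4 KiB pages (the larger page sizes supported by the L1d TLB extend its reach well beyond this):

  # TLB reach assuming 4 KiB pages only.
  PAGE = 4 * 1024
  for name, entries in [("L1i TLB", 128), ("L1d TLB", 280), ("L2 TLB", 2048)]:
      print(f"{name}: {entries} entries -> {entries * PAGE // 1024} KiB reach")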

Features

Supported ARM Extensions & Processor Features
  • Thumb-2: Thumb-2 Extension
  • PMUv3: ARMv8 PMUv3 Performance Monitors Extension
  • Crypto: Cryptographic Extension
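
On Linux, the crypto and SIMD extensions are visible from userspace; the sketch below parses the Features line of /proc/cpuinfo (flag spellings follow the kernel's AArch64 names, and the file is only meaningful on a Linux/ARM system):

  # Check /proc/cpuinfo (Linux/AArch64) for crypto and related feature flags.
  def cpu_features(path="/proc/cpuinfo"):
      feats = set()
      with open(path) as f:
          for line in f:
              if line.lower().startswith("features"):
                  feats.update(line.split(":", 1)[1].split())
      return feats

  if __name__ == "__main__":
      feats = cpu_features()
      for flag in ("aes", "pmull", "sha1", "sha2", "asimd"):
          print(f"{flag}: {'yes' if flag in feats else 'no'}")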

Products

Denver is used in Nvidia's Tegra K1-64 (2014, 28 nm, model T132), which powers Google's Nexus 9 tablet, built by HTC.

Denver 2 is used in Nvidia's Tegra X2 "Parker" (2016, 16 nm, model T186). The Parker SoC has four Cortex-A57 cores and two Denver 2 cores, and is used in the Nvidia Drive PX2 and the Nvidia Jetson TX2.

Die

All Denver Chips

List of all Denver Chips: none listed (count: 0).

