 
** 3x [[FLOPs]]/cycle (192 FLOPs/cycle, up from 64 FLOPs/cycle)
 
 
* Memory
 
** Removed assignable data buffer (ADB)
** 16 MiB L3 [[LLC]]
*** Added 2 MiB L3 [[LLC]] slice per core
 
 
** 6x [[HBM2]] (from 12x [[DDR3]])
 
 
*** 4.7x memory bandwidth (1.2 TB/s, up from 256 GB/s)
 
 
== Block Diagram ==
 
 
=== Entire SoC ===
 
:[[File:sx-aurora block diagram.svg|600px]]
  
 
=== Vector core ===
 
:[[File:sx-aurora vector core block diagram.svg|700px]]
  
 
== Memory Hierarchy ==
 
  
 
== Overview ==
 
[[File:sx-aurora overview.svg|thumb|right|350px|Overview of the SX-Aurora|class=wikichip_ogimage]]
 
The SX-Aurora is [[NEC]]'s successor to the {{\\|SX-ACE}}, a [[vector processor]] designed for [[high-performance]] scientific/research applications and supercomputers. The SX-Aurora deviates from all prior chips in the kind of markets it is designed to address; accordingly, NEC made slightly different design choices compared to prior generations of vector processors. In an attempt to broaden their market, NEC extended beyond supercomputers to the conventional server and workstation market. This is done through the use of [[PCIe]]-based [[accelerator cards]].
 
  
 
=== Vector processing unit ===
 
 
[[File:sx-aurora-vpu.svg|thumb|right|vector processing unit (VPU) and 32 VPPs|400px]]
 
The bulk of the compute work is done on the vector processing unit (VPU). The VPU has a fairly simple pipeline, though it does employ [[out-of-order scheduling]]. [[Instructions]] issued by the SPU are sent to the [[instruction buffer]] where they await renaming, reordering, and scheduling. NEC renames the 64 [[architectural registers|architectural vector registers]] (VRs) onto 256 [[physical registers]]; renaming enables enhanced preloading and avoids [[WAR]]/[[WAW]] dependencies. Scheduling is relatively simple. There is a dedicated pipeline for complex operations: high-latency operations such as vector summation, division, and mask [[population count]] are sent to this execution unit, which exists to prevent stalls due to the long latencies involved in those operations.
  
The majority of the operations are handled by the vector parallel pipeline (VPP). The SX-Aurora doubles the number of VPPs per VPU from the {{\\|SX-ACE}}; each VPU now has 32 identical VPPs. Note that all of the control logic described before is outside of the VPP, which is a relatively simple block of vector execution. The VPP has an eight-port vector register file, 16 mask registers, six execution pipes, and a set of forwarding logic between them.
  
The six execution pipes include three [[floating-point]] pipes, two integer [[ALU]]s, and a combined complex/store pipe for data output. Note that ALU1 and the store pipe share the same read ports; likewise, FMA2 and ALU0 share a read port. All in all, the effective number of pipelines executing each cycle is four. Compared to the {{\\|SX-Ace}}, the SX-Aurora now has one extra FMA unit per VPP. The VPP is designed such that all three FMAs can execute each cycle – each one can be independently operated by a different vector instruction. Every FMA unit is 64-bit wide and supports narrower packed operations, such as 32-bit, for double the peak theoretical performance. NEC's SX architecture has a very long vector of 256 elements, with each element being 8 bytes (i.e., 2 KiB per vector register). Therefore, a single vector operation requires eight cycles (256 elements / 32 VPPs) to complete on a single FMA pipeline across all 32 VPPs.
  
 
The peak theoretical performance that can be achieved is 3 FMAs per VPP per cycle. With 32 VPPs per VPU, there are a total of 96 FMAs/cycle, or 192 DP FLOPs/cycle (each FMA counts as two floating-point operations). With a peak frequency of 1.6 GHz for the SX-Aurora Tsubasa vector processor, each VPU has a peak performance of 307.2 [[gigaFLOPS]]. Each FMA can also perform operations on packed data types; that is, single-precision throughput is doubled by packing two 32-bit elements into each 64-bit lane, for a peak performance of 614.4 [[gigaFLOPS]].
 
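The peak-throughput arithmetic above can be sketched as a quick calculation; this is a minimal sanity check using only the figures stated in this section:

```python
# Peak theoretical throughput of one VPU, from the figures in this section.
fma_pipes_per_vpp = 3   # all three FMA pipes can issue each cycle
vpps_per_vpu = 32
flops_per_fma = 2       # a fused multiply-add counts as two FLOPs
freq_ghz = 1.6          # SX-Aurora Tsubasa peak frequency

dp_flops_per_cycle = fma_pipes_per_vpp * vpps_per_vpu * flops_per_fma
print(dp_flops_per_cycle)                           # 192 DP FLOPs/cycle
print(round(dp_flops_per_cycle * freq_ghz, 1))      # 307.2 GFLOPS, double precision
print(round(dp_flops_per_cycle * freq_ghz * 2, 1))  # 614.4 GFLOPS, packed single
```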
  
 
== Memory subsystem ==
 
Maintaining a high [[bytes Per FLOP]] ratio is important for vector operations that rely on large data sets. With over five times the FLOPS per core, the SX-Aurora had to significantly improve the memory subsystem to keep workloads from becoming memory-bound and falling short of the chip's peak compute power. The {{\\|SX-Ace}} reached 256 GB/s of [[memory bandwidth]] using a whopping 16 channels of DDR3 memory; increasing the channel count further could not deliver a sufficiently large bandwidth improvement. For this reason, NEC opted to use [[HBM2]] memory instead. The SX-Aurora has six HBM2 modules delivering 1.22 TB/s of bandwidth, a nearly five-fold improvement over the {{\\|SX-Ace}}. However, despite the large memory bandwidth improvement, the SX-Aurora achieves 0.5 [[bytes/FLOPs]], half that of the {{\\|SX-Ace}}.
  
The SX-Aurora got rid of the 1 MiB assignable data buffer (ADB) from the {{\\|SX-Ace}} and added a memory-side cache designed to avoid snoop traffic. It is worth pointing out that the new LLC does retain an ADB-like feature whereby the priority of a [[cache line]] is controlled via a flag on vector memory access instructions. The cache is sliced into eight 2 MiB chunks, each consisting of 16 [[memory banks]], for a total of 128 memory banks. The LLC is [[inclusive]] of both the [[L1]] and [[L2]]. The [[last level cache|LLC]] interfaces with the IMC at 200 GB/s per chunk (1.6 TB/s in total), and the IMCs provide a memory bandwidth of 1.22 TB/s through the six HBM2 modules.
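The cache geometry and aggregate bandwidth described above can be verified with simple arithmetic; a minimal sketch using only the per-chunk figures stated in this section:

```python
# LLC geometry and aggregate bandwidth as described above.
chunks = 8
chunk_size_mib = 2
banks_per_chunk = 16
chunk_bw_gbs = 200   # per-chunk bandwidth to the IMC

print(chunks * chunk_size_mib)       # 16 MiB of LLC in total
print(chunks * banks_per_chunk)      # 128 memory banks
print(chunks * chunk_bw_gbs / 1000)  # 1.6 TB/s aggregate LLC bandwidth
```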
  
[[File:sx-aurora memory subsystem.svg|700px|center]]
 
 
=== B/FLOP ===
 
{{main|bytes_per_flop|l1=Bytes Per FLOP (B/F)}}
 
 
 
{|class="wikitable"
|-
! Type || HBM:SoC || LLC:NoC || NoC:Core
|-
| Rate || 0.5 B/FLOP || 1.22 B/FLOP || 2.67 B/FLOP
|}
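The ratios in the table above can be re-derived from bandwidths quoted elsewhere in the article. The chip peak (eight cores at 307.2 GFLOPS each) and the roughly 3 TB/s cache-side bandwidth are assumptions inferred from other sections, not figures stated in this table:

```python
# Re-deriving the B/FLOP table from figures elsewhere in the article.
chip_peak_gflops = 8 * 307.2   # eight cores at 307.2 GFLOPS each (assumed)
hbm_bw_gbs = 1220              # 1.22 TB/s of HBM2 bandwidth
llc_bw_gbs = 3000              # ~3 TB/s cache-side bandwidth (assumed)
core_noc_bw_gbs = 820          # request + reply crossbars, per core

print(round(hbm_bw_gbs / chip_peak_gflops, 2))   # 0.5  B/FLOP (HBM:SoC)
print(round(llc_bw_gbs / chip_peak_gflops, 2))   # 1.22 B/FLOP (LLC:NoC)
print(round(core_noc_bw_gbs / 307.2, 2))         # 2.67 B/FLOP (NoC:Core)
```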
 
  
 
== Mesh interconnect ==
 
All eight cores are interconnected using a 2D mesh network. The design favors minimal wiring for maximum bandwidth. The SX-Aurora features a 16-layer 2D mesh network that uses [[dimension-ordered routing]] along with virtual channels for requests and replies. As illustrated in the diagram below, the router crossbar points are arranged in a diamond shape in order to minimize the distance to the request and reply crossbars in the core. With the SX-Ace, replies to the cores were slightly unbalanced. In the SX-Aurora, the request crossbars and the reply crossbars each have the same bandwidth. With each request being 16 B, this works out to around 410 GB/s for each crossbar, or 820 GB/s per core. Coming from the cache side, there is a total of 3.05 TB/s of available bandwidth distributed across all eight LLC chunks.
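One plausible reading of the ~410 GB/s crossbar figure is that each of the 16 mesh layers moves one 16-byte packet per cycle at the 1.6 GHz clock; this derivation is an assumption consistent with the numbers above, not an NEC-stated formula:

```python
# Sketch: where the ~410 GB/s per-crossbar figure could come from, assuming
# one 16-byte packet per mesh layer per cycle at 1.6 GHz (assumed formula).
bytes_per_packet = 16
mesh_layers = 16
freq_ghz = 1.6

crossbar_bw_gbs = bytes_per_packet * mesh_layers * freq_ghz
print(crossbar_bw_gbs)       # 409.6 GB/s per crossbar (~410 GB/s)
print(crossbar_bw_gbs * 2)   # 819.2 GB/s per core (request + reply)
```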
 
 
 
 
  
[[File:sx-aurora 2d 16-layer mesh.svg|900px|center]]
  
 
== NUMA Mode ==
 
 
* {{bib|hc|30|NEC}}
 
 
* Supercomputing 2018, NEC Aurora Forum
 
* Supercomputing 2019, NEC
 
 
* ''Some information was obtained directly from NEC''
 
