From WikiChip
{{sc title|Astra}}
{{supercomputer
|name=Astra
|image=astra supercomputer illustration.png
|sponsor=U.S. Department of Energy
|designer=Cavium
|operator=Sandia National Laboratories
|introduction=2018
|peak dpflops=2.322 petaFLOPS
}}
'''Astra''' is a [[petascale]] [[ARM]] [[supercomputer]] operated by the [[DoE]]'s [[Sandia National Laboratories]]. Officially launched in late 2018, Astra became the first ARM-based supercomputer to exceed 1 [[petaFLOPS]] and the first [[ARM]]-based system to enter the [[TOP500]] list.

== History ==
 
 
<tr><th>Processors</th><td>5,184<br>2 × 72 × 36</td><td>&nbsp;</td><th>Type</th><td>[[DDR4]]</td><td>[[NVMe]]</td></tr>
<tr><th>Racks</th><td>36</td><td>&nbsp;</td><th>Node</th><td>128 GiB</td><td>?</td></tr>
<tr><th>Peak FLOPS</th><td>2.322 petaFLOPS (DP)<br>4.644 petaFLOPS (SP)</td><td>&nbsp;</td><th>Astra</th><td>324 TiB</td><td>403 TB</td></tr>
</table>
  
:[[File:astra 540-port switch.svg|thumb|right|540-port Switch]]
The system has a peak wall power of 1.6 MW.
 
  
 
 
Servers are linked via a Mellanox InfiniBand EDR interconnect in a three-level fat-tree topology, with a 2:1 taper at the first level. Astra uses three 540-port switches. Each is formed from 30 level-2 switches that contribute 18 downward-facing ports apiece (540 in total), with each level-2 switch's remaining 18 links going one to each of the 18 level-3 switches.
 
 
 
:[[File:astra rack connection l1 to l2.svg|650px]]
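The port arithmetic above can be sanity-checked with a few lines of Python. Note the 36-port radix of each level-2 switch (18 downlinks plus 18 uplinks) is inferred from the description rather than stated directly:

```python
# Port accounting for one of Astra's three 540-port fat-tree switch units.
# Assumption: each level-2 switch is a 36-port device split 18 down / 18 up.
L2_SWITCHES_PER_UNIT = 30
L3_SWITCHES = 18
DOWN_PORTS_PER_L2 = 18   # ports contributed to the unit's 540 external ports
UP_PORTS_PER_L2 = 18     # one uplink to each level-3 switch

unit_ports = L2_SWITCHES_PER_UNIT * DOWN_PORTS_PER_L2
uplinks_per_unit = L2_SWITCHES_PER_UNIT * UP_PORTS_PER_L2

assert unit_ports == 540                 # matches the "540-port" designation
assert UP_PORTS_PER_L2 == L3_SWITCHES    # exactly one link per level-3 switch
print(unit_ports, uplinks_per_unit)      # 540 540
```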
 
  
 
=== Compute Rack ===
 
 
<tr><th>Processors</th><td>144<br>72 × 2 × CPU</td></tr>
<tr><th>Cores</th><td>4,032 (16,128 threads)<br>72 × 56 (224 threads)</td></tr>
<tr><th>FLOPS (SP)</th><td>129 TFLOPS<br>72 × 2 × 28 × 32 GFLOPS</td></tr>
<tr><th>FLOPS (DP)</th><td>64.51 TFLOPS<br>72 × 2 × 28 × 16 GFLOPS</td></tr>
<tr><th>Memory</th><td>9 TiB ([[DDR4]])<br>72 × 2 × 8 × 8 GiB</td></tr>
<tr><th>Memory BW</th><td>24.57 TB/s<br>72 × 16 × 21.33 GB/s</td></tr>
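The rack-level figures follow directly from the per-node numbers; a minimal sketch that re-derives them (per-core rates of 32/16 GFLOPS SP/DP and the per-channel DDR4-2666 bandwidth are taken as given from the tables in this article):

```python
# Re-derive Astra's per-rack totals from the per-node figures.
NODES_PER_RACK = 72
SOCKETS_PER_NODE = 2
CORES_PER_SOCKET = 28
THREADS_PER_CORE = 4                 # ThunderX2 is 4-way SMT
SP_GFLOPS_PER_CORE = 32              # at 2 GHz
DP_GFLOPS_PER_CORE = 16
CHANNEL_GB_S = 2.666 * 8             # DDR4-2666: 2666 MT/s x 8 B ~= 21.33 GB/s

sockets = NODES_PER_RACK * SOCKETS_PER_NODE          # 144
cores = sockets * CORES_PER_SOCKET                   # 4,032
threads = cores * THREADS_PER_CORE                   # 16,128
sp_tflops = cores * SP_GFLOPS_PER_CORE / 1000        # 129.024 -> "129 TFLOPS"
dp_tflops = cores * DP_GFLOPS_PER_CORE / 1000        # 64.512 -> "64.51 TFLOPS"
mem_tib = sockets * 8 * 8 / 1024                     # 9,216 GiB = 9 TiB
rack_bw_tb_s = NODES_PER_RACK * 16 * CHANNEL_GB_S / 1000   # ~= 24.57 TB/s

print(sockets, cores, threads, sp_tflops, dp_tflops, mem_tib)
# 144 4032 16128 129.024 64.512 9.0
```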
=== Compute Node ===
[[File:apollo 70 system.png|right|thumb|Apollo 70]]
The basic compute server is the HPE Apollo 70, a dense chassis architecture that fits four dual-socket compute nodes in just 2U.
:[[File:astra apollo 70 node.jpg|500px]]
 
 
 
Each node has two 1,600 W power supplies, a 1 Gbps Ethernet management port, and a Mellanox ConnectX-5 EDR link, and pairs two [[Cavium]] {{cavium|ThunderX2}} [[ThunderX2 CN9975]] ({{cavium|Vulcan|l=arch}}) processors in a dual-socket configuration. For Astra, Sandia chose 28-core parts operating at 2 GHz, likely a better performance/power efficiency design point. Each chip supports up to eight channels of [[DDR4]] DIMMs at rates up to 2666 MT/s as well as 56 [[PCIe]] 3.0 lanes.
 
  
 
:[[File:astra node diagram.svg|600px]]
 
 
<tr><th>Processors</th><td>1 × CPU</td><td>2 × CPU</td></tr>
<tr><th>Cores</th><td>28 (112 threads)</td><td>56 (224 threads)</td></tr>
<tr><th>FLOPS (SP)</th><td>896 GFLOPS<br>28 × 32 GFLOPS</td><td>1,792 GFLOPS<br>2 × 28 × 32 GFLOPS</td></tr>
<tr><th>FLOPS (DP)</th><td>448 GFLOPS<br>28 × 16 GFLOPS</td><td>896 GFLOPS<br>2 × 28 × 16 GFLOPS</td></tr>
<tr><th>Memory</th><td>64 GiB ([[DDR4]])<br>8 × 8 GiB</td><td>128 GiB ([[DDR4]])<br>2 × 8 × 8 GiB</td></tr>
<tr><th>Bandwidth</th><td>170.7 GB/s<br>8 × 21.33 GB/s</td><td>341.33 GB/s<br>16 × 21.33 GB/s</td></tr>
</table>
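The per-core rates in these tables follow from two 128-bit FMA-capable SIMD pipelines per core (the pipeline count is an assumption about the Vulcan microarchitecture, consistent with the published peak numbers); scaling them up reproduces the system-level 2.322 petaFLOPS figure:

```python
# Derive per-core peak FLOPS and scale up to the full Astra system.
# Assumption: 2 x 128-bit FMA-capable SIMD pipes per Vulcan core.
FREQ_GHZ = 2.0
SIMD_PIPES = 2
LANES_SP, LANES_DP = 4, 2    # a 128-bit vector holds 4 x FP32 or 2 x FP64
FMA = 2                      # one fused multiply-add counts as 2 FLOPs

sp_per_core = SIMD_PIPES * LANES_SP * FMA * FREQ_GHZ   # 32 GFLOPS SP
dp_per_core = SIMD_PIPES * LANES_DP * FMA * FREQ_GHZ   # 16 GFLOPS DP

node_dp = 2 * 28 * dp_per_core                         # 896 GFLOPS per node
nodes = 36 * 72                                        # 2,592 nodes system-wide
system_dp_pflops = nodes * node_dp / 1e6               # 2.322432 ~= 2.322 PF
system_sp_pflops = 2 * system_dp_pflops                # ~= 4.644 PF

print(sp_per_core, dp_per_core, node_dp, round(system_dp_pflops, 3))
# 32.0 16.0 896.0 2.322
```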
 
== Power & Cooling ==
 
Astra has a nominal power consumption of 1.35 MW under [[LINPACK]]. The entire system is cooled by just 12 MCS-300 fan-coil racks.
 
{| class="wikitable"
! colspan="11" | Projected power of the system by component
|-
! colspan="5" | Per constituent rack type (W) !! !! colspan="5" | Total (kW)
|-
! Rack !! Wall !! Peak !! Nominal ([[LINPACK]]) !! Idle !! !! Racks !! style="background-color: yellow" | '''Wall''' !! Peak !! style="background-color: yellow" | '''Nominal (LINPACK)''' !! Idle
|-
| Compute || 39,888 || 35,993 || 33,805 || 6,761 || || 36 || 1,436.0 || 1,295.8 || 1,217.0 || 243.4
|-
| MCS-300 || 10,500 || 7,400 || 7,400 || 170 || || 12 || 126.0 || 88.8 || 88.8 || 2.0
|-
| Network || 12,624 || 10,023 || 9,021 || 9,021 || || 3 || 37.9 || 30.1 || 27.1 || 27.1
|-
| Storage || 11,520 || 10,000 || 10,000 || 1,000 || || 2 || 23.0 || 20.0 || 20.0 || 2.0
|-
| Utility || 8,640 || 5,625 || 4,500 || 450 || || 1 || 8.6 || 5.6 || 4.5 || 0.5
|-
| || || || || || || || style="background-color: yellow" | '''1,631.5''' || 1,440.3 || style="background-color: yellow" | '''1,357.3''' || 274.9
|}
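The highlighted totals can be checked by multiplying each per-rack figure by its rack count and summing, rounding to one decimal only at the end (values copied from the table above):

```python
# (per-rack wall W, per-rack nominal/LINPACK W, rack count) from the table
racks = {
    "Compute": (39_888, 33_805, 36),
    "MCS-300": (10_500, 7_400, 12),
    "Network": (12_624, 9_021, 3),
    "Storage": (11_520, 10_000, 2),
    "Utility": (8_640, 4_500, 1),
}

wall_kw = sum(wall * n for wall, _, n in racks.values()) / 1000
nominal_kw = sum(nom * n for _, nom, n in racks.values()) / 1000

print(round(wall_kw, 1), round(nominal_kw, 1))  # 1631.5 1357.3
```

The nominal total of about 1,357 kW matches the 1.35 MW LINPACK figure quoted above, and the 1,631.5 kW wall total is the source of the "1.6 MW peak wall power" statement.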
 
  
 
== Bibliography ==
* SNL (personal communication, August 2018).
* DOE. (June 18, 2018). "''[https://share-ng.sandia.gov/news/resources/news_releases/arm_supercomputer/ Arm-based supercomputer prototype to be deployed at Sandia National Laboratories]''" [Press Release]
* Kevin Pedretti, Jim H. Laros III, Si Hammond. (June 28, 2018). "''Vanguard Astra: Maturing the ARM Software Ecosystem for U.S. DOE/ASC Supercomputing''". ISC 2018.
* Schor, David. (August, 2018). "''[https://fuse.wikichip.org/news/1583/cavium-takes-arm-to-petascale-with-astra/ Cavium Takes ARM to Petascale with Astra]''"

[[category:supercomputers]]
