From WikiChip
Aurora - Supercomputers
Revision as of 12:15, 30 October 2025 by 95.24.58.112 (talk) (add links)

Aurora
General Info
Sponsors: U.S. Department of Energy
Designers: Intel, Cray
Operators: Argonne National Laboratory
Introduction: 2023
Peak FLOPS: 1,980 petaFLOPS
Price: $600,000,000

Aurora is a state-of-the-art exascale supercomputer designed by Intel and Cray for the U.S. Department of Energy's (DoE) Argonne Leadership Computing Facility at Argonne National Laboratory (ALCF/ANL).

The system was expected to become the first supercomputer in the United States to break the exaFLOPS barrier.

Overview

With a price tag of around $600 million, Aurora was delivered in 2023 with a theoretical peak performance of nearly 2 exaFLOPS.

The computer features Intel Xeon CPU Max processors along with Intel Data Center GPU Max GPGPUs.
As listed on the June 2025 TOP500 (rank 3):
  • System: Aurora (2023) - HPE Cray EX, Intel Exascale Compute Blade, Xeon CPU Max 9470 52C 2.4 GHz, Intel Data Center GPU Max, Slingshot-11
  • Cores: 9,264,128
  • Rmax: 1,012,000,000 GFlop/s
  • Rpeak: 1,980,006,000 GFlop/s
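The gap between Rmax and Rpeak above can be expressed as HPL (Linpack) efficiency. A minimal sketch using the TOP500 figures quoted above (figures in GFlop/s):

```python
# HPL efficiency: the fraction of theoretical peak (Rpeak) that the
# Linpack benchmark actually sustained (Rmax). Figures in GFlop/s,
# taken from Aurora's June 2025 TOP500 entry.
rmax_gflops = 1_012_000_000    # sustained Linpack performance
rpeak_gflops = 1_980_006_000   # theoretical peak

efficiency = rmax_gflops / rpeak_gflops
print(f"HPL efficiency: {efficiency:.1%}")  # ~51.1%
```

Roughly half of theoretical peak on HPL is low by historical TOP500 standards, where leading systems often sustain 65-80% of Rpeak.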

History

Originally announced in April 2015, Aurora was planned for delivery in 2018 with a peak performance of 180 petaFLOPS, which was expected to make it the world's most powerful system at the time. It was to be built by Cray based on Intel's 3rd-generation Xeon Phi (Knights Hill microarchitecture).

In November 2017, Intel announced that Aurora had been shifted to 2021 and would be scaled up to 1 exaFLOPS, which was expected to make it the first supercomputer in the United States to break the exaFLOPS barrier. As part of the announcement, Knights Hill was canceled, to be replaced by a "new platform and new microarchitecture specifically designed for exascale".

Original Specifications

Original Specs
Model                        | BeBop / Theta (2017)                          | Aurora (2023)
Computing Power              | ~180 (?) PFLOPS                               | 1,980 PFLOPS
Compute Nodes                |                                               | >50,000
Processor                    | Intel Xeon Phi 7230 64C @1.3GHz, Cray CS400   | 3rd Generation Intel Xeon Phi (Knights Hill), replaced by Intel Xeon CPU Max 9470 52C @2.4GHz, Cray EX
Interconnect (Fabric)        | 2nd Generation Intel Omni-Path Architecture with silicon photonics / Aries Interconnect | Intel Exascale Compute Blade, Intel Data Center GPU Max, Slingshot-11
Cores                        | 46,720 / 280,320                              | 9,264,128
Linpack Performance (Rmax)   | 1.07 / 6.92 PFLOPS                            | 1,012 PFLOPS
Theoretical Peak (Rpeak)     | 1.75 / 11.66 PFLOPS                           | 1,980 PFLOPS
Nmax                         | 3,408,706 / 7,680,000                         | 28,773,888
HPCG                         |                                               | 5,612 TFLOPS
Memory                       |                                               | >7 PB DRAM and persistent memory
File System                  |                                               | Intel Lustre* File System
File System Capacity         |                                               | >150 Petabytes
File System Throughput       |                                               | >1 Terabyte/s
Peak Power                   | 13,000 kW (?) / 1,087 kW                      | 38,700 kW
FLOP/s Per Watt              |                                               | >13 GFLOP/s per watt
Facility Area                |                                               | ~3,000 sq. ft.
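The FLOP/s-per-watt figure can be cross-checked against the other table entries. A rough sketch, assuming the listed 38,700 kW peak power corresponds to the power draw during the HPL run:

```python
# Energy-efficiency cross-check from the spec table: divide the
# sustained Linpack rate by the listed peak power draw.
# Assumption: the 38,700 kW figure applies during the HPL run.
rmax_gflops = 1_012_000_000        # Rmax in GFlop/s (1,012 PFLOPS)
peak_power_watts = 38_700 * 1_000  # 38,700 kW

gflops_per_watt = rmax_gflops / peak_power_watts
print(f"{gflops_per_watt:.1f} GFLOP/s per watt")  # ~26.1
```

Under that assumption the delivered system comfortably clears the ">13 GFLOP/s per watt" target listed in the table.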


TOP500

DOE/SC/Argonne National Laboratory
URL: http://www.anl.gov/
Segment: Research
City: Argonne
Country/Region: United States

System | Year | Vendor | Cores | Rmax (GFlop/s) | Rpeak (GFlop/s)

Aurora - HPE Cray EX, Intel Exascale Compute Blade, Xeon CPU Max 9470 52C 2.4 GHz, Intel Data Center GPU Max, Slingshot-11 (2025/06 list, rank 3) | 2023 | Intel | 9,264,128 | 1,012,000,000 | 1,980,006,000
Improv - PowerEdge R6525, AMD EPYC 7713 64C 2.0 GHz, Infiniband | 2023 | DELL | 105,600 | 2,509,700 | 3,379,200
Polaris - Apollo 6500, AMD EPYC 7532 32C 2.4 GHz, NVIDIA A100 SXM4 40 GB, Slingshot-10 | 2021 | HPE | 256,592 | 25,810,000 | 34,163,190
Theta - Cray XC40, Intel Xeon Phi 7230 64C 1.3 GHz, Aries interconnect | 2017 | Cray/HPE | 280,320 | 6,920,900 | 11,661,312
BeBop - Cray CS400, Intel Xeon Phi 7230 64C 1.3 GHz/Xeon E5-2695v4, Intel Omni-Path | 2017 | Cray/HPE | 46,720 | 1,070,590 | 1,750,016
Cooley - Cray CS300-AC, Xeon E5-2620v3 6C 2.4 GHz, Infiniband FDR, Nvidia K80 | 2015 | Cray/HPE | 4,788 | 240,400 | 293,722
Mira - BlueGene/Q, Power BQC 16C 1.60 GHz, Custom | 2012 | IBM | 786,432 | 8,586,612 | 10,066,330
Cetus - BlueGene/Q, Power BQC 16C 1.60 GHz, Custom Interconnect | 2012 | IBM | 65,536 | 715,551 | 838,861
Vesta - BlueGene/Q, Power BQC 16C 1.60 GHz, Custom | 2012 | IBM | 32,768 | 357,776 | 419,430.4
Blues - Appro Xtreme-X Supercomputer, Xeon E5-2670 8C 2.60 GHz, Infiniband QDR | 2012 | Cray/HPE | 5,184 | 81,820 | 107,827
Magellan - iDataPlex, Xeon X55xx QC 2.66 GHz, Infiniband | 2010 | IBM | 4,032 | 38,659.3 | 42,997.3
Jazz 2 - iDataPlex, Xeon E55xx QC 2.53 GHz, Infiniband | 2009 | IBM | 2,640 | 22,333.9 | 26,716.8
Blue Gene/P Solution | 2008 | IBM | 4,096 | 11,710 | 13,926.4
Intrepid - Blue Gene/P Solution | 2007 | IBM | 163,840 | 458,611 | 557,056
eServer - Blue Gene Solution | 2005 | IBM | 2,048 | 4,713 | 5,734
Jazz - LCRC Xeon 2.4 GHz - Myrinet | 2002 | Linux Networx | 361 | 1,007 | 1,732.8

