Aurora enters TOP500 supercomputer ranking at No. 2 with a challenge for reigning champ Frontier
Frontier maintained its top spot in the latest edition of the TOP500 for the fourth consecutive time and is still the only exascale machine on the list of the world’s most powerful supercomputers. Newcomer Aurora debuted at No. 2 in the ranking, and it’s expected to surpass Frontier once the system is fully built.
Frontier, housed at the Oak Ridge National Laboratory (ORNL) in Tenn., landed the top spot with an HPL score of 1.194 quintillion floating point operations per second (FLOPS), matching its score from earlier this year. A quintillion is 10¹⁸, or one exaFLOPS (EFLOPS). The speed measurement used in evaluating the computers is the High Performance Linpack (HPL) benchmark, which measures how quickly a system solves a dense system of linear equations.
Utilizing AMD EPYC 64C 2GHz processors, the Frontier system is based on the latest HPE Cray EX235a architecture and has a total of 8,699,904 combined CPU and GPU cores. Frontier also boasts a power efficiency rating of 52.59 GFlops/watt and relies on HPE’s Slingshot 11 network for data transfer.
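That efficiency figure can be sanity-checked from Frontier's own TOP500 numbers: dividing the HPL score by the power draw (both listed in the breakdown at the end of this article) reproduces the 52.59 GFLOPS/watt rating. A quick check in plain Python:

```python
# Frontier's TOP500 figures (from the breakdown below)
rmax_pflops = 1_194.00   # HPL score (Rmax), in petaFLOPS
power_kw = 22_703        # power draw, in kilowatts

# Convert to a common base: 1 PFLOPS = 1e6 GFLOPS, 1 kW = 1e3 W
gflops = rmax_pflops * 1e6
watts = power_kw * 1e3

efficiency = gflops / watts
print(f"{efficiency:.2f} GFLOPS/watt")  # → 52.59 GFLOPS/watt
```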
Debuting in second place and bumping Fugaku from that spot is the new Aurora system. Housed at the Argonne Leadership Computing Facility in Ill., Aurora posted an HPL score of 585.34 petaFLOPS (a petaFLOPS is 10¹⁵ FLOPS). Aurora was built by Intel and is based on the HPE Cray EX-Intel Exascale Compute Blade, which uses Intel Xeon CPU Max Series processors and Intel Data Center GPU Max Series accelerators. These communicate through HPE's Slingshot-11 network interconnect.
The TOP500 notes that Aurora’s numbers were submitted with a measurement of half of the planned final systems, meaning that it could exceed and displace Frontier from the lead position with a peak performance of 2 EFLOPS when finished.
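The arithmetic behind that projection is straightforward: Aurora's submitted half-system Rpeak is 1,059.33 PFLOPS (listed later in this article), and naively doubling it lands just above the 2 EFLOPS target. Actual scaling of the measured Rmax won't be perfectly linear, so this is a rough upper bound, not a prediction:

```python
# Aurora's submitted half-system figures (from the breakdown below)
half_rmax_pflops = 585.34      # measured HPL score (Rmax)
half_rpeak_pflops = 1_059.33   # theoretical peak (Rpeak)

# Naively double the half-system peak; 1 EFLOPS = 1,000 PFLOPS.
# Treat this as a rough upper bound on the finished system.
full_rpeak_eflops = 2 * half_rpeak_pflops / 1_000
print(f"projected full-system peak: {full_rpeak_eflops:.2f} EFLOPS")  # → 2.12 EFLOPS
```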
Another disruptor on the list is a new system named Eagle, which landed in the third spot, the highest rank ever achieved by a cloud system. Eagle is installed in the Microsoft Azure Cloud in the U.S., has an HPL score of 561.2 PFLOPS, and is based on Intel Xeon Platinum 8480C processors and NVIDIA H100 accelerators.
Supercomputer Fugaku remains in the top 10 but has fallen to the fourth spot for this edition. Fugaku placed second earlier this year and held the number one position from June 2020 until November 2021. With an HPL score of 442.01 PFLOPS, the system, located in Kobe, Japan, continues to hold the title of highest-ranked system outside the U.S.
The LUMI system, previously in the top three, now holds the number five spot with an HPL score of 379.70 PFLOPS, up from 309.10 PFLOPS on the last list. The system is the largest in Europe and has seen multiple upgrades, keeping it near the top of the list.
This edition of the TOP500 also highlights a few trends. For instance, based on this top 10 list, Intel, AMD, and IBM processors seem to be the preferred choice for HPC systems.
“Out of the TOP10, five systems use Intel Xeon processors (Aurora, Eagle, Leonardo, MareNostrum 5 ACC, and EOS NVIDIA DGX SuperPod), two systems use AMD processors (Frontier and LUMI), and two systems use IBM processors (Summit and Sierra),” a press statement reads.
The list also shows that the U.S. and China hold the most entries overall. The U.S. increased its lead from 150 machines on the previous list to 161 in November, while China dropped from 134 to 104 on the current list. By continent, North America improved from 160 machines to 171 on the current tally, Asia decreased from 192 to 169 machines, and Europe grew from 133 to 143 systems.
Systems absent from this list since the previous edition include Sunway TaihuLight, Perlmutter, Selene, and Tianhe-2A (Milky Way-2A).
Here is a breakdown of specific details for the 10 overall fastest on the TOP500 list for November 2023:
#1: Frontier
This HPE Cray EX system is the first U.S. system with a performance exceeding one exaFLOPS. It is installed at ORNL in Tenn., where it is operated for the Department of Energy (DOE).
- Cores: 8,699,904
- Rmax (PFLOPS): 1,194.00
- Rpeak (PFLOPS): 1,679.82
- Power (kW): 22,703
#2: Aurora
This new Intel system is based on HPE Cray EX – Intel Exascale Compute Blades. It is installed at the Argonne Leadership Computing Facility, Illinois, USA, where it is also operated for the Department of Energy (DOE).
- Cores: 4,742,808
- Rmax (PFLOPS): 585.34
- Rpeak (PFLOPS): 1,059.33
- Power (kW): 24,687
#3: Eagle
The new Eagle system is installed by Microsoft in its Azure cloud. This Microsoft NDv5 system is based on Xeon Platinum 8480C processors and NVIDIA H100 accelerators.
- Cores: 1,123,200
- Rmax (PFLOPS): 561.20
- Rpeak (PFLOPS): 846.84
- Power (kW):
#4: Supercomputer Fugaku (previously #2)
Fugaku is installed at the RIKEN Center for Computational Science (R-CCS) in Kobe, Japan.
- Cores: 7,630,848
- Rmax (PFLOPS): 442.01
- Rpeak (PFLOPS): 537.21
- Power (kW): 29,899
#5: LUMI (previously #3)
The upgraded LUMI system, another HPE Cray EX machine, is installed at the EuroHPC center at CSC in Finland. The European High-Performance Computing Joint Undertaking (EuroHPC JU) is pooling European resources to develop top-of-the-range exascale supercomputers for processing big data.
- Cores: 2,752,704
- Rmax (PFLOPS): 379.70
- Rpeak (PFLOPS): 531.51
- Power (kW): 7,107
#6: Leonardo (previously #4)
Leonardo is installed at a different EuroHPC site in CINECA, Italy. It is an Atos BullSequana XH2000 system with Xeon Platinum 8358 32C 2.6GHz as main processors, NVIDIA A100 SXM4 40 GB as accelerators, and Quad-rail NVIDIA HDR100 Infiniband as interconnect.
- Cores: 1,824,768
- Rmax (PFLOPS): 238.70
- Rpeak (PFLOPS): 304.47
- Power (kW): 7,404
#7: Summit (previously #5)
Also at ORNL in Tenn., Summit has 4,608 nodes, each housing two POWER9 CPUs with 22 cores apiece and six NVIDIA Tesla V100 GPUs, each with 80 streaming multiprocessors (SMs). The nodes are linked together with a Mellanox dual-rail EDR InfiniBand network.
- Cores: 2,414,592
- Rmax (PFLOPS): 148.60
- Rpeak (PFLOPS): 200.79
- Power (kW): 10,096
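The TOP500 core counts include both CPU cores and GPU streaming multiprocessors. Working backward from Summit's per-node configuration shows that the listed 2,414,592 cores correspond to exactly 4,608 nodes of 524 "cores" each:

```python
cpu_cores_per_node = 2 * 22   # two POWER9 CPUs, 22 cores each
gpu_sms_per_node = 6 * 80     # six V100 GPUs, 80 SMs each
per_node = cpu_cores_per_node + gpu_sms_per_node  # 524 "cores" per node

total_cores = 2_414_592       # Summit's listed core count
print(total_cores // per_node)            # → 4608 nodes
print(total_cores % per_node == 0)        # → True: divides evenly
```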
#8: MareNostrum 5 ACC (new to the list)
The MareNostrum 5 ACC system is new at No. 8 and installed at the EuroHPC/Barcelona Supercomputing Center in Spain. This BullSequana XH3000 system uses Xeon Platinum 8460Y processors with NVIDIA H100 and Infiniband NDR200.
- Cores: 680,960
- Rmax (PFLOPS): 138.20
- Rpeak (PFLOPS): 234.00
- Power (kW): 2,560
#9: Eos NVIDIA DGX SuperPOD (new to the list)
The new Eos system is based on the NVIDIA DGX H100 with Xeon Platinum 8480C processors, NVIDIA H100 accelerators, and Infiniband NDR400.
- Cores: 485,888
- Rmax (PFLOPS): 121.40
- Rpeak (PFLOPS): 188.65
- Power (kW):
#10: Sierra (previously #6)
This system, installed at the Lawrence Livermore National Laboratory, Calif., has an architecture very similar to Summit's. It is built from 4,320 nodes, each with two POWER9 CPUs and four NVIDIA Tesla V100 GPUs.
- Cores: 1,572,480
- Rmax (PFLOPS): 94.64
- Rpeak (PFLOPS): 125.71
- Power (kW): 7,438
Copyright © 2023 IDG Communications, Inc.