US Regains TOP500 Crown with Summit Supercomputer, While China Gradually Increases Its Lead over Other Countries in the Supercomputer Race

A supercomputer is a computer with a high level of computing performance compared to a general-purpose computer. The performance of a supercomputer is measured in floating-point operations per second (FLOPS) rather than in million instructions per second (MIPS). The fastest supercomputers can now perform more than a hundred quadrillion FLOPS, i.e. over 100 petaflops (PFLOPS).
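As a rough illustration of how a machine’s theoretical peak performance is estimated, the sketch below multiplies node count, sockets, cores, clock rate and FLOPs per cycle. All of the numbers in it are assumed, illustrative values rather than the specification of any real system.

```c
/* Illustrative theoretical-peak estimate:
 * peak FLOPS = nodes x sockets x cores x clock (Hz) x FLOPs per cycle.
 * All figures below are assumptions for illustration only. */
#include <stdio.h>

int main(void) {
    double nodes            = 4000;   /* assumed node count      */
    double sockets_per_node = 2;      /* CPUs per node           */
    double cores_per_socket = 22;     /* cores per CPU           */
    double clock_hz         = 3.0e9;  /* 3 GHz clock             */
    double flops_per_cycle  = 16;     /* e.g. wide SIMD with FMA */

    double peak = nodes * sockets_per_node * cores_per_socket
                  * clock_hz * flops_per_cycle;

    printf("theoretical peak: %.2f petaflops\n", peak / 1e15);  /* ~8.45 */
    return 0;
}
```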

 

Supercomputers play an important role in the field of computational science, and are used for a wide range of computationally intensive tasks in various fields, including quantum mechanics, weather forecasting, climate research, oil and gas exploration, molecular modeling (computing the structures and properties of chemical compounds, biological macromolecules, polymers, and crystals), and physical simulations (such as simulations of the early moments of the universe, airplane and spacecraft aerodynamics, the detonation of nuclear weapons, and nuclear fusion). Throughout their history, they have been essential in the field of cryptanalysis.

 

China, the US, and Japan are in a global race to develop the fastest supercomputer. For the first time since November 2012, the US claims the most powerful supercomputer in the world, leading a significant turnover in which four of the five top systems were either new or substantially upgraded. Summit, an IBM-built supercomputer now running at the Department of Energy’s (DOE) Oak Ridge National Laboratory (ORNL), captured the number one spot with a performance of 122.3 petaflops on High Performance Linpack (HPL), the benchmark used to rank the TOP500 list. “Summit will push the boundaries of computing and human understanding,” said Ginni Rometty, Chairman, President, and CEO of IBM.

 

Sunway TaihuLight, a system developed by China’s National Research Center of Parallel Computer Engineering & Technology (NRCPC) and installed at the National Supercomputing Center in Wuxi, drops to number two after leading the list for the past two years. Its HPL mark of 93 petaflops has remained unchanged since it came online in June 2016.

 

Sierra, a new system at the DOE’s Lawrence Livermore National Laboratory took the number three spot, delivering 71.6 petaflops on HPL.

 

Tianhe-2A, also known as Milky Way-2A, moved down two notches into the number four spot, despite receiving a major upgrade that replaced its five-year-old Xeon Phi accelerators with custom-built Matrix-2000 coprocessors.

 

The new AI Bridging Cloud Infrastructure (ABCI) is the fifth-ranked system on the list, with an HPL mark of 19.9 petaflops. The Fujitsu-built supercomputer is powered by 20-core Xeon Gold processors along with NVIDIA Tesla V100 GPUs. It’s installed in Japan at the National Institute of Advanced Industrial Science and Technology (AIST).

 

Despite the ascendance of the US at the top of the rankings, the country now claims only 124 systems on the list, a new low. Just six months ago, the US had 145 systems. Meanwhile, China improved its representation to 206 total systems, compared to 202 on the last list.

 

“Supercomputers in China are driven by the government, which has made huge investments as they want to take the lead in the TOP500,” said Professor Francis Lee from NTU’s School of Computer Science and Engineering. “And they are pursuing research in areas like astronomy and seismic simulation, which require a lot of computational power.”

 

A December 2016 report, based on meetings between the DOE and the National Security Agency, warned that US leadership in high-performance computing (HPC) was under immediate threat unless the US committed to a decade-long “surge” in investments to compete with China’s accelerating development.
Now, thanks mainly to Summit and Sierra, the US has taken the performance lead back from China. Systems installed in the US now contribute 38.2 percent of the aggregate installed performance, with China in second place at 29.1 percent.
The European Union is planning to spend one billion euros on supercomputers to help with research into creating artificial intelligence and fighting climate change. Brussels officials said Europe was “lagging behind” on supercomputers, noting that none of the world’s top ten most powerful machines were in the EU.

Supercomputers essential for National Security

High-speed supercomputers enable advanced computational modeling and data analytics across all areas of science and engineering. They are widely used in astrophysics, to understand stellar structure, planetary formation, galactic evolution and other interactions; in materials science, to understand the structure and properties of materials and to create new high-performance materials; and in sophisticated climate models, which capture the effects of greenhouse gases, deforestation and other planetary changes and have been key to understanding the effects of human activity on weather and climate change.

 

They are also useful in global environmental modeling for weather forecasting and for earthquake and tsunami prediction. Similarly, “big data,” machine learning and predictive data analytics, hailed as the fourth paradigm of science, allow researchers to extract insights from both scientific instruments and computational simulations, and supercomputers are routinely used for modeling automobile crashes, designing new drugs, and creating special effects for movies.

 

Better computers allow for more detailed simulations that more closely reproduce the physics, says Choong-Seock Chang of Princeton University. “With bigger and bigger computers, we can do more and more science, put more and more physics into the soup.” Plus, the computers allow scientists to reach their solution faster, Chang says. Otherwise, “somebody with a bigger computer already found the answer.”

 

Computing can also help us optimize designs. In a battery, for example, the behavior of the liquids and components within a working cell is intricate and constantly changing as the battery ages. Suppose we know we want a manganese cathode with a particular electrolyte; with these new supercomputers, we can more easily find the optimal chemical compositions and proportions for each.
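In practice such a search is often a massive parameter sweep: candidate compositions are scored by expensive simulations and the best-scoring mixture is kept. The toy sketch below illustrates only that pattern, using a brute-force grid search over a hypothetical manganese fraction and electrolyte concentration; the scoring function and its coefficients are entirely made up for illustration.

```c
/* Toy composition sweep. The score() function is a made-up stand-in
 * for an expensive first-principles simulation of a candidate mix. */
#include <stdio.h>
#include <math.h>

/* Hypothetical objective: higher is better. Coefficients are invented. */
static double score(double mn_fraction, double molarity) {
    return -pow(mn_fraction - 0.6, 2) - 0.5 * pow(molarity - 1.2, 2);
}

int main(void) {
    double best_f = 0.0, best_m = 0.0, best_s = -1e30;

    /* Brute-force grid search; a real campaign would distribute millions
     * of such evaluations across the nodes of a supercomputer. */
    for (double f = 0.0; f <= 1.0; f += 0.01) {
        for (double m = 0.5; m <= 2.0; m += 0.01) {
            double s = score(f, m);
            if (s > best_s) { best_s = s; best_f = f; best_m = m; }
        }
    }
    printf("best Mn fraction %.2f, electrolyte molarity %.2f M\n",
           best_f, best_m);
    return 0;
}
```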

 

Biology and biomedicine have been transformed by access to large volumes of genetic data. Inexpensive, high-throughput genetic sequencers have enabled the capture of organism DNA sequences and have made possible genome-wide association studies (GWAS) for human disease, human microbiome investigations, and metagenomic environmental studies. Supercomputers also allow plasma physicists to run simulations of fusion reactors across the range of length scales relevant to the ultra-hot plasma within, from a tenth of a millimeter to meters.

Deep learning on Summit could help scientists identify materials for better batteries, more resilient building materials and more efficient semiconductors. By training algorithms to predict materials’ properties, researchers may answer longstanding questions about materials’ behavior at atomic scales, says IBM.

 

Supercomputers have also become essential for national security: decoding encrypted messages, simulating complex ballistics models, nuclear weapon detonations and other weapons of mass destruction, developing new kinds of stealth technology, and running cyber defence and attack simulations.

 

TOP500 Race

Summit, an IBM-built supercomputer

Summit (and its sister machine, Sierra, at Lawrence Livermore National Laboratory) represents a major shift from how IBM structured previous systems. IBM developed a new computing architecture that combines high-performance POWER9 CPUs with AI-optimized GPUs from its partner NVIDIA, all linked at extremely high speeds and bandwidth.

 

In Summit’s new architecture, compute is embedded everywhere data resides, producing incredible speed and creating a system purpose-built for AI. “By building these supercomputers, we are building the world’s leading AI machines,” says Hillery Hunter, IBM Fellow; Director, Accelerated Cognitive Infrastructure.

Another radical shift is that Summit is built with components available to any enterprise — this technology is part of IBM’s product line, available to accelerate every business.

Summit has 4,356 nodes, each equipped with two 22-core POWER9 CPUs and six NVIDIA Tesla V100 GPUs. The nodes are linked together with a Mellanox dual-rail EDR InfiniBand network.
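For a sense of the aggregate scale implied by that node configuration, the short sketch below simply multiplies out the per-node counts quoted above; nothing beyond those published figures is assumed.

```c
/* Aggregate CPU-core and GPU counts implied by Summit's node configuration:
 * 4,356 nodes, each with 2 x 22-core POWER9 CPUs and 6 Tesla V100 GPUs. */
#include <stdio.h>

int main(void) {
    int nodes = 4356;
    int cpus_per_node = 2, cores_per_cpu = 22, gpus_per_node = 6;

    printf("total CPU cores: %d\n", nodes * cpus_per_node * cores_per_cpu); /* 191,664 */
    printf("total GPUs:      %d\n", nodes * gpus_per_node);                 /* 26,136  */
    return 0;
}
```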

Sunway TaihuLight

Sunway TaihuLight is installed at the National Supercomputing Center in Wuxi, west of Shanghai. Developed by China’s National Research Center of Parallel Computer Engineering & Technology (NRCPC) using entirely Chinese-designed processors, it had topped the list since June 2016 before being displaced by Summit.

 

The complete system has a theoretical peak performance of 125.4 Pflop/s with 10,649,600 cores, 1.31 PB of primary memory, and 20 PB of storage. It is based on the SW26010 processor developed by the Shanghai High Performance IC Design Center, designed and built in China using 28nm fabrication technology. It features the Shenwei-64 instruction set, a RISC (Reduced Instruction Set Computing) architecture that was also developed indigenously. This is the first Chinese supercomputer based upon an indigenous design and using indigenous manufacturing.

 

The TaihuLight system uses Sunway Raise OS 2.0.5, based on Linux. The software stack includes basic compiler components such as C/C++ and Fortran compilers, an automatic vectorization tool, and basic math libraries. The system also includes Sunway OpenACC, a customized parallel compilation tool that extends OpenACC to unique characteristics of the SW26010 processor.
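Sunway OpenACC extends the standard OpenACC directive model to the characteristics of the SW26010. As a generic illustration of that programming style (not of Sunway’s specific extensions), the sketch below offloads a simple vector addition with a standard OpenACC pragma; it assumes an OpenACC-capable C compiler, and with a compiler that does not support OpenACC the pragma is simply ignored and the loop runs serially.

```c
/* Minimal, generic OpenACC example (not Sunway-specific): offload a
 * vector addition to an accelerator using a standard OpenACC directive. */
#include <stdio.h>
#include <stdlib.h>

#define N 1000000

int main(void) {
    float *a = malloc(N * sizeof *a);
    float *b = malloc(N * sizeof *b);
    float *c = malloc(N * sizeof *c);

    for (int i = 0; i < N; i++) { a[i] = 1.0f; b[i] = 2.0f; }

    /* Copy a and b to the device, run the loop there, copy c back. */
    #pragma acc parallel loop copyin(a[0:N], b[0:N]) copyout(c[0:N])
    for (int i = 0; i < N; i++)
        c[i] = a[i] + b[i];

    printf("c[0] = %.1f\n", c[0]);
    free(a); free(b); free(c);
    return 0;
}
```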

 

Tianhe-2A, or Milky Way-2A

As noted above, Tianhe-2A received a major upgrade that replaced its five-year-old Xeon Phi accelerators with custom-built Matrix-2000 coprocessors. The new hardware increased the system’s HPL performance from 33.9 petaflops to 61.4 petaflops, while raising its power consumption by less than four percent. Tianhe-2A was developed by China’s National University of Defense Technology (NUDT) and is installed at the National Supercomputer Center in Guangzhou, China.

 

The earlier Tianhe-2 system was based on Intel Xeon CPUs and Xeon Phi accelerators, both of which the U.S. Department of Commerce banned for sale to four Chinese organizations, including the National University of Defense Technology, in April 2015.

 

Japan’s AI Bridging Cloud Infrastructure (ABCI)

Fujitsu announced in June 2018 that its AI Bridging Cloud Infrastructure (ABCI) system had placed 5th in the world, and 1st in Japan, in the TOP500 international performance ranking of supercomputers. ABCI also took 8th place in the world in the Green500, which ranks outstanding energy-saving performance. Fujitsu developed ABCI, Japan’s fastest open AI infrastructure featuring a large-scale, power-saving cloud platform geared toward AI processing, based on a tender issued by the National Institute of Advanced Industrial Science and Technology (AIST).

 

ABCI is a large-scale cloud platform focused on AI applications, consisting of 1,088 Fujitsu Server PRIMERGY CX2570 M4 x86 servers, each equipped with two Intel® Xeon® Scalable family processors and four NVIDIA® Tesla® V100 accelerators, the latest GPU computing card.

 

When it was announced, the supercomputer was expected to run at a speed of 130 petaflops, which would have surpassed the then-champion, China’s Sunway TaihuLight, operating at 93 petaflops. ABCI could help Japanese companies develop and improve driverless cars, robotics and medical diagnostics, explains Satoshi Sekiguchi, a director general at Japan’s National Institute of Advanced Industrial Science and Technology. “A supercomputer is an extremely important tool for accelerating the advancement in such fields,” he says. Its speed will also help Japan develop advances in artificial intelligence technologies, such as “deep learning.”

 

Green500 results

The Green500 ranks the world’s supercomputers by energy efficiency, measured in flops per watt. The top three positions are all taken by supercomputers installed in Japan that are based on the ZettaScaler-2.2 architecture using PEZY-SC2 accelerators, while all other systems in the top 10 use NVIDIA GPUs.

 

The most energy-efficient supercomputer is once again the Shoubu system B, a ZettaScaler-2.2 system installed at the Advanced Center for Computing and Communication, RIKEN, Japan. It was remeasured and achieved 18.4 gigaflops/watt during its 858 teraflops Linpack performance run. It is ranked number 362 in the TOP500 list.
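Those two figures imply the machine’s power draw during the run: dividing the Linpack performance by the energy efficiency gives roughly 47 kW, as the minimal sketch below shows (it uses only the numbers quoted above).

```c
/* Power draw implied by the Green500 figures quoted above:
 * power (W) = Linpack performance (FLOPS) / efficiency (FLOPS per watt). */
#include <stdio.h>

int main(void) {
    double rmax_flops = 858e12;   /* 858 teraflops Linpack run */
    double efficiency = 18.4e9;   /* 18.4 gigaflops per watt   */

    double watts = rmax_flops / efficiency;
    printf("implied power draw: %.1f kW\n", watts / 1e3);  /* ~46.6 kW */
    return 0;
}
```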

 

The second-most energy-efficient system is Suiren2 system at the High Energy Accelerator Research Organization/KEK, Japan. This ZettaScaler-2.2 system achieved 16.8 gigaflops/watt and is listed at position 421 in the TOP500. Number three on the Green500 is the Sakura system, which is installed at PEZY Computing. It achieved 16.7 gigaflops/watt and occupies position 388 on the TOP500 list.

Europe

Within the European Union, the fastest supercomputer is located in Italy, at CINECA, while the second fastest is located in the UK and belongs to the Met Office. It is ranked number 15 in the world overall, according to the TOP500 list.

 

Piz Daint, a Cray XC50 system installed at the Swiss National Supercomputing Centre (CSCS) in Lugano, Switzerland, delivers 19.59 petaflops and remains the most powerful supercomputer in Europe as a whole, now sitting at number six on the list. Piz Daint was upgraded last year with NVIDIA Tesla P100 GPUs, which more than doubled its previous HPL performance of 9.77 petaflops.

Titan, a five-year-old Cray XK7 system installed at the Department of Energy’s Oak Ridge National Laboratory and for years the largest system in the US, slips to number seven. Its 17.59 petaflops are mainly the result of its NVIDIA K20x GPU accelerators.

Technology trends

Almost all the supercomputers on the list (97.8 percent) are powered by main processors with eight or more cores and more than half (53.2 percent) have over 16 cores. The vast majority of chips in the Top500 list are Intel’s Xeon or Xeon Phi processors, which power 464 of the systems, with the remainder being IBM Power or AMD Opteron CPUs.

 

Accelerators are used in 110 TOP500 systems, a slight increase from the 101 accelerated systems in the November 2017 list. NVIDIA GPUs are the most popular accelerators, present in 98 of these systems, including five of the top 10: Summit, Sierra, ABCI, Piz Daint, and Titan. Seven systems are equipped with Xeon Phi coprocessors; PEZY accelerators are used in four systems; and the Matrix-2000 coprocessor is used in a single machine, the upgraded Tianhe-2A. An additional 20 systems use Xeon Phi as the main processing unit. HPE is the dominant system vendor, with 144 of its systems in the TOP500, while Cray leads in the performance stakes, representing 21.4 percent of the list’s total performance.

 

Ethernet, 10G or faster, is now used in 247 systems, up from 228 six months ago. InfiniBand is found on 139 systems, down from 163 on the previous list. Intel’s Omni-Path technology is in 38 systems, slightly up from 35 six months ago.

 

Most of the world’s fastest 500 supercomputers run Linux-based operating systems.

 

The race for the best supercomputer is about more than just power and processing speed. Energy efficiency is also a critical element that should be a major consideration in high-performance computing (HPC), said Natalie Bates, one of the leaders of the Energy Efficient High Performance Computing Working Group (EE HPC WG).

 

The next generation of systems being designed today will consume 30-50 MW, versus the 5-15 MW systems run today. Energy efficiency and power management will be key to reducing operating costs. A major focus will be selecting the right hardware architecture to provide the most energy-efficient computing for different applications.
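To put those power levels in perspective, the sketch below estimates the annual electricity bill for a 30-50 MW system. The $0.10 per kWh price is an assumed illustrative rate, not a figure from the sources above.

```c
/* Rough annual electricity cost for a 30-50 MW system, assuming an
 * illustrative price of $0.10 per kWh and continuous operation. */
#include <stdio.h>

int main(void) {
    double price_per_kwh  = 0.10;        /* assumed electricity price ($/kWh) */
    double hours_per_year = 24 * 365.0;  /* 8,760 hours                       */

    for (double mw = 30.0; mw <= 50.0; mw += 20.0) {
        double cost = mw * 1000.0 * hours_per_year * price_per_kwh;
        printf("%2.0f MW -> roughly $%.0f million per year\n", mw, cost / 1e6);
    }
    return 0;
}
```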

References and Resources also include:

http://nypost.com/2016/06/20/the-fastest-computer-in-the-world-uses-no-us-technology/

https://www.top500.org/news/us-regains-top500-crown-with-summit-supercomputer-sierra-grabs-number-three-spot/

https://www.independent.co.uk/news/uk/politics/supercomputers-eu-budget-1-billion-euros-fastest-computer-in-the-world-europe-commission-a8153231.html

https://www.rdmag.com/article/2018/08/green-supercomputing-why-energy-efficiency-just-key-performance-hpc
