
Global race to develop Exascale Supercomputers, the key to everything from scientific revolutions to military and strategic superiority

Science’s computing needs are growing exponentially. The next big leap in scientific computing is the race to exascale capability: a machine able to perform one million trillion floating-point operations per second (1 exaflops), equivalent to one thousand petaflops. Currently the fastest systems in the world perform between ten and 93 petaflops, or roughly one to nine percent of exascale speed.
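
To make the scale concrete, here is a minimal sketch in plain Python of how those systems compare with the exascale target; the 10 and 93 petaflop figures are simply the ones quoted above.

```python
# Compare today's fastest petascale systems with the exascale target.
# The 10 PF and 93 PF figures come from the paragraph above.

EXAFLOP = 1e18   # floating-point operations per second
PETAFLOP = 1e15

for name, pflops in [("~10 PF system", 10), ("93 PF leader", 93)]:
    flops = pflops * PETAFLOP
    print(f"{name}: {flops:.2e} FLOP/s = {flops / EXAFLOP:.0%} of one exaflop")
```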

 

China, the US, Japan and Europe are in a global race to build the first exascale supercomputer by 2020–2023. The United States aims to have Aurora operational sometime in 2021, when the US Department of Energy’s (DOE) Argonne National Laboratory in Lemont, IL, will power up a calculating machine the size of 10 tennis courts and vault the country into a new age of computing. The $500-million mainframe, called Aurora, could become the world’s first “exascale” supercomputer, running an astounding 10¹⁸, or 1 quintillion, operations per second. Aurora is expected to have more than twice the peak performance of the current supercomputer record holder, a machine named Fugaku at the RIKEN Center for Computational Science in Kobe, Japan.

 

China has said it would have an exascale machine by the end of 2020, although experts outside the country had expressed doubts about this timeframe even before the delays caused by the global severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) pandemic. The decision to develop domestically produced processors for these systems and the inclusion of new application use cases appears to be stretching out the timelines. Engineers in Japan and the European Union are not far behind. “Everyone’s racing to exascale,” Sorensen says. France has also recently revealed a specific plan. None of the efforts is expected to produce a sustained exascale machine until 2021, sustained exascale being defined as one exaflop of 64-bit performance on a real application.

 

Computing power is set to increase considerably in the coming years. Following Aurora, the DOE plans to bring online a $600-million machine named Frontier at Oak Ridge National Laboratory in Tennessee in late 2021, and a third supercomputer, El Capitan, at Lawrence Livermore National Laboratory in California two years later, each more powerful than its predecessor. The European Union has a range of exascale programs in the works under its European High-Performance Computing Joint Undertaking, whereas Japan is aiming for the exascale version of Fugaku to be available to users within a couple of years. China—which had no supercomputers as recently as 2001 but now boasts the fourth and fifth most powerful machines on Earth—is pursuing three exascale projects. China has said it expected the first, Tianhe-3, to be complete this year, but project managers say that the coronavirus pandemic has pushed back timelines.

 

At SC18, Depei Qian, chief scientist of China’s national R&D project on high performance computing, delivered a talk in which he revealed some details of the three Chinese exascale prototype systems: Sugon, Tianhe, and Sunway (ShenWei). The Sugon prototype is equipped with AMD-licensed Hygon x86 processors. The advantage of this design for the supercomputing community in China is that it maintains compatibility with HPC software already in production today.

 

The Tianhe prototype, including the processor that will power it, will be based on a Chinese-designed Arm chip, likely some version of Phytium’s Xiaomi platform. The prototype of China’s new-generation exascale supercomputer Tianhe-3 has completed tests for over 30 organizations in China, and it is expected to provide computing services to users in China and overseas, the National Supercomputer Center in Tianjin said. It has already provided computing services for over 50 applications in fields such as large aircraft, spacecraft, new-generation reactors, electromagnetic simulation and pharmaceuticals, the center said.

 

The Sunway (ShenWei) prototype uses the ShenWei 26010 (SW26010) processor, the 260-core chip that currently powers the third-ranked TaihuLight supercomputer. Each prototype node has two of these processors, which together deliver about 6 peak teraflops, and the entire 512-node machine offers 3.13 petaflops. “Although the prototype has not yet reached the goal of 1,000 petaflops, we’re planning to finish it by 2020,” said Zhang Yunquan, the center director. “We’re very hopeful that it will top the list of the world’s fastest supercomputers by then.” Almost all the components were domestically manufactured, including the processor, core chip units, and the operating, cooling and storage systems, said Zhang, who is also a research fellow at the state key laboratory of computer architecture in the Chinese Academy of Sciences’ institute of computing technology. “Our users can run test calculations on the prototype first and do optimizations.”
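
A quick back-of-envelope check of those prototype figures (a sketch only; the 512 nodes and roughly 6 teraflops per node are the numbers quoted above, and rounding accounts for the small gap to the reported 3.13 petaflops):

```python
# Sunway exascale prototype: 512 nodes, two SW26010 processors per node,
# roughly 6 peak teraflops per node according to the text above.
nodes = 512
teraflops_per_node = 6
peak_petaflops = nodes * teraflops_per_node / 1000
print(f"Estimated peak: {peak_petaflops:.2f} PF (reported: 3.13 PF)")
```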

 

A supercomputer can work on data from ocean exploration, meteorology, information security, space exploration, new energy and materials, modern agriculture and advanced manufacturing, the report said.  An Hong, professor of computer science with the University of Science and Technology of China in Hefei and a member of a committee advising the central government on high performance computer development, said the world’s first exascale computer would have a dedicated mission of helping China’s maritime expansion.

 

In June 2018, the U.S. reclaimed the title of world’s most powerful supercomputer from China. Developed by engineers at DOE’s Oak Ridge National Laboratory in Tennessee, the Summit supercomputer can perform 200,000 trillion calculations a second, or 200 petaflops, breaking the record set by China’s top-ranked Sunway TaihuLight with its processing capacity of about 125.4 petaflops.

 

HPC plays a vital role in the design, development and analysis of many – perhaps almost all – modern weapon systems and national security systems: e.g., nuclear weapons, cyber, ships, aircraft, encryption, missile defense, precision-strike capability, and hypersonics, says a report from the NSA-DOE Technical Meeting on High Performance Computing. National security requires the best computing available, and loss of leadership in HPC would severely compromise US national security. Loss of leadership in HPC could significantly reduce the US nuclear deterrent and the sophistication of future US weapons systems. Moreover, if China fields a weapons system with new capabilities based on superior HPC, and the US cannot accurately estimate its true capabilities, there is a serious possibility of over- or under-estimating the threat.

 

In June 2017, the United States Department of Energy’s Exascale Computing Project announced it was awarding six companies — AMD, Cray, HPE, IBM, Intel and Nvidia — $258 million to research building the nation’s first exascale supercomputer. Called A21, the Argonne computer will be built by Intel and Cray and is expected to supercharge simulations of everything from the formation of galaxies to the turbulent flows of gas in combustion. “With exascale we can put a lot more physics in there,” says Choong-Seock Chang, a physicist at the Princeton Plasma Physics Laboratory in New Jersey who plans to use A21 to model the plasma physics inside a fusion reactor. Supercomputer maker Cray said in October 2018 that it plans to create its first line of exascale-class supercomputers. Codenamed Shasta, the system will be 10-100 times faster than the fastest supercomputers today, Cray CEO Peter Ungaro told VentureBeat in an interview.

 

Japan, meanwhile, is currently on track to put its first exascale supercomputer into production in 2022. The post-K exascale (1,000 petaFLOPS) system project was started by Fujitsu and the Japanese research institute RIKEN in October 2014. It followed the successful development of the K supercomputer, a 10.5 petaFLOPS system with 705,000 SPARC64 VIIIfx cores. Field trials of Fujitsu’s prototype post-K exascale supercomputer CPU have begun.

 

A new European exascale computing project, known as EuroEXA, kicked off at the Barcelona Supercomputing Center during a meeting that brought together the 16 organizations involved in the effort. EuroEXA is the latest in a series of exascale investments by the European Union (EU), which will contribute €20 million to the project over the next three and a half years.

 

“Who gets there first, I don’t know,” says Jack Dongarra, a computer scientist at the University of Tennessee in Knoxville. Along with bragging rights, the nations that achieve this milestone early will have a leg up in the scientific revolutions of the future. All four of the major players—China, the United States, Japan, and the EU—have gone all-in on building out their own CPU and accelerator technologies, Sorensen says. “It’s a rebirth of interesting architectures,” he says. “There’s lots of innovation out there.”

Strategic importance of Exascale Supercomputers

Chemistry, cosmology, high-energy physics, materials science, oil exploration, and transportation will likely all benefit from exascale computing. Exascale supercomputers will enable simulations that are more complex and of higher resolution, allowing researchers to explore the molecular interactions of viruses and their hosts with unprecedented fidelity. In principle, the boost in computing power could help researchers better understand how lifesaving molecules bind to various proteins, guide biomedical experiments in HIV and cancer trials, or even aid in the design of a universal influenza vaccine. And whereas current computers can only model one percent of the human brain’s 100 billion neurons, exascale machines are expected to be able to simulate 10 times more of the brain’s capabilities, in principle helping to elucidate memory and other neurological processes.

 

Exascale level computing could have an impact on almost everything, Argonne National Laboratory Distinguished Fellow Paul Messina said. It can help increase the efficiency of wind farms by determining the best locations and arrangements of turbines, as well as optimizing the design of the turbines themselves. It can also help severe weather forecasters make their models more accurate and could boost research in solar energy, nuclear energy, biofuels and combustion, among many other fields.

 

On a grander scale, the next generation of computers promises to offer insight into the potentially disastrous effects of climate change. Weather phenomena are prototypical examples of chaotic behavior in action, with countless minor feedback loops that have planetary-scale consequences. A coordinated effort is under way to build the Energy Exascale Earth System Model (E3SM), which will simulate biogeochemical and atmospheric processes over land, ocean, and ice with up to two orders of magnitude better resolution than current models. This should more accurately reproduce real-world observations and satellite data, helping determine where adverse effects such as sea-level rise or storm inundation might do the most damage to lives and livelihoods. Exascale power will allow climate forecasters to swiftly run thousands of simulations, introducing tiny variations in the initial conditions to better gauge the likelihood of events a hundred years hence.
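
The ensemble idea can be illustrated with a toy sketch: run the same chaotic model many times with tiny perturbations of the initial condition and examine the spread of outcomes. The logistic map and all parameters below are illustrative stand-ins, not E3SM.

```python
import random

def toy_climate_model(x0, steps=1000, r=3.9):
    """Logistic map used as a stand-in for a chaotic climate process."""
    x = x0
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

base_state = 0.5
# Perturb the initial condition by one part in a million, as in ensemble forecasting.
ensemble = [toy_climate_model(base_state + random.uniform(-1e-6, 1e-6))
            for _ in range(1000)]
print(f"mean outcome: {sum(ensemble) / len(ensemble):.3f}")
print(f"spread: {min(ensemble):.3f} .. {max(ensemble):.3f}")
```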

 

Dr Joussaume, chair of PRACE’s Scientific Steering Committee in 2015 and an expert in climate modeling, said European climate researchers needed access to the next generation of the most powerful machines if they were to maintain their expertise in the subject.

 

“For example, it’s clear from some of our pilot projects that exascale computing power could help us make real progress on batteries,” Messina said. Brute computing force is not sufficient, however, Messina said; “We also need mathematical models that better represent phenomena and algorithms that can efficiently implement those models on the new computer architectures.” Given those advances, researchers will be able to sort through the massive number of chemical combinations and reactions to identify good candidates for new batteries.

 

“Computing can help us optimize. For example, let’s say that we know we want a manganese cathode with this electrolyte; with these new supercomputers, we can more easily find the optimal chemical compositions and proportions for each,” he said. Exascale computing will help researchers get a handle on what’s happening inside systems where the chemistry and physics are extremely complex. To stick with the battery example: the behavior of liquids and components within a working battery is intricate and constantly changing as the battery ages.

 

Better computers allow for more detailed simulations that more closely reproduce the physics, says Choong-Seock Chang of Princeton University. “With bigger and bigger computers, we can do more and more science, put more and more physics into the soup.” Plus, the computers allow scientists to reach their solution faster, Chang says. Otherwise, “somebody with a bigger computer already found the answer.”

 

High-speed supercomputers enable advanced computational modeling and data analytics applicable to all areas of science and engineering. They are widely used in astrophysics, to understand stellar structure, planetary formation, galactic evolution and other interactions; in materials science, to understand the structure and properties of materials and to create new high-performance materials; and in sophisticated climate models, which capture the effects of greenhouse gases, deforestation and other planetary changes and have been key to understanding the effects of human behavior on the weather and climate change.

 

They are also useful in global environmental modeling for weather, earthquake and tsunami prediction, in modeling automobile crashes, designing new drugs, and creating special effects for movies. Similarly, “big data,” machine learning and predictive data analytics have been hailed as the fourth paradigm of science, allowing researchers to extract insights from both scientific instruments and computational simulations.

 

Biology and biomedicine have been transformed by access to large volumes of genetic data. Inexpensive, high throughput genetic sequencers have enabled capture of organism DNA sequences and have made possible genome-wide association studies (GWAS) for human disease and human microbiome investigations, as well as metagenomics environmental studies.

 

Estimates of the resources needed to create a real-time, human-brain-scale simulation suggest about 1 to 10 exaflop/s of compute with around 4 petabytes of memory. One of the objectives of the Human Brain Project is to develop hardware architectures and software systems for visually interactive, multi-scale supercomputing moving towards the exascale.
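
A hedged back-of-envelope calculation shows where a figure of this order can come from; the neuron and synapse counts are commonly cited estimates, and the per-synapse costs are assumptions for illustration, not Human Brain Project results.

```python
# Rough estimate of compute for a real-time, brain-scale simulation.
neurons = 1e11              # ~100 billion neurons (figure used in the text above)
synapses_per_neuron = 1e4   # order-of-magnitude estimate (assumption)
updates_per_second = 1e2    # assumed synaptic update rate
flops_per_update = 10       # assumed cost of one synaptic update

total = neurons * synapses_per_neuron * updates_per_second * flops_per_update
print(f"~{total:.0e} FLOP/s, i.e. about {total / 1e18:.0f} exaflop/s")
```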

 

Supercomputers allow plasma physicists to simulate fusion reactors across the range of length scales relevant to the ultra-hot plasma within — from a tenth of a millimeter to meters in size. Supercomputers have also become essential for national security: for decoding encrypted messages, simulating complex ballistics models, nuclear weapon detonations and other WMD, developing new kinds of stealth technology, and for cyber defence/attack simulation.

 

Exascale computers critical to National Security

Exascale computing is also key for national security. Other countries are making plans for exascale, and this could affect US security. Former Defense Secretary Robert Gates said in a 2011 interview with the New York Times that one nation with a growing global presence is much farther ahead in aircraft design than US intelligence services had thought, and HPC plays a very important role in aircraft design. The scale of today’s leading HPC systems, which operate at the petascale, has put a strain on many simulation codes. “The ultimate goal of Airbus is to simulate an entire aircraft on computer,” according to Chaput, senior manager of flight-physics methods and tools at Airbus.

 

Paired with machine learning, exascale computers should enhance researchers’ capacity for teasing out important patterns in complex datasets. For instance, experimental nuclear fusion reactors, where superheated plasma is contained within powerful magnetic fields, have artificial intelligence (AI) programs on supercomputers that indicate when the plasma might be on the verge of becoming unstable. Computers can then adjust the magnetic fields to shepherd the plasma and keep it from breaching its constraints and hitting the walls of a reactor. Exascale machines should allow for faster reaction times and greater precision in such systems.
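
Conceptually, such a system is a sense-predict-adjust loop. The sketch below only illustrates that loop; the risk model, threshold, and adjustment rule are invented placeholders, not a real tokamak controller.

```python
import random

def predict_disruption_risk(diagnostics):
    # Placeholder for a trained AI model running on the supercomputer.
    return sum(diagnostics) / len(diagnostics)

field_strength = 1.0
for step in range(5):
    diagnostics = [random.random() for _ in range(8)]  # simulated sensor readings
    risk = predict_disruption_risk(diagnostics)
    if risk > 0.6:                 # assumed instability threshold
        field_strength *= 1.05     # nudge the confining field
    print(f"step {step}: risk={risk:.2f} field={field_strength:.3f}")
```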

 

“Artificial intelligence is helping to identify relationships that are impossible to find using traditional computing,” says Paresh Kharya, who is responsible for data center product management at the AI computing platform company NVIDIA in Santa Clara, CA.

 

Supercomputers remain indispensable for the maintenance of a nuclear deterrent and the design of nuclear weapons through “virtual nuclear tests”. Supercomputers have helped Russia and China develop and deploy an entirely new generation of nuclear weapons, again without testing.

 

Exascale computers are also required by intelligence agencies like the NSA and GCHQ for counter-terrorism operations. They collect vast amounts of signals intelligence, such as the phone calls of an entire nation, and listen to satellite and radio communications to identify patterns of behavior or connections between individuals and/or events that are relevant to national security. Handling this big data is becoming an increasingly important part of intelligence services’ surveillance programs worldwide.

 

The NSA program to monitor email communications and web surfing all over the world, XKeyscore, collected 41 billion records during a 30-day period in 2012. Enormous computing power and storage capacity is needed to process the data and find the needles in the haystacks. Some of the most interesting collected data are encrypted, and the extensive processes for decryption require huge amounts of computing power.
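
For a sense of the ingest rate implied by those figures, here is the simple arithmetic on the numbers quoted above:

```python
# 41 billion records collected over a 30-day period.
records = 41e9
seconds = 30 * 24 * 3600
print(f"~{records / seconds:,.0f} records per second, sustained for a month")
```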

 

They are also essential for cybersecurity. “Being able to process network data in near real time to see where threats are coming from, to see what kinds of connections are being made by malicious nodes on the network, to see the spread of software or malware on those networks, and being able to model and interdict and track the dynamics on the network regarding things that national security agencies are interested in,” says Tim Stevens, a teaching fellow in the war studies department at King’s College London, “those are the realms in which supercomputing has a real future.”

 

DARPA launched its Ubiquitous High Performance Computing (UHPC) program to help analyse the tidal wave of data that military systems and sensors are expected to produce. Supercomputers’ ability to accurately forecast weather is also essential for the operation of military and commercial aircraft.

 

New University of Illinois Center to Leverage High-Performance Computing to Advance Hypersonic Propulsion

The U.S. Department of Energy’s National Nuclear Security Administration Advanced Simulation and Computing program announced in October 2020 that it will fund a new Center for Exascale-enabled Scramjet Design (CEESD) at the University of Illinois at Urbana-Champaign. U of I will receive $17 million over a five-year period.

 

Willett Professor and Head of the Department of Aerospace Engineering Jonathan Freund is the application co-director and principal investigator of CEESD. He said air-breathing hypersonic propulsion is the key to expanding access to space, enhancing defense, and accelerating global transport.

 

“The needed supersonic combustion ramjets (scramjets) have been demonstrated but are insufficiently engineered for many applications,” Freund said. “Their promise is revolutionary but their challenge is profound—to maintain combustion, with its modest flame speeds, in supersonic air flow.”

 

“Advanced lightweight composite materials provide a new design paradigm that can facilitate thermal management through temperature resistance and/or strategic ablation,” Freund said. “Predictive simulations, realized by the integration of multiple physical models and performance-enabled with advanced computer science methods, will constitute a fundamental advance that circumvents testing costs that currently hinder design.”

 

“High-performance computing is enabling for our design goals, and the center will, at the same time, provide a unique educational experience,” Gropp said. “The computer science students will be trained to work effectively with computational scientists, who are facing challenging prediction goals. Likewise, computational scientists will learn computer science approaches and opportunities within the team structure.”

 

Global Race to Exascale Supercomputers

Qian also talked about more specific goals for these supercomputers. Specifically, a Chinese exascale system will provide a peak performance of one exaflop (so apparently ignoring the Linpack requirement that most other nations are adhering to); a minimum system memory capacity of 10 PB; an interconnect that offers HPC-style latency and scalability and delivers 500 Gbps of node-to-node bandwidth, although most of these systems seem to be topping out at 400 Gbps; and a system-level energy efficiency of at least 30 gigaflops per watt. That 30 gigaflops/watt figure works out to about 33 megawatts for an exaflop, which is slightly higher than the 20 MW to 30 MW being envisioned in exascale programs in the US, Japan, and the EU – and those are for Linpack exaflops. In fact, Qian said energy efficiency is their number one challenge, the lesser ones being application performance, programmability, and resilience.
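
The power arithmetic behind that 33 MW figure is straightforward, as the short calculation below shows; the 50 GF/W line is included only to illustrate what efficiency would be needed to reach the roughly 20 MW envelope mentioned for other programs.

```python
def exaflop_power_megawatts(gflops_per_watt):
    """Power draw of a 1 exaflop (1e18 FLOP/s) system at a given efficiency."""
    watts = 1e18 / (gflops_per_watt * 1e9)
    return watts / 1e6

for eff in (30, 50):
    print(f"{eff} GF/W -> {exaflop_power_megawatts(eff):.1f} MW per exaflop")
```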

 

China is developing and building Tianhe-3, the world’s first exascale supercomputer, a leading scientist said. When completed it will be capable of a quintillion (a billion billion, or 1 followed by 18 zeros) calculations per second. It will be 10 times faster than the current world leader, China’s Sunway TaihuLight, and will “become an important platform for national scientific development and industrial reforms,” Mr Meng Xiangfei, head of the applications department of the National Supercomputer Centre, told reporters on the sidelines of the 19th National Congress of the Communist Party of China.

 

The National Supercomputing Center in Shenzhen plans to build a next-generation supercomputer that will be 10 times faster than the world’s current speed champion, a senior executive said. “The investment is likely to hit 3 billion yuan ($470.6 million), and key technologies for the supercomputer are expected to be developed independently,” Wang Zhenglu, director of the project management department of the center told China Daily.

 

Japan is also a front-runner in the race for exascale; the nation has promised to stand up its first exascale machine, “Post-K” (since named Fugaku), by early 2022. Post-K is the successor to the K computer, Japan’s current reigning number-cruncher, and will be some 100 times faster.

 

Seven European countries – France, Germany, Italy, Luxembourg, the Netherlands, Portugal and Spain – recently signed an agreement to establish EuroHPC. It calls for “acquiring and deploying an integrated world-class high-performance computing infrastructure…available across the EU for scientific communities, industry and the public sector, no matter where the users are located.” The announcement was made by the European Commission. The procurement processes aim at the acquisition of two world-class pre-exascale supercomputers, preferably starting in 2019–2020, and two world-class full exascale supercomputers, preferably starting in 2022–2023.

 

“It’s a race analogous to the space race,” said Horst Simon, Deputy Laboratory Director at Lawrence Berkeley National Laboratory and a co-founder of the TOP500 project, which regularly ranks the world’s supercomputers. “As with the race to space,” Sarah Laskow notes, “there are many, parallel reasons for the world’s governments to want to produce the best technology in this arena. One is national prestige. There’s scientific discovery; but also national security. And there are economic spin-off effects. It’s a very competitive activity.”

 

The US Commerce Department said in June 2019 that  it was adding several Chinese companies and a government-owned institute involved in supercomputing with military applications to its national security “entity list” that bars them from buying US parts and components without government approval.  The export restriction announcement adding the firms to what is effectively a trade blacklist is the latest effort by the Trump administration to restrict the ability of Chinese firms to gain access to US technology amid an ongoing trade war.

 

The department said it was adding Sugon, the Wuxi Jiangnan Institute of Computing Technology, Higon, Chengdu Haiguang Integrated Circuit and Chengdu Haiguang Microelectronics Technology – along with numerous aliases of the five entities – to the list over concerns about military applications of the supercomputers they are developing.  Wuxi Jiangnan Institute of Computing Technology is owned by the 56th Research Institute of the General Staff of China’s People’s Liberation Army, the Commerce Department said, adding “its mission is to support China’s military modernization.”

 

China’s state broadcaster, China Radio International, said in an editorial  that the move was one of a series of recent actions by the United States that violated the consensus reached by President Donald Trump and his Chinese counterpart Xi Jinping in Argentina last December. “No matter whether it is aimed at suppressing Chinese technology or its long-term economic development, or put pressure on China in the trade negotiations, the United States will not achieve its aims,” it said.

 

US Exascale Initiatives

The United States’ exascale computing efforts, involving three separate machines, total US $1.8 billion for the hardware alone, says Jack Dongarra, a professor of electrical engineering and computer science at the University of Tennessee. He says exascale algorithms and applications may cost another $1.8 billion to develop. In the United States, he says, two exascale machines will be used for public research and development, including seismic analysis, weather and climate modeling, and AI research. The third will be reserved for national-security research, such as simulating nuclear weapons.

 

“The first one that’ll be deployed will be at Argonne [National Laboratory, near Chicago], an open-science lab. That goes by the name Aurora or, sometimes, A21,” Dongarra says. It will have Intel processors, with Cray developing the interconnecting fabric between the more than 200 cabinets projected to house the supercomputer. A21’s architecture will reportedly include Intel’s Optane memory modules, which represent a hybrid of DRAM and flash memory. Peak capacity for the machine should reach 1 exaflop when it’s deployed in 2021.

 

The other U.S. open-science machine, at Oak Ridge National Laboratory, in Tennessee, will be called Frontier and is projected to launch later in 2021 with a peak capacity in the neighborhood of 1.5 exaflops. Its AMD processors will be dispersed in more than 100 cabinets, with four graphics processing units for each CPU. The third, El Capitan, will be operated out of Lawrence Livermore National Laboratory, in California. Its peak capacity is also projected to come in at 1.5 exaflops. Launching sometime in 2022, El Capitan will be restricted to users in the national security field.

 

The US government, under its Exascale Computing Project, is giving six companies a total of $258 million to build an exascale supercomputer. The Department of Energy has awarded AMD, Cray, Hewlett-Packard Enterprise (HPE), IBM, Intel and Nvidia $258 million in funding over a three-year period. The six corporations won’t depend solely on the government’s money, though — to show that they’re also fully invested in the project, they’ll cover 40 percent of the total cost, which could amount to at least $430 million.
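
The $430 million figure follows from the cost split described above: if the government’s $258 million covers the remaining 60 percent, the total comes to roughly that amount.

```python
government_share = 258e6          # DOE funding in dollars
company_fraction = 0.40           # companies cover 40 percent of total cost
total_cost = government_share / (1 - company_fraction)
print(f"Total program cost: ~${total_cost / 1e6:.0f} million")
print(f"Company contribution: ~${total_cost * company_fraction / 1e6:.0f} million")
```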

 

The American intelligence agency IARPA has launched the “Cryogenic Computer Complexity” (C3) program to develop an exascale supercomputer with “a simplified cooling infrastructure and a greatly reduced footprint.” The project has awarded contracts to three major technology companies: International Business Machines, Raytheon BBN Technologies and Northrop Grumman.

 

China’s homegrown technology to exascale

China’s three announced exascale projects, Dongarra says, also each have their own configurations and hardware. In part because of President Trump’s China trade war, China will be developing its own processors and high-speed interconnects.

 

From 2013 through 2017, Chinese machines occupied the number one slot in rankings of the world’s most powerful supercomputers. “China is leading the world in supercomputer application,” Meng said, adding that Tianhe-1 is serving more than 1,600 research institutes and companies from more than 20 provinces. Users are taking advantage of the massive computing power to scan the Earth for oil, create artificial nuclear fusion, and build airplanes and maritime equipment, said Mr Meng, a delegate to the Party Congress.

 

“China is very aggressive in high-performance computing,” Dongarra notes. “Back in 2001, the Top 500 list had no Chinese machines. Today they’re dominant.” As of June 2019, China had 219 of the world’s 500 fastest supercomputers, whereas the United States had 116. (Tally together the number of petaflops in each machine and the numbers come out a little different. In terms of performance, the United States has 38 percent of the world’s HPC resources, whereas China has 30 percent.)

 

Tianhe-3 will step up development in big data and artificial intelligence, Meng said. It will also work with traditional industries like steel and mining to build new business models, as well as enabling smart healthcare and smart cities.

 

Feng Liqiang, operational director of the Marine Science Data Centre in Qingdao, Shandong, said the exascale computer would be able to pull all marine-related data sets together to perform the most comprehensive analysis ever. “It will help, for instance, the simulation of the oceans on our planet with unprecedented resolution. The higher the resolution, the more reliable the forecast on important issues such as El Nino and climate change,” he said. “It will give China a bigger say over international affairs,” Feng added.

 

Chinese vessels, naval outposts and unmanned monitoring facilities – including a global network of buoys, satellites, sea floor sensors and underwater gliders – are generating countless streams of data every second. According to marine researchers, these data contain a rich variety of information such as sea current readings, trace chemicals, regional weather and anomalies in water density that could be used for anything from helping submarines avoid turbulence to negotiating cuts to greenhouse gas emissions.

 

 

Japan

Japan’s future exascale machine, Fugaku, is being jointly developed by Fujitsu and RIKEN, using the ARM architecture. Fujitsu announced in June 2018 that its AI Bridging Cloud Infrastructure (ABCI) system had placed 5th in the world, and 1st in Japan, in the TOP500 international performance ranking of supercomputers. ABCI also took 8th place in the world in the Green500, which ranks outstanding energy-saving performance. Fujitsu developed ABCI, Japan’s fastest open AI infrastructure featuring a large-scale, power-saving cloud platform geared toward AI processing, based on a tender issued by the National Institute of Advanced Industrial Science and Technology (AIST).

 

According to Satoshi Matsuoka of the Tokyo Institute of Technology, the ABCI system design is not centered around Linpack benchmark performance. Instead, the focus is on low-precision floating point, big-data acceleration, and HPC/big-data/AI software convergence. He says the headline performance rating is really for what he calls 130 “AI-FLOPS”, namely reduced-precision arithmetic for DNN training and similar workloads.

 

Japan is also developing a new supercomputer as part of a national project called Flagship2020, with the aim of delivering “100 times more application performance” than the current K computer, which is installed in Japan and is the world’s fifth-fastest computer according to the latest TOP500 rankings. The current K is based on Fujitsu’s SPARC64 VIIIfx processors and the Tofu interconnect, has 705,204 processing cores and offers 10.5 petaflops of performance.
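
Two small calculations follow from the figures above: the K computer’s per-core performance, and what “100 times more application performance” implies for the successor. This is a rough sketch using only the quoted numbers.

```python
k_petaflops = 10.5
k_cores = 705_204
per_core_gigaflops = k_petaflops * 1e6 / k_cores   # 1 PF = 1e6 GF
print(f"K computer: ~{per_core_gigaflops:.1f} GF per core")

target_petaflops = 100 * k_petaflops
print(f"100x application-performance target: ~{target_petaflops / 1000:.2f} exaflops")
```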

 

The supercomputer will be deployed by 2020. It is being developed by Fujitsu and Japanese research institution RIKEN, which also developed K. The systems will be based on the Linux OS and the use of a “6D mesh” will be considered, according to details shared on the Supercomputing 15 website. That indicates the use of a six-dimensional design, which could facilitate connections for more simultaneous CPUs, memory and storage compared to systems today. The system will also have many storage layers, according to information on the site.

 

The team comprising Tohoku University, NEC and JAMSTEC is investigating the feasibility of a multi-vector-core architecture with high memory bandwidth. The University of Tokyo and Fujitsu team is exploring the feasibility of a K-computer-compatible many-core architecture. The Tsukuba and Hitachi team is studying the feasibility of an accelerator-based architecture.

 

The Riken-TokyoTech Application Team is analyzing the direction of social and scientific demands, and designing the roadmap of R&D on target applications for the 2020 time frame. With power as the primary challenge, the project is focusing not only on peak computational performance but also on sustained performance per watt.

 

Europe

And not to be left out, the EU also has exascale projects in the works, the most interesting of which centers on a European processor initiative, which Dongarra speculates may use the open-source RISC-V architecture.

 

EuroEXA consolidates the research efforts of a number of separate projects initiated under the EU’s Horizon 2020 program, including ExaNeSt (exascale interconnects, storage, and cooling), EcoScale (exascale heterogeneous computing) and ExaNoDe (exascale processor and node design).

 

Exascale research in Europe is one of the grand challenges tackled by the Seventh Framework Programme for Research and Technological Development (FP7). To date, eight projects represent the Exascale research efforts funded by the European Commission (EC) under the FP7 framework with a total budget of over € 50 million: CRESTA, DEEP and DEEP-ER, EPiGRAM, EXA2CT, Mont-Blanc (part I + II) and Numexas. The challenges they address in their research are manifold: innovative approaches to algorithm and application development, system software, tools and hardware design are at the heart of the EC funded initiatives.

 

As reflected in the consolidated Horizon 2020 efforts, EuroEXA will include R&D money for exascale system software, server hardware, networking, storage, cooling and datacenter technologies. The project partners include users who will bring their expertise in HPC application areas such as climate and weather, physics, and life sciences.

 

The initial EuroEXA money will be spread across the 16 participating members, which span HPC centers, vendors, and user organizations in eight countries. Some of the premier government players include the Barcelona Supercomputing Center (BSC) in Spain, Germany’s Fraunhofer-Gesellschaft, the Science and Technology Facilities Council (STFC), the University of Manchester, and the European Centre for Medium-Range Weather Forecasts (ECMWF), the last three of which are located in the UK. Prominent commercial organizations include ARM Limited, Maxeler Technologies, and Iceotope.

 

In Horizon 2020, the Commission will invest €700 million through the Public-Private Partnership on HPC. The newly launched projects and centres of excellence will receive €140 million in Commission funding to address challenges such as increasing the energy efficiency of HPC systems or making it easier to program and run applications on these complex machines.

 

 

 

References and Resources also include

http://www.hpcwire.com/2016/05/02/china-focuses-exascale-goals/

http://www.anl.gov/articles/messina-discusses-rewards-challenges-new-exascale-project

https://www.sciencealert.com/china-says-its-world-first-exascale-supercomputer-is-almost-complete

http://www.straitstimes.com/asia/east-asia/china-builds-tianhe-3-the-worlds-first-exascale-supercomputer-says-scientist

http://english.cas.cn/newsroom/news/201808/t20180807_195742.shtml

https://spectrum.ieee.org/computing/hardware/will-china-attain-exascale-supercomputing-in-2020
