DoD’s High Performance Computing Modernization Program to accelerate the development and acquisition of advanced military capabilities

Earlier in 2018, Hewlett Packard Enterprise (HPE) announced that it had been awarded a $57m contract from the US Department of Defense (DoD) to provide supercomputers. As supercomputing has become an ever bigger part of the toolset of the department’s scientists and engineers working on its most complex technological challenges, the appetite for ever more high performance computing (HPC) has only increased. Under the contract, HPE is to provide the DoD High Performance Computing Modernization Program (HPCMP) with supercomputing capability and support services to “accelerate the development and acquisition of advanced national security capabilities,” according to a statement.

The military uses HPC tools to solve some of the most complicated and time-consuming problems thrown up by technological development, and HPC hardware and software expand the toolkit researchers can bring to modern military and security problems. So far HPC has been applied to a wide variety of problems, including assessing technical and management risks such as performance, time, available resources, cost, and schedule.
According to the DoD, HPCMP supports its objectives through research, development, test, and evaluation. It allows scientists and engineers to focus on science and technology to solve complex defence challenges that could benefit from HPC innovation. To date, under HPCMP, the department has awarded a large variety of contracts both big and small that total an investment in HPC in the hundreds of millions of dollars.

In a statement at the time of the award in February, Bill Mannel, vice president and general manager, HPC and AI, HPE, said: “In our data-driven world, supercomputing is increasingly becoming a key to stay ahead of competition – this applies to national defence just as to commercial enterprises.”

The latest HPCMP contract includes the build and delivery of a total of seven HPE SGI 8600 systems: four for the US Air Force Research Laboratory (AFRL) DoD Supercomputing Resource Center (DSRC) near Dayton, Ohio, and three for the US Navy DSRC at Stennis Space Center, Mississippi.

The HPC machines are part of HPE’s SGI range. HPE bought SGI in 2016 for $275m, with a specific eye towards capturing more government business. The HPE SGI 8600, introduced the previous year, builds on the proven SGI ICE XA architecture. According to HPE, it leverages a series of engineering innovations made by SGI to deliver the best possible performance for an industry-standard clustered HPC system, including powerful CPUs, fast memory, the latest GPU technologies, high-speed interconnect topologies and tuning of the Message Passing Interface (MPI). The HPE SGI 8600 is designed to scale to thousands of nodes and is optimised for deep learning through NVIDIA SXM2 compute trays.
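
Systems in this class are programmed against exactly that message-passing model. As a concrete illustration (a generic mpi4py sketch, not code from the HPCMP programme or the HPE contract), the example below splits a computation across MPI ranks and combines the partial results over the interconnect:

```python
# Minimal, generic MPI example (mpi4py), shown only to illustrate the
# message-passing programming model that clustered systems such as the
# SGI 8600 are tuned for. It is not HPCMP or HPE code.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()   # this process's ID within the parallel job
size = comm.Get_size()   # total number of processes across the cluster

# Each rank computes a partial result on its own slice of the problem...
chunk = 1_000_000
local_sum = sum(range(rank * chunk, (rank + 1) * chunk))

# ...and the partial results are combined over the high-speed interconnect.
total = comm.reduce(local_sum, op=MPI.SUM, root=0)

if rank == 0:
    print(f"Sum over {size} ranks: {total}")
```

Launched with something like mpirun -n 1024 python example.py, the same script scales from a workstation to thousands of nodes; the interconnect topology and MPI tuning mentioned above determine how efficiently the combining step runs at that scale.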

HPCMP, the High Performance Computing Modernization Program

The HPCMP was initiated by the DoD in 1992 in response to a congressional directive to modernise its laboratories’ HPC capabilities. The resulting mandate allowed the DoD to consolidate a number of smaller high performance computing departments, each with its own history of supercomputing experience, that had independently evolved within the three services’ laboratories and test centres.

The HPCMP provides the DoD with massive supercomputing capabilities, all classes of networks to transport data, and software to help the department execute trillions of computations per second in support of its development programs.

At the recent 20th Annual Systems Engineering Conference, the keynote speaker, Vice Adm. Paul Grosklags, commander of Naval Air Systems Command, had a simple message: “We need to increase the speed of capability development.” He said the way to do that was to design, develop and sustain fully integrated capabilities in a model-based digital environment. Models can compress the traditional design-build-test development cycle, which takes anywhere from 10 to 30 years. They are physics-based, high-fidelity tools that can be used to rapidly test new designs and performance attributes, and to further develop a system before any metal is cut.

Some of the Defense Department’s plans for the new supercomputers involve developing new helicopters, Newmeyer said. Although contractors like Boeing and Lockheed Martin’s Sikorsky typically develop and build aircraft for the military, the department also contributes to the aircraft design plans to save time, he explained.

The supercomputers will help the Defense Department run simulated wind-tunnel tests on software models of helicopters before the aircraft are built, Newmeyer said. This reduces the chance of errors once the physical helicopters are made and eventually tested in actual wind tunnels.

Done right, a computational approach to systems development has the potential to cut years from development timelines and billions off acquisition costs — it is far less costly to fix system performance problems at the digital twin stage than it is when the system is in low-rate initial production.

The DoD has only begun to experiment with fully integrated, model-based design, but the benefits are already clear. The Army’s Joint Multi-Role Technology Demonstrator program used HPCMP capabilities to do an independent analysis of contractor proposals to winnow four concepts down to two prior to additional development.

In another example, the Army rotorcraft program, together with Boeing, used HPCMP models to generate early design-stage predictions of helicopter performance for a proposed rotor blade upgrade of the CH-47F Chinook. The computational model accurately predicted up to a 10 percent hover thrust performance improvement without material degradation of the forward-flight performance.
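
The physics behind such a prediction can be sketched, very roughly, with textbook rotor momentum theory; the HPCMP models are of course far higher-fidelity CFD tools, and the numbers below are purely illustrative assumptions, not CH-47F data. The point is simply that hover power grows faster than thrust, which is why an accurate early prediction of a 10 percent thrust gain matters so much.

```python
# Illustrative momentum-theory estimate of ideal hover power.
# This is a textbook approximation with made-up numbers, not the
# high-fidelity HPCMP/Boeing model referred to in the text.
import math

RHO = 1.225  # sea-level air density, kg/m^3

def ideal_hover_power(thrust_n: float, rotor_radius_m: float) -> float:
    """Ideal induced hover power from momentum theory: P = T^1.5 / sqrt(2*rho*A)."""
    disk_area = math.pi * rotor_radius_m ** 2
    return thrust_n ** 1.5 / math.sqrt(2.0 * RHO * disk_area)

baseline_thrust = 220_000.0  # N, hypothetical heavy-lift rotor loading
radius = 9.0                 # m, hypothetical rotor radius

p_base = ideal_hover_power(baseline_thrust, radius)
p_plus10 = ideal_hover_power(1.10 * baseline_thrust, radius)

print(f"Ideal hover power, baseline thrust: {p_base / 1e3:.0f} kW")
print(f"Ideal hover power, +10% thrust:     {p_plus10 / 1e3:.0f} kW "
      f"(+{(p_plus10 / p_base - 1) * 100:.1f}%)")
```

In this crude model a 10 percent thrust increase costs roughly 15 percent more induced power, which is why the real design question is whether an upgraded blade delivers the extra hover thrust without degrading forward-flight performance.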

In Navy applications, computational modeling has been used to generate tens of thousands of ship designs with varying hull forms and configurations to allow acquisition authorities to down-select a design much earlier in the acquisition process. The list of DoD modeling applications is endless, covering domains as varied as radar cross-section analysis, propulsion technologies, aerodynamics and ground mobility studies, to name just a few.
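
Conceptually, those ship-design studies are a parametric sweep over hull variables followed by constraint screening. The sketch below is a deliberately crude stand-in for that workflow: the parameters, the block-coefficient displacement estimate and the screening limits are illustrative assumptions, not the Navy’s actual design tools or requirements.

```python
# Toy parametric design-space sweep: generate hull variants, screen them
# against simple constraints, and rank the survivors. Real studies use
# CFD-based resistance and seakeeping analyses over far larger spaces.
import itertools
from dataclasses import dataclass

RHO_SEAWATER = 1025.0  # kg/m^3

@dataclass
class HullCandidate:
    length_m: float
    beam_m: float
    draft_m: float
    block_coeff: float  # hull fullness, 0..1

    @property
    def displacement_t(self) -> float:
        # Displacement (tonnes) = rho * L * B * T * Cb / 1000
        return (RHO_SEAWATER * self.length_m * self.beam_m
                * self.draft_m * self.block_coeff) / 1000.0

# Coarse grid of hull parameters (a real sweep would cover tens of thousands).
lengths = [120, 140, 160]   # m
beams = [14, 16, 18]        # m
drafts = [4.5, 5.0, 5.5]    # m
cbs = [0.45, 0.55, 0.65]    # block coefficient

candidates = [HullCandidate(*c) for c in itertools.product(lengths, beams, drafts, cbs)]

# Arbitrary screening band on displacement and length-to-beam ratio.
feasible = [c for c in candidates
            if 5_000 <= c.displacement_t <= 9_000
            and 8.0 <= c.length_m / c.beam_m <= 11.0]
feasible.sort(key=lambda c: c.displacement_t)

print(f"{len(candidates)} variants generated, {len(feasible)} pass screening")
for c in feasible[:3]:
    print(f"L={c.length_m} m  B={c.beam_m} m  T={c.draft_m} m  "
          f"Cb={c.block_coeff}  displacement={c.displacement_t:,.0f} t")
```

At HPCMP scale the same pattern is run with physics-based solvers in the inner loop, which is what lets acquisition authorities down-select a hull form far earlier than a traditional design process would allow.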

References and Resources also include:

https://defence.nridigital.com/global_defence_technology_may18/how_supercomputing_could_change_warfare_forever_with_hpe

https://www.defensenews.com/opinion/commentary/2018/01/11/the-dod-office-youve-never-heard-of-and-why-thats-about-to-change/
