Bigger, Better, Faster: HPC Enables Modern Modeling & Simulation



High Performance Computing (HPC) enables the modeling and simulation (M&S) that scientists apply to the most critical grand challenges of our time. Beyond these grand challenges, HPC-enabled M&S is increasingly replacing real-world experiments. M&S is generally cheaper and safer, and sometimes more ethical, than conducting real-world experiments. Moreover, some real-world experiments are impractical or impossible to conduct, leaving HPC-enabled M&S as the only available option. For example, HPC technology allows us to simulate the detonation of nuclear devices and their effects, hurricanes, and other natural disasters in order to better prepare for potential catastrophes.

HPC-enabled M&S can also be more valuable than traditional experiments, as M&S allows the free configuration of thousands of environment and internal parameters. Imagine the hours and materials that would be needed to run a physical prototype through thousands of potential scenarios!

We're Going to Need a Bigger Computer (and Big Brains to Operate It)

Today, M&S is considered the third pillar of science and has become an important tool in engineering. (The other two pillars are theory and experimentation. You can read more on the pillars and the possibilities of HPC in engineering in Hugh Thornburg’s digital twins blog.) High-fidelity multi-physics and multi-scale M&S requires significantly more performance than a typical desktop computer or workstation can deliver. HPC refers to aggregating computing power in a way that provides the superior performance contemporary M&S demands. That aggregation is not just a matter of building faster hardware and software infrastructure (the supercomputer); large gains also come from the advanced computational science expertise available at HPC organizations.


Faster Than Real Time

HPC-enabled M&S can often be conducted faster than real time by running efficient parallel applications on HPC systems. This speed means we can use M&S for efficient what-if analyses of different alternatives. High-fidelity M&S enabled by HPC technology is increasingly becoming an important tool in tradespace exploration for the design and development of resilient systems. For example, parallel M&S applications allow deeper consideration of system-design alternatives while keeping the tradespace as open as possible to address resiliency and robustness to changing conditions and constraints.
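
To make the idea concrete, here is a minimal, hypothetical sketch of a parallel what-if parameter sweep in Python. The design parameters (thickness, stiffness, damping) and the scoring function are illustrative stand-ins rather than an actual PETTT workflow; a real study would replace the toy model with calls to a high-fidelity solver on an HPC system.

from concurrent.futures import ProcessPoolExecutor
from itertools import product

def simulate(params):
    """Stand-in for a single high-fidelity simulation run (hypothetical model)."""
    thickness, stiffness, damping = params
    # Toy resilience score; a real study would invoke an HPC solver here instead.
    return params, stiffness * damping / (1.0 + thickness)

def explore_tradespace():
    # Build the tradespace: every combination of candidate design values.
    thicknesses = [0.5, 1.0, 1.5]
    stiffnesses = [10.0, 20.0, 40.0]
    dampings = [0.1, 0.2, 0.4]
    candidates = list(product(thicknesses, stiffnesses, dampings))

    # Evaluate alternatives in parallel across local cores; on an HPC system
    # the same pattern scales out with MPI or a batch scheduler.
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(simulate, candidates))

    # Rank the alternatives so the most promising designs surface for closer study.
    results.sort(key=lambda r: r[1], reverse=True)
    return results[:5]

if __name__ == "__main__":
    for params, score in explore_tradespace():
        print(params, round(score, 2))

Because every candidate design is independent, the sweep scales naturally with the number of processors available, which is exactly where HPC systems pay off.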

Better Analyses

Contemporary HPC-enabled M&S often produces large and complex data sets whose analysis (e.g., with machine learning, deep learning, and other AI techniques) is an HPC-class challenge in itself. These analytics workloads must themselves be tackled with the HPC arsenal of solutions in order to deliver results in reasonable timeframes.
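
As a rough illustration of this kind of analytics workload, the sketch below clusters simulation snapshots in chunks so that output larger than memory can still be summarized. The file name, the array layout, and the choice of MiniBatchKMeans are assumptions made for the example, not a prescribed workflow.

import numpy as np
from sklearn.cluster import MiniBatchKMeans

def summarize_snapshots(path="snapshots.npy", n_clusters=4, chunk=10_000):
    """Group simulation state vectors into regimes without loading all data at once."""
    # Memory-map the output file so a data set larger than RAM can still be scanned.
    data = np.load(path, mmap_mode="r")

    model = MiniBatchKMeans(n_clusters=n_clusters, random_state=0)
    for start in range(0, data.shape[0], chunk):
        # Each chunk incrementally updates the cluster centers.
        model.partial_fit(np.asarray(data[start:start + chunk]))

    return model.cluster_centers_

if __name__ == "__main__":
    centers = summarize_snapshots()
    print("Representative regimes found:", len(centers))

On an HPC system, the same incremental pattern can be distributed across nodes so that terabytes of simulation output are reduced to a handful of representative regimes for analysts to study.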

Application

M&S and HPC solutions supported by the DoD High Performance Computing Modernization Program (HPCMP) User Productivity Enhancement, Technology Transfer, and Training (PETTT) program are playing an increasingly important role in many critical DoD programs. There are many brilliant scientists conducting cutting-edge work to benefit our men and women in uniform, and I have the privilege of helping them access and use HPC. Engility’s PETTT team is made up of world-class computational experts with experience spanning many computational areas. We roll up our sleeves and help scientists across the nation tackle really big problems that make life better and safer for our nation and the world.

Learn more by reading how HPC Helps Design Composite Materials for Next Generation Aircraft or how we are Delivering HPC Power to Researchers Enhancing the F-35.


Posted by Juan Carlos Chaves

I coordinate computational scientists for the DoD HPCMP PETTT program in the Signal and Image Processing (SIP) area. My areas of interest include high-productivity languages such as MATLAB and its parallel extensions; Intelligence, Surveillance and Reconnaissance (ISR) applications; and efficient parallel and distributed HPC workflows, including GPU-based computing. I have been involved with the DoD HPCMP in various capacities for over 20 years. I hold a Ph.D. in Computational Science and Informatics (Physics Track) from George Mason University.