We focus on merging high-performance computing with data-centric analysis capabilities to solve significant problems in energy, the environment, and national security. PNNL has made scientific breakthroughs and advanced frontiers in high-performance computer science, computational biology and bioinformatics, subsurface simulation modeling, and multiscale mathematics.
Hiding the complexities that underpin exascale system operations from application developers is a critical challenge facing teams designing next-generation supercomputers. To tackle the problem, PNNL computer scientists are developing formal design processes based on Concurrent Collections (CnC), a programming model that combines task and data parallelism. Using graph transformations, they converted the LULESH proxy application, which models hydrodynamics, into a complete CnC specification. Such specifications capture data and control dependencies and separate computations from implementation concerns, concealing the complexities of exascale systems, dramatically decreasing development cost, and increasing opportunities for automatic performance optimizations.
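The separation CnC enforces can be sketched in miniature: a specification declares step collections (computations), item collections (data), and tags (control), while a runtime, not the programmer, decides execution order. The class and scheduler below are illustrative inventions, not the actual CnC toolchain API.

```python
# Toy sketch of the CnC idea: steps declare which items they read and
# write; a naive scheduler fires any step whose inputs exist. Execution
# order is derived from the dependency graph, not coded by hand.

class CnCGraph:
    def __init__(self):
        self.items = {}            # item collection: (name, tag) -> value
        self.steps = {}            # step name -> (inputs, outputs, fn)

    def declare_step(self, name, inputs, outputs, fn):
        self.steps[name] = (inputs, outputs, fn)

    def put(self, name, tag, value):
        self.items[(name, tag)] = value

    def run(self, tags):
        # Repeatedly fire any (step, tag) instance whose inputs are ready.
        pending = [(s, t) for s in self.steps for t in tags]
        while pending:
            progressed = False
            for s, t in list(pending):
                ins, outs, fn = self.steps[s]
                if all((i, t) in self.items for i in ins):
                    results = fn(*[self.items[(i, t)] for i in ins])
                    for o, v in zip(outs, results):
                        self.put(o, t, v)
                    pending.remove((s, t))
                    progressed = True
            if not progressed:
                raise RuntimeError("unsatisfied dependencies")

# Two hydrodynamics-flavored steps: compute a force, then advance position.
g = CnCGraph()
g.declare_step("force", ["pos"], ["f"], lambda p: (p * -2.0,))
g.declare_step("advance", ["pos", "f"], ["pos_new"], lambda p, f: (p + 0.1 * f,))
for tag in range(3):
    g.put("pos", tag, float(tag))
g.run(range(3))
```

Because the "force" and "advance" dependencies are declared rather than hard-coded, a real runtime is free to reorder, parallelize, or fuse the step instances, which is the source of the automatic-optimization opportunities mentioned above.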
PNNL Computer Scientists Share Editing Duties for Journal of Parallel and Distributed Computing Special Issue
Dr. John Feo and Dr. Antonino Tumeo, of CSMD's Data Intensive Scientific and High Performance Computing groups, respectively, will serve as guest editors for a special issue of the Journal of Parallel and Distributed Computing devoted to "Architectures and Algorithms for Irregular Applications." The special issue will explore new solutions for efficient design, development, and execution of irregular applications in current and future computing system architectures.
To improve the numerical methods and algorithms used to analyze and model physical phenomena associated with fluid flows and the forces that affect them at various scales and boundary conditions, scientists from PNNL and the University of South Florida demonstrated the viability of a new method: smoothed particle hydrodynamics-continuous boundary force, or SPH-CBF. Their novel method solves the Navier-Stokes equations subject to Robin boundary conditions using SPH, a meshless method for solving partial differential equations, significantly advancing existing SPH theory. The formulation also leverages SPH's strengths in modeling diverse physics problems, such as those involving atmospheric systems, energy materials and processes, subsurface flow and transport, and high-strength materials, which are relevant to important DOE mission objectives.
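For context on the underlying machinery, the core SPH approximation evaluates a field at a point as a kernel-weighted sum over neighboring particles. The sketch below shows only that basic interpolation with a standard cubic spline kernel; it is not the authors' SPH-CBF boundary treatment, and the particle setup is invented for illustration.

```python
import numpy as np

# Basic SPH interpolation: rho(x) = sum_j m_j * W(|x - x_j|, h),
# where W is a compactly supported smoothing kernel of radius 2h.

def cubic_spline_w(r, h):
    """Standard 1-D cubic spline (M4) smoothing kernel."""
    q = np.abs(r) / h
    sigma = 2.0 / (3.0 * h)          # 1-D normalization constant
    w = np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
        np.where(q < 2.0, 0.25 * (2.0 - q)**3, 0.0))
    return sigma * w

def sph_density(x, positions, masses, h):
    """Kernel-weighted density estimate at point x."""
    return np.sum(masses * cubic_spline_w(x - positions, h))

# Uniformly spaced unit-mass particles (spacing 1) should yield density ~ 1.
xs = np.arange(-10.0, 10.0, 1.0)
m = np.ones_like(xs)
rho = sph_density(0.0, xs, m, h=1.2)
```

Boundary-condition methods such as SPH-CBF modify how this sum behaves for particles whose kernel support overlaps a domain boundary, which is where enforcing Robin conditions becomes nontrivial.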
To meet DOE's power consumption goal of 20-25 MW for exascale computing systems and to remain practical tools, next-generation systems must be considerably more power and energy efficient than today's supercomputers. Using methods that span processor architecture and system integration to performance and power modeling, scientists within PNNL's Performance and Architecture Laboratory, or PAL, have developed power-aware algorithms that use an accurate per-core proxy power sensor model to estimate the active power of each core. Their methods also have afforded the first workload-specific quantitative power modeling capability that accurately captures workload phases, their impact on power consumption, and the effects of system architecture and processor clock speeds.
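A common way to build a per-core proxy power sensor of this general kind is to fit core power as a function of per-core activity counters against a measured package-level power meter. The sketch below shows that generic approach with a linear least-squares fit; the counter choices and coefficients are invented and do not represent PAL's actual model.

```python
import numpy as np

# Proxy power sensor sketch: estimate active core power as
#   P_core ~ intercept + weights . counters
# with weights fit offline against measured power.

def fit_proxy_model(counters, measured_power):
    """Least-squares fit of power against activity counters."""
    X = np.column_stack([np.ones(len(counters)), counters])
    coef, *_ = np.linalg.lstsq(X, measured_power, rcond=None)
    return coef

def estimate_core_power(coef, core_counters):
    """Apply the fitted model to one core's counter readings."""
    return coef[0] + core_counters @ coef[1:]

# Synthetic calibration data: columns = (instructions/cycle, cache-miss rate).
rng = np.random.default_rng(0)
counters = rng.uniform(0.0, 2.0, size=(200, 2))
true_power = 5.0 + 3.0 * counters[:, 0] + 8.0 * counters[:, 1]
coef = fit_proxy_model(counters, true_power)
p = estimate_core_power(coef, np.array([1.0, 0.5]))   # ~ 12 W on this synthetic model
```

Once a per-core estimate like this is available, power-aware scheduling or frequency-scaling decisions can be made per core rather than per package.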
To conduct longer, more complex molecular simulations, scientists from EMSL; PNNL; the University of Chicago; and the University of California, San Diego, developed and assessed parallel-in-time algorithms that extend the time scales of calculations previously limited by poor parallel scaling. With these extended time scales, meaningful dynamics simulations of realistic, complex materials phenomena can be performed using intricate computer simulations of the physical motions of atoms and molecules. Moreover, these algorithms enable real-world applicable dynamics simulations in new energy research areas, such as nuclear waste storage, carbon sequestration, energy storage materials, and efficient catalysis.
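Parareal is one well-known member of the parallel-in-time family and illustrates the core trick: a cheap sequential coarse propagator corrects expensive fine propagators that can run concurrently across time slices. The sketch below applies parareal to the toy equation dy/dt = -y; the paper's exact algorithm may differ, and this example exists only to show the iteration structure.

```python
import numpy as np

# Parareal sketch for dy/dt = -y, y(0) = 1, on [0, T].

def coarse(y, dt):                 # one cheap forward-Euler step
    return y * (1.0 - dt)

def fine(y, dt, substeps=100):     # many small steps; one call per time slice,
    h = dt / substeps              # and all slices can run in parallel
    for _ in range(substeps):
        y = y * (1.0 - h)
    return y

def parareal(y0, T, slices, iterations):
    dt = T / slices
    y = [y0] * (slices + 1)
    for n in range(slices):        # initial sequential coarse sweep
        y[n + 1] = coarse(y[n], dt)
    for _ in range(iterations):
        f = [fine(y[n], dt) for n in range(slices)]   # parallelizable part
        y_new = [y0]
        for n in range(slices):    # sequential coarse correction
            y_new.append(coarse(y_new[n], dt) + f[n] - coarse(y[n], dt))
        y = y_new
    return y[-1]

approx = parareal(1.0, 1.0, slices=10, iterations=5)   # close to exp(-1)
```

The sequential coarse sweep is cheap, so the wall-clock cost is dominated by the fine solves, which run concurrently across slices; this is how such methods trade extra processors for longer simulated time scales.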