With multidisciplinary expertise spanning the technical pillars of high-performance computing, data science, and computational mathematics, we work toward building computational capabilities that position PNNL as a computing powerhouse. We also focus on enhancing the Science of Computing to achieve high-performance, power-efficient, and reliable computing at extreme scales for a spectrum of scientific endeavors that address significant problems of national interest, especially among PNNL’s core pursuits: energy, the environment, national security, and fundamental science.
Detecting cyber security breaches and identifying their attack patterns in complex computing networks as they emerge in real time remains a paramount concern and growing challenge. In their work involving streaming graphs, scientists at PNNL and Washington State University devised a novel framework, StreamWorks, that represents cyber attacks as graph patterns, which can then be matched by a continuous search over a single, large streaming dynamic graph. Identifying events and patterns as they emerge will go a long way toward evading and mitigating computer network intrusions that have potentially criminal, even dangerous, consequences and have made cyber security a multi-billion-dollar industry.
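The core idea of a continuous graph query, matching a pattern incrementally as each edge streams in rather than re-searching the whole graph, can be illustrated with a toy example. The sketch below detects one very simple pattern (a triangle) the moment its closing edge arrives; the class and method names are hypothetical stand-ins, not StreamWorks APIs, and real cyber patterns are far richer than triangles.

```python
from collections import defaultdict

class StreamingPatternMatcher:
    """Toy continuous query over a streaming graph: report every
    triangle at the instant its closing edge arrives. Illustrative
    stand-in for continuous subgraph search; names are hypothetical."""

    def __init__(self):
        self.adj = defaultdict(set)  # dynamic adjacency structure, grows with the stream

    def add_edge(self, u, v):
        """Ingest one streamed edge and emit any triangles it completes."""
        # The new edge (u, v) closes a triangle with each common neighbor w,
        # so only the neighborhoods of u and v need to be inspected.
        triangles = [tuple(sorted((u, v, w))) for w in self.adj[u] & self.adj[v]]
        self.adj[u].add(v)
        self.adj[v].add(u)
        return triangles

m = StreamingPatternMatcher()
m.add_edge("a", "b")
m.add_edge("b", "c")
print(m.add_edge("c", "a"))  # [('a', 'b', 'c')]
```

The incremental check touches only the two endpoints' neighborhoods, which is what makes continuous queries feasible on a graph that never stops growing.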
George Em Karniadakis, a joint appointee with PNNL and Brown University, was awarded the Ralph E. Kleinman Prize, sponsored by the Society for Industrial and Applied Mathematics, which recognizes individual achievement for outstanding research or contributions that bridge the gap between mathematics and applications. Karniadakis will receive the Kleinman Prize during an award ceremony at the International Congress on Industrial and Applied Mathematics, held in Beijing, August 10-14, 2015.
As data sets grow increasingly large and heterogeneous, or “too big,” their value diminishes if they cannot be mined with precision and purpose. In progressive work involving the Graph Engine for Multithreaded Systems (GEMS), a multilayer software framework for querying graph databases developed at PNNL, a team of scientists used GEMS to run graph algorithms against large-scale data sets on commodity, distributed-memory high-performance computing clusters. With GEMS, graph queries can exploit HPC cluster hardware, and their performance becomes more predictable. In comparisons with other approaches, GEMS provided noticeable speedups, particularly on larger data sets. This work is featured as part of the March 2015 special issue of Computer devoted to Big Data management.
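The kind of declarative graph query a system like GEMS translates into traversals can be sketched in miniature. The triple data, query shape, and function name below are invented for illustration; GEMS itself targets distributed-memory clusters and a far richer query language, whereas this sketch runs a single pattern over an in-memory triple list.

```python
# Toy pattern query over an in-memory triple store, loosely in the
# spirit of translating a declarative graph query into traversals.
triples = [
    ("alice", "knows", "bob"),
    ("bob", "knows", "carol"),
    ("alice", "worksAt", "pnnl"),
    ("carol", "worksAt", "pnnl"),
]

def query_friends_of_friends_at(org):
    """Find every ?x such that: ?x knows ?y, ?y knows ?z, ?z worksAt org."""
    knows = [(s, o) for s, p, o in triples if p == "knows"]
    works = {s for s, p, o in triples if p == "worksAt" and o == org}
    # Join the two 'knows' hops, then filter the endpoint by workplace.
    return sorted({x for x, y in knows
                     for y2, z in knows if y == y2 and z in works})

print(query_friends_of_friends_at("pnnl"))  # ['alice']
```

The join-then-filter structure is the part that a graph engine must map efficiently onto cluster memory; at scale, the intermediate join results are what dominate cost.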
As part of the technical program during the 21st annual IEEE International Conference on High Performance Computing, or HiPC 2014, held last December in Goa, India, Abhinav Vishnu, a research scientist with PNNL’s High Performance Computing group, presented research on designing a scalable communication runtime for extreme scale, highlighted in the paper “On the Use of MPI as a PGAS Runtime.” In their work, Jeff Daily (the paper’s lead author), Vishnu, Bruce Palmer, and Darren Kerbyson, all from ACMD Division’s HPC group, along with Hubertus van Dam of EMSL’s Molecular Science Computing capability, proposed several solutions to scaling problems that affect many partitioned global address space (PGAS) models, especially as new supercomputers come online. They also demonstrated that their approach is performance portable across high-end DOE computing systems in the field.
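The PGAS idea at the heart of this work, a logically global array whose pieces live on different ranks, with one-sided put/get by global index, can be mimicked in a few lines. The sketch below simulates the partitioning in plain Python purely to show the addressing semantics; the class is hypothetical, and the paper’s actual contribution is mapping semantics like these onto MPI’s one-sided operations rather than onto a shared data structure.

```python
class GlobalArray:
    """Conceptual partitioned global address space: a 1-D array split
    evenly across simulated ranks, with one-sided put/get by global
    index. Illustrative only; not an MPI or Global Arrays API."""

    def __init__(self, size, nranks):
        self.chunk = size // nranks
        self.parts = [[0] * self.chunk for _ in range(nranks)]  # one slab per rank

    def owner(self, i):
        # Locate the rank and local offset that hold global index i.
        return divmod(i, self.chunk)

    def put(self, i, value):
        r, off = self.owner(i)
        self.parts[r][off] = value  # one-sided: the owning rank takes no action

    def get(self, i):
        r, off = self.owner(i)
        return self.parts[r][off]

ga = GlobalArray(size=8, nranks=4)
ga.put(5, 42)                  # lands in rank 2's slab without rank 2 participating
print(ga.get(5), ga.owner(5))  # 42 (2, 1)
```

The one-sided character, where the initiating rank computes the owner and accesses remote data directly, is exactly what makes such models hard to scale on runtimes built for two-sided message passing.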
Scientists can observe and quantify fundamental processes only at scales far smaller than those needed to predict system behavior for environmental management. This disparity makes multiscale models increasingly important for pressing environmental science problems, such as accurately modeling how groundwater and its constituents move and react in the subsurface. In this effort, scientists from PNNL and four universities provide a general framework focused on hybrid methods, which combine multiple environmental system models defined at different scales within a single simulation, offering a pathway toward improved predictive capability.
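The coupling pattern behind such hybrid methods, a cheap coarse-scale model covering the whole domain while a fine-scale model resolves one subdomain and takes its boundary values from the coarse solution, can be sketched on a 1-D diffusion problem. The grids, sizes, and coupling rule below are invented for illustration and are not taken from the framework itself.

```python
# Minimal hybrid-coupling sketch on 1-D diffusion: a coarse grid spans
# the domain; a fine patch refines one region and is pinned to the
# coarse solution at its boundaries each step. All parameters invented.

def diffuse(u, d):
    """One explicit diffusion step with coefficient d (endpoints held fixed)."""
    return [u[0]] + [u[i] + d * (u[i-1] - 2*u[i] + u[i+1])
                     for i in range(1, len(u) - 1)] + [u[-1]]

coarse = [0.0] * 5 + [1.0] * 5   # coarse field on 10 cells, sharp front
fine = [coarse[4]] * 8           # fine patch refining the front region (hypothetical)

for _ in range(20):
    coarse = diffuse(coarse, 0.4)
    # Coupling step: hand the coarse solution to the fine patch as
    # boundary conditions, then advance the fine-scale model.
    fine[0], fine[-1] = coarse[4], coarse[5]
    fine = diffuse(fine, 0.4)
```

Real hybrid frameworks also feed fine-scale information back up to the coarse model and must reconcile different physics, not just different grid spacings, but the one-way handoff above shows where the two scales meet.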