January 29, 2018
Feature

Training Day

PNNL team explores how training neural networks can improve modeling of physical systems


A visual comparison of standard recurrent neural networks (RNNs) with the PNNL team’s network architecture. An RNN produces time-varying outputs from time-varying inputs, maintaining an internal state that remembers information about past inputs. In the team’s architecture, the recurrent layer is restructured to perform Euler integration of a differential equation of known form: the known aspects of the dynamics are hard-coded into the recurrent integration layer, the RNN hidden state represents the physical system state and is exposed as the time-varying output, and trainable weights define the unknown aspects of the dynamics, learned from empirical data via standard RNN backpropagation.
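In equation form (the notation here is ours, not taken from the paper), the recurrent update in the team’s architecture is one forward-Euler step of the governing equation, with a trainable term standing in for the unknown physics:

x_{k+1} = x_k + \Delta t \,\left[\, f_{\mathrm{known}}(x_k, u_k) + g_{\theta}(x_k, u_k) \,\right]

Here x_k is the physical state, carried as the RNN hidden state; u_k is the time-varying input; f_known encodes the hard-coded dynamics; and g_θ is the trainable approximation of the unknown rate function.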

At the Deep Learning for Physical Sciences Workshop, held as part of the 31st Conference on Neural Information Processing Systems (NIPS) in Long Beach, Calif., Pacific Northwest National Laboratory scientists from the Computational Mathematics and National Security Data Science groups showcased their work on solving ordinary differential equations with unknown rate functions using neural networks. Their model, which combines machine learning with physics-based modeling, exploits neural networks’ ability to approximate, or ‘learn,’ unknown functions through training, yielding a simple, scalable, and effective solver implemented in Google’s open-source TensorFlow library.

The Science

The PNNL team’s approach leverages the fact that differential-equation-based physical models and recurrent neural networks, while conceptually quite different, are formally dynamical systems of the same type. Removing the conceptual barrier between these approaches allows mathematical tools developed in each domain to be applied to the other.
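That equivalence can be made concrete in a few lines of code. The sketch below is illustrative only, written against the current TensorFlow/Keras API rather than the team’s 2017 code; the EulerODECell name, the linear-decay placeholder for the known physics, and the small g_theta network are assumptions for the example, not details from the paper.

import tensorflow as tf

class EulerODECell(tf.keras.layers.Layer):
    """A recurrent cell whose state update is one forward-Euler step of
    dx/dt = f_known(x, u) + g_theta(x, u)."""

    def __init__(self, state_dim, dt, **kwargs):
        super().__init__(**kwargs)
        self.state_size = state_dim   # RNN hidden state = physical state
        self.output_size = state_dim
        self.dt = dt
        # Trainable stand-in for the unknown part of the dynamics.
        self.g_theta = tf.keras.Sequential([
            tf.keras.layers.Dense(32, activation="tanh"),
            tf.keras.layers.Dense(state_dim),
        ])

    def call(self, inputs, states):
        x = states[0]
        f_known = -0.1 * x  # placeholder for the hard-coded known physics
        dxdt = f_known + self.g_theta(tf.concat([x, inputs], axis=-1))
        x_next = x + self.dt * dxdt       # forward-Euler step
        return x_next, [x_next]           # state exposed as the output

# Unrolling the cell over a time series works like any other Keras RNN.
model = tf.keras.Sequential(
    [tf.keras.layers.RNN(EulerODECell(state_dim=2, dt=0.01),
                         return_sequences=True)]
)

Because the cell behaves as an ordinary recurrent layer, backpropagation through time, GPU execution, and the rest of the TensorFlow toolchain apply to it unchanged.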

The method is useful for studying physical systems whose physics is only partially known: for example, the conservation laws are known, but the relationships between environmental conditions and particular physical parameters, such as mixing rates, are not quantitatively understood. The approach also works with missing or incomplete training data, such as when some physical state parameters are impractical or too expensive to measure with sufficient frequency.
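One common way to cope with such gaps (a standard technique, not necessarily the paper’s exact formulation) is to score the model only where measurements exist, so that unmeasured state components simply drop out of the training loss:

import tensorflow as tf

def masked_mse(x_obs, x_pred, mask):
    # Mean-squared error over observed entries only. `mask` is 1.0 where a
    # measurement exists and 0.0 where it is missing, so unmeasured or
    # infrequently sampled state components contribute nothing to the loss.
    se = tf.square(x_obs - x_pred) * mask
    return tf.reduce_sum(se) / tf.maximum(tf.reduce_sum(mask), 1.0)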

“To a problem owner, PNNL might be someone to partner with if you have some modeling problem where measuring frequency is a challenge or a physical parameter is difficult to model,” said Tobias Hagge, a scientist with the Discrete Mathematics team and the paper’s primary author.

The Impact

The approach makes it possible to characterize dynamical systems in the physical sciences where fast, accurate measurements are a challenge. Adapting the recurrent neural network architecture of Google’s TensorFlow package provides scalability and GPU processing for dynamical system modeling with little to no work on the user’s part. The architecture also benefits the neural network community by providing examples of very deep neural networks that can be formally analyzed because of their relatively simple structure. Finally, treating recurrent neural networks and differential equations in the same framework opens opportunities for cross-fertilization between the partial differential equation and neural network communities.

Summary

The team began by seeking scalable solutions to the problem of training neural networks to estimate unknown functions in ordinary differential equations, casting it as a recurrent neural network training problem. They then extended the recurrent neural network architecture of Google’s TensorFlow, an open-source software library designed for scalable neural network processing, to solve problems of this type. Their approach makes it possible to train a neural network representing an unknown function on long time-series training sets, even when some of the data are missing.
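Tying the sketches above together, a hypothetical training step (reusing the illustrative model and masked_mse defined earlier; again an assumption-laden outline rather than the team’s implementation) unrolls the Euler-integration cell over measured trajectories and backpropagates the masked loss through time:

import tensorflow as tf  # `model` and `masked_mse` come from the sketches above

optimizer = tf.keras.optimizers.Adam(1e-3)

@tf.function
def train_step(u_seq, x_obs, mask):
    # Forward pass: integrate the hybrid physics/NN dynamics over time.
    with tf.GradientTape() as tape:
        x_pred = model(u_seq)  # shape (batch, time, state_dim)
        loss = masked_mse(x_obs, x_pred, mask)
    # Backward pass: standard RNN backpropagation through time.
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss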

The Deep Learning for Physical Sciences Workshop, held on December 8, 2017, aimed to unite members of the machine/deep learning and physical sciences communities and to foster new approaches for solving physical science problems. Attendees focused on research questions, practical implementation challenges, performance and scaling, and the unique aspects of processing and analyzing scientific datasets.

Funding 
This research was supported by PNNL’s Laboratory Directed Research and Development (LDRD) program as part of the Deep Learning for Scientific Discovery Agile Initiative.

Publication
Hagge T, P Stinis, E Yeung, and AM Tartakovsky. 2017. “Solving differential equations with unknown constitutive relations as recurrent neural networks.” Presented at the Deep Learning for Physical Sciences Workshop, December 8, 2017, Long Beach, California. Available online at: https://dl4physicalsciences.github.io/files/nips_dlps_2017_6.pdf.


###

About PNNL

Pacific Northwest National Laboratory draws on its distinguishing strengths in chemistry, Earth sciences, biology and data science to advance scientific knowledge and address challenges in sustainable energy and national security. Founded in 1965, PNNL is operated by Battelle for the Department of Energy’s Office of Science, which is the single largest supporter of basic research in the physical sciences in the United States. DOE’s Office of Science is working to address some of the most pressing challenges of our time. For more information, visit https://www.energy.gov/science/. For more information on PNNL, visit PNNL's News Center. Follow us on Twitter, Facebook, LinkedIn and Instagram.