January 15, 1996
Journal Article

Toward High Performance Computational Chemistry: II. A Scalable Self-Consistent Field Program

Abstract

We discuss issues in developing scalable parallel algorithms, focusing on the distribution, as opposed to the replication, of key data structures. Replication of large data structures limits the maximum calculation size by imposing a low ratio of processors to memory. Only applications that distribute both data and computation across processors are truly scalable. Even in a distributed-memory environment, the use of shared data structures that each process can access independently greatly simplifies development and provides a significant performance enhancement. We describe tools we have developed to support this programming paradigm, and we use these tools to develop a highly efficient and scalable algorithm for self-consistent field calculations on molecular systems. A simple, classical strip-mining algorithm suffices to achieve an efficient and scalable Fock matrix construction in which all matrices are fully distributed. By strip mining over atoms, we also exploit all available sparsity and pave the way toward more sophisticated methods for summation of the Coulomb and exchange interactions.
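The strip-mining idea can be illustrated with a toy serial sketch. All names here are illustrative, not the authors' code: the actual program operates on fully distributed matrices via one-sided get/accumulate operations, computes integrals over shell blocks on the fly rather than holding a full integral tensor in memory, and screens on integral bounds. This sketch keeps only the block structure, the density-based screening, and the Coulomb/exchange contractions:

```python
import numpy as np

def fock_build_stripmined(D, eri, atom_blocks, tol=1e-10):
    """Toy serial sketch of a strip-mined closed-shell Fock build.

    D           -- density matrix (n x n)
    eri         -- two-electron integrals (ij|kl), chemists' notation
    atom_blocks -- list of basis-index lists, one strip per atom
    tol         -- screening threshold on density-block magnitude
    """
    n = D.shape[0]
    F = np.zeros((n, n))
    # Strip-mine over atom blocks: each (I, J, K, L) quadruplet is one task.
    # In the distributed program, a task would get() the density patches it
    # needs from a globally shared array and accumulate() Fock patches back.
    for I in atom_blocks:
        for J in atom_blocks:
            for K in atom_blocks:
                for L in atom_blocks:
                    d_kl = D[np.ix_(K, L)]
                    # Sparsity: skip tasks whose density patch is negligible
                    # (a production code also screens on integral bounds).
                    if np.abs(d_kl).max() < tol:
                        continue
                    # Coulomb contribution: J_ij += (ij|kl) D_kl
                    F[np.ix_(I, J)] += np.einsum(
                        'ijkl,kl->ij', eri[np.ix_(I, J, K, L)], d_kl)
                    # Exchange contribution: F_ij -= 1/2 (ik|jl) D_kl
                    F[np.ix_(I, J)] -= 0.5 * np.einsum(
                        'ikjl,kl->ij', eri[np.ix_(I, K, J, L)], d_kl)
    return F
```

Because each quadruplet of atom blocks is an independent task, work can be handed out dynamically to processes, and the screening test discards whole blocks of negligible interactions at once rather than individual integrals.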

Revised: August 20, 2019 | Published: January 15, 1996

Citation

Harrison R.J., M.F. Guest, R.A. Kendall, D.E. Bernholdt, A.T. Wong, M.S. Stave, and J.L. Anchell, et al. 1996. Toward High Performance Computational Chemistry: II. A Scalable Self-Consistent Field Program. Journal of Computational Chemistry 17, no. 1:124-132. PNL-SA-23841. doi:10.1002/(SICI)1096-987X(19960115)17:1&lt;124::AID-JCC10&gt;3.0.CO;2-N