April 20, 2024
Report

Quantum Neural Networks: Issues, Training, and Applications

Abstract

Our work in this field aims to explain the limitations and expressive power of quantum machine learning models, and to find feasible training algorithms that can be implemented on near-term quantum computers. The promise of quantum machine learning is that by incorporating quantum effects, such as entanglement, into machine learning models, researchers can improve model performance and understand more complex datasets. This promise is especially pronounced in quantum neural networks (QNNs), a framework for designing quantum algorithms that aims to outperform classical models by combining the speedups of quantum computation with the widespread successes of deep learning. We show that relying on entanglement alone is problematic for quantum deep learning: an excess of entanglement between the hidden and visible layers can destroy the predictive power of QNN models. We address the resulting barren plateau problem by proposing a generative, unbounded, nonlinear loss function with simple gradients. The loss function quantifies how much the quantum states generated by the QNN differ from the data, and the goal during training is to minimize it. Finally, we showcase how to use generative training to construct a "classical-quantum" neural network that accurately interpolates between the ground states of a molecular Hamiltonian, a central problem in quantum chemistry.
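To make the training scheme concrete, the toy sketch below illustrates the general idea of minimizing a loss that measures how far a parameterized quantum state is from the data. It is not the authors' construction: the circuit (a single hypothetical RY rotation on one qubit), the negative log-fidelity loss (chosen here because it is likewise unbounded and nonlinear), and the finite-difference gradient are all illustrative assumptions.

```python
import numpy as np

def ry(theta):
    # Single-qubit rotation about the Y axis
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def generated_state(theta):
    # A one-parameter stand-in for a QNN: apply RY(theta) to |0>
    return ry(theta) @ np.array([1.0, 0.0])

def loss(theta, target):
    # Unbounded, nonlinear loss: negative log-fidelity with the data state
    fidelity = abs(np.vdot(target, generated_state(theta))) ** 2
    return -np.log(fidelity + 1e-12)

# Target "data" state |+> = (|0> + |1>) / sqrt(2)
target = np.array([1.0, 1.0]) / np.sqrt(2)

# Plain gradient descent with a finite-difference gradient estimate
theta, lr, eps = 0.1, 0.5, 1e-6
for _ in range(200):
    g = (loss(theta + eps, target) - loss(theta - eps, target)) / (2 * eps)
    theta -= lr * g

# theta converges toward pi/2, where RY(pi/2)|0> = |+> and the loss vanishes
```

On real hardware the gradient would come from measurement-based schemes such as the parameter-shift rule rather than finite differences, but the optimization loop has the same shape.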


Citation

Ortiz Marrero C.M., N. Wiebe, J.C. Furches, and M.J. Ragone. 2023. Quantum Neural Networks: Issues, Training, and Applications. Richland, WA: Pacific Northwest National Laboratory.