Neural network backflow for ab initio quantum chemistry

An-Jun Liu and Bryan K. Clark
Phys. Rev. B 110, 115137 – Published 18 September 2024

    Abstract

    The ground state of second-quantized quantum chemistry Hamiltonians provides access to an important set of chemical properties. Wave functions based on machine-learning architectures have shown promise in approximating these ground states in a variety of physical systems. In this paper, we show how to achieve state-of-the-art energies for molecular Hamiltonians using the neural network backflow (NNBF) wave function. To accomplish this, we optimize this ansatz with a variant of the deterministic optimization scheme based on selected configuration interaction introduced by Li et al. [J. Chem. Theory Comput. 19, 8156 (2023)], which we find works better than standard Markov chain Monte Carlo sampling. For the molecules we studied, NNBF gives lower energy states than both Coupled Cluster with Single and Double excitations and other neural network quantum states. We systematically explore the role of network size as well as optimization parameters in improving the energy. We find that, while the number of hidden layers and determinants play a minor role in improving the energy, there are significant improvements in the energy from increasing the number of hidden units as well as the batch size used in optimization, with the batch size playing a more important role.
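
    For orientation, NNBF, like other neural network quantum states, is optimized variationally. A standard way to write the objective in the occupation-number basis (a generic expression, not copied from the paper) is

        E[\psi_\theta]
          = \frac{\langle \psi_\theta | \hat{H} | \psi_\theta \rangle}{\langle \psi_\theta | \psi_\theta \rangle}
          = \frac{\sum_x |\psi_\theta(x)|^2 \, E_{\mathrm{loc}}(x)}{\sum_x |\psi_\theta(x)|^2},
        \qquad
        E_{\mathrm{loc}}(x) = \sum_{x'} H_{x x'} \, \frac{\psi_\theta(x')}{\psi_\theta(x)},

    where x runs over occupation-number configurations and x' over the configurations connected to x by the Hamiltonian. In MCMC optimization, x is sampled from |\psi_\theta(x)|^2; in the selected-configuration scheme used here, the sums are instead restricted to a deterministically chosen subset of configurations (see Fig. 2 below).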


    • Received 1 June 2024
    • Revised 3 September 2024
    • Accepted 6 September 2024

    DOI:https://doi.org/10.1103/PhysRevB.110.115137

    ©2024 American Physical Society

    Physics Subject Headings (PhySH)

    Physical Systems: Molecules

    Techniques: Ab initio calculations, Machine learning, Quantum chemistry methods

    Condensed Matter, Materials & Applied Physics

    Authors & Affiliations

    An-Jun Liu* and Bryan K. Clark

    • *Contact author: anjunjl2@illinois.edu

    Issue

    Vol. 110, Iss. 11 — 15 September 2024

    Images


      Figure 1

      Illustration of the neural network backflow (NNBF) architecture with an example input configuration: two spin-orbitals with one spin-up electron occupying the first spin-orbital and one spin-down electron occupying the second spin-orbital. The neural network takes the configuration string as input and outputs a set of D configuration-dependent single-particle orbitals of shape (N_o, N_e). Rows of these orbitals are then selected based on the electron locations to form square matrices, from which determinants are computed. For this example, the first and last rows (the gray orbitals) are selected to compute the determinant. The sum of these determinants yields the amplitude for the input configuration.

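      To make the data flow in Fig. 1 concrete, here is a minimal NumPy sketch of how an NNBF-style amplitude could be evaluated. The one-hidden-layer network and all names (n_orb, n_elec, n_det) are our own illustrative choices, not the authors' implementation.

        import numpy as np

        def nnbf_amplitude(x, params, n_det):
            """Toy NNBF amplitude for one occupation string x (0/1 per spin-orbital).

            A small MLP maps the configuration to n_det sets of backflow orbitals of
            shape (n_orb, n_elec); the rows of the occupied spin-orbitals are gathered
            into square (n_elec, n_elec) matrices and their determinants are summed.
            """
            W1, b1, W2, b2 = params
            n_orb, n_elec = x.size, int(x.sum())

            h = np.tanh(W1 @ x + b1)                            # hidden features
            phi = (W2 @ h + b2).reshape(n_det, n_orb, n_elec)   # backflow orbitals

            occ = np.flatnonzero(x)                             # occupied spin-orbitals
            mats = phi[:, occ, :]                               # (n_det, n_elec, n_elec)
            return sum(np.linalg.det(m) for m in mats)          # sum of determinants

        # Example with 4 spin-orbitals and 2 electrons (illustrative sizes only).
        rng = np.random.default_rng(0)
        n_orb, n_elec, n_det, n_hidden = 4, 2, 2, 16
        params = (rng.normal(size=(n_hidden, n_orb)), np.zeros(n_hidden),
                  rng.normal(size=(n_det * n_orb * n_elec, n_hidden)),
                  np.zeros(n_det * n_orb * n_elec))
        print(nnbf_amplitude(np.array([1.0, 0.0, 1.0, 0.0]), params, n_det))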


      Figure 2

      A diagrammatic description of the workflow of fixed-size selected configuration (FSSC). Each circle represents one configuration. Initially, the algorithm initializes a parametrized wave function and a core space V_0 of size N_u = 10. After each iteration, the amplitude moduli for all configurations in V_{n-1} ∪ C_{n-1} are computed, and the 10 largest unique ones (denoted by red configurations) are selected to form the new core space V_n. The energy (depicted as the loss function in the orange box) and its gradient are estimated with the relevant sums restricted to V_n, and the gradient is used to update the model parameters.

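      A minimal sketch of one FSSC core-space update as described above; get_connected and amplitude are hypothetical helpers standing in for the Hamiltonian connectivity and the NNBF wave function, not the authors' code.

        import numpy as np

        def fssc_update_core(core, amplitude, get_connected, n_u):
            """One fixed-size selected-configuration (FSSC) core-space update.

            core:          configurations (hashable tuples) forming V_{n-1}
            amplitude:     callable returning the NNBF amplitude of a configuration
            get_connected: hypothetical helper returning the configurations coupled
                           to a given one by the Hamiltonian (the space C_{n-1})
            n_u:           fixed core-space size N_u
            """
            # Candidate pool: previous core space plus its connected space.
            pool = set(core)
            for cfg in core:
                pool.update(get_connected(cfg))
            pool = list(pool)

            # Keep the N_u unique configurations with the largest |psi|.
            amps = np.array([abs(amplitude(cfg)) for cfg in pool])
            keep = np.argsort(amps)[::-1][:n_u]
            return [pool[i] for i in keep]

      The energy and its gradient would then be evaluated with all sums restricted to the new core space, followed by a parameter update (omitted here).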


      Figure 3

      Comparison between fixed-size selected configuration (FSSC) and Markov chain Monte Carlo (MCMC) schemes on lithium oxide (N_o = 30, N_e = 14, and N_u = N_w = 8192). The red band represents the post-training MCMC inference energy for the FSSC scheme, with a width of σ in each direction around the mean. A moving-average window of 100 is applied to improve readability.

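      For reference, the "standard MCMC sampling" being compared against is typically a Metropolis walk over occupation configurations, drawing samples from |psi(x)|^2. The sketch below is generic (single-electron moves, no spin-sector bookkeeping, hypothetical amplitude callable) and is not the paper's exact sampler.

        import numpy as np

        def metropolis_sweep(x, amplitude, rng, n_steps=100):
            """Generic Metropolis sampling of occupation strings from |psi(x)|^2.

            Proposes moving one electron to an empty spin-orbital and accepts the
            move with probability min(1, |psi(x')|^2 / |psi(x)|^2).
            """
            x = x.copy()
            p = abs(amplitude(x)) ** 2
            for _ in range(n_steps):
                occ = np.flatnonzero(x == 1)
                emp = np.flatnonzero(x == 0)
                x_new = x.copy()
                x_new[rng.choice(occ)] = 0
                x_new[rng.choice(emp)] = 1
                p_new = abs(amplitude(x_new)) ** 2
                if rng.random() * p < p_new:   # accept/reject
                    x, p = x_new, p_new
            return x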


      Figure 4

      Dissociation curve for N2 obtained with neural network backflow (NNBF), Hartree-Fock (HF), Coupled Cluster with Single and Double excitations (CCSD), and Coupled Cluster with Single, Double and perturbative Triple excitations (CCSD(T)) methods. The full configuration interaction (FCI) energy is used as the ground-truth energy. The NNBF state is trained using the fixed-size selected configuration (FSSC) scheme with N_u = 4096 < N_t, and the reported energy is computed exactly, as this remains feasible.

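      The HF, CCSD, CCSD(T), and FCI reference curves in such plots can be produced with a standard quantum-chemistry package. Below is a PySCF sketch; the basis set and bond lengths are illustrative choices, not necessarily those used in the paper.

        from pyscf import gto, scf, cc, fci

        def n2_references(bond_length, basis="sto-3g"):
            """HF, CCSD, CCSD(T), and FCI energies of N2 at a bond length in Angstrom."""
            mol = gto.M(atom=f"N 0 0 0; N 0 0 {bond_length}", basis=basis, spin=0)
            mf = scf.RHF(mol).run()                  # Hartree-Fock
            mycc = cc.CCSD(mf).run()                 # CCSD
            e_ccsd_t = mycc.e_tot + mycc.ccsd_t()    # add perturbative triples
            e_fci = fci.FCI(mf).kernel()[0]          # exact within the basis (small systems)
            return mf.e_tot, mycc.e_tot, e_ccsd_t, e_fci

        for r in (0.9, 1.1, 1.4, 1.8, 2.2):
            print(r, n2_references(r))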


      Figure 5

      Effects of network architecture on the neural network backflow (NNBF) performance on CH4 (N_o = 18, N_e = 10, N_u = N_t, and L = 1). Each point is one run of the same model. Double precision is employed for calculating the exact inference energy, as higher precision is necessary when the NNBF state closely approaches the true ground state. (a) Effect of network depth. The increase in performance levels off after the addition of two hidden layers to NNBF states. (b) Effect of number of hidden units. A wider hidden layer consistently improves accuracy, with the energy error decreasing at a rate of O(h^{-2.256228}) with R^2 = 0.968270 until h = 64. (c) Effect of number of determinants (h = 30). Expanding the number of determinants reduces the energy error, with the improvement beginning to plateau after four determinants.

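      The quoted decay rates and R^2 values are what one obtains from a linear fit of log(error) against log(h) (or log(N_u)). A small sketch of that procedure, with synthetic data purely for illustration:

        import numpy as np

        def power_law_fit(x, err):
            """Fit err ~ C * x**(-alpha) by linear regression in log-log space.

            Returns (alpha, R^2); alpha > 0 means the error decays as x grows.
            """
            lx, ly = np.log(x), np.log(err)
            slope, intercept = np.polyfit(lx, ly, 1)
            pred = slope * lx + intercept
            ss_res = np.sum((ly - pred) ** 2)
            ss_tot = np.sum((ly - ly.mean()) ** 2)
            return -slope, 1.0 - ss_res / ss_tot

        # Synthetic example: error shrinking with hidden-layer width h.
        h = np.array([8, 16, 32, 64])
        err = np.array([3e-3, 8e-4, 1.5e-4, 4e-5])
        print(power_law_fit(h, err))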


      Figure 6

      Effects of network architecture on neural network backflow (NNBF) performance on Li2O with N_u = 8192. (a) Effect of network depth. The improvement with more layers quickly saturates after two layers are added to NNBF states. Baselines from other neural network quantum state (NNQS) works are provided for comparison: QiankunNet [31], NAQS [28], MADE [29], QiankunNet* [32], and RBM-SC [30]. The dark orange star denotes the best NNBF energy we have obtained with a larger N_u. (b) Effect of number of hidden units. A wider hidden layer continuously improves the accuracy, with the absolute error dropping at a rate of O(h^{-0.671855}) with R^2 = 0.952529 until h = 256. (c) Effect of number of determinants (h = 64). Increasing the number of determinants reduces the energy error, but the improvement begins to plateau after reaching four determinants.



      Figure 7

      (a) Experiments exploring the impact of batch size N_u on energy improvements with (L, h, D) = (2, 64, 1) and (2, 256, 1) on lithium oxide. The energy error decreases approximately following O(N_u^{-1.137885}) with an R^2 value of 0.900605 for h = 64 and O(N_u^{-1.288627}) with R^2 = 0.964273 for h = 256. Some data points from Fig. 6 are included for comparison in the inset. The black star represents a trial trained with (L, h, D) = (2, 64, 1) using the sampling scheme from Ref. [30]. Its x position is determined by the average batch size at convergence. The energy closely aligns with the fitted line for the fixed-size selected configuration (FSSC) scheme, indicating that using a dynamic batch size does not offer a noticeable improvement in training. (b) Demonstration of the effectiveness of the batch size scheduling (BSS) strategy, with a moving-average window of 100 applied for improved readability. (c) Experiments examining the dependence of energy improvements on batch size N_u with various h values on methane. Each data point represents the average and the standard deviation of the exact neural network backflow (NNBF) energy across three seeds.

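      Panel (b) refers to a batch size scheduling (BSS) strategy. The caption does not specify the schedule, so the snippet below is only a guessed illustration of the general idea (grow N_u in stages during training); every number in it is a placeholder.

        def scheduled_batch_size(step, n_u_init=1024, n_u_max=16384, grow_every=2000, factor=2):
            """Illustrative batch-size schedule: start small and enlarge N_u in stages."""
            return min(n_u_init * factor ** (step // grow_every), n_u_max)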


      Figure 8

      Optimization progress for CH4, with and without configuration interaction with single and double excitations (CISD) pretraining (L = 1, h = 512, and N_u = N_t). Training comprises 20000 steps, and a moving-average window of 1000 is applied for better readability.

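      Figure 8 compares training with and without CISD pretraining. One common way to pretrain a neural wave function, and only an assumption about the details here, is to fit its amplitudes to the CISD coefficients on the CISD determinant set before switching to variational optimization:

        import numpy as np

        def cisd_pretrain_loss(amplitude, cisd_configs, cisd_coeffs):
            """Supervised pretraining loss matching NNBF amplitudes to CISD coefficients.

            amplitude:    callable returning the (unnormalized) NNBF amplitude psi(x)
            cisd_configs: configurations appearing in the CISD expansion
            cisd_coeffs:  the corresponding CISD coefficients
            The mean-squared error on normalized amplitudes is an illustrative choice,
            not necessarily the loss used by the authors.
            """
            psi = np.array([amplitude(cfg) for cfg in cisd_configs], dtype=float)
            psi = psi / np.linalg.norm(psi)
            c = np.asarray(cisd_coeffs, dtype=float)
            c = c / np.linalg.norm(c)
            return np.mean((psi - c) ** 2)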
