Harnessing Deep Reinforcement Learning to Construct Time-Dependent Optimal Fields for Quantum Control Dynamics

12 September 2022, Version 1
This content is a preprint and has not undergone peer review at the time of posting.

Abstract

We present an efficient deep reinforcement learning (DRL) approach to automatically construct time-dependent optimal control fields that enable desired transitions in reduced-dimensional chemical systems. Our DRL approach autonomously and efficiently constructs optimal control fields, even for cases that are difficult to converge with existing gradient-based approaches. We provide a detailed description of the algorithms and hyperparameters, as well as performance metrics, for our DRL-based approach. Our results demonstrate that DRL can serve as an effective artificial-intelligence approach for efficiently and autonomously designing control fields in continuous quantum dynamical chemical systems.
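
To make the setting concrete, the sketch below shows one plausible way (not the authors' actual implementation) to frame optimal-field construction as a reinforcement learning problem: an agent picks a piecewise-constant field amplitude at each time step, the wavefunction is propagated under the time-dependent Schrödinger equation, and the reward is the final population transferred to a target state. All parameters (level energies, dipole coupling, time step) are illustrative, and the random policy stands in for a trained DRL agent.

```python
# Illustrative sketch only: a toy two-level control environment, not the paper's method.
import numpy as np
from scipy.linalg import expm

class TwoLevelControlEnv:
    """Toy two-level quantum system driven by a piecewise-constant control field."""

    def __init__(self, n_steps=200, dt=0.1, e_gap=1.0, mu=0.2, max_field=1.0):
        self.H0 = np.diag([0.0, e_gap])           # field-free Hamiltonian (illustrative)
        self.mu = mu * np.array([[0.0, 1.0],
                                 [1.0, 0.0]])     # dipole coupling operator
        self.n_steps, self.dt, self.max_field = n_steps, dt, max_field
        self.reset()

    def reset(self):
        self.psi = np.array([1.0 + 0.0j, 0.0 + 0.0j])   # start in the ground state
        self.t = 0
        return self._obs()

    def _obs(self):
        # Observation: real/imaginary parts of the wavefunction plus elapsed-time fraction.
        return np.concatenate([self.psi.real, self.psi.imag, [self.t / self.n_steps]])

    def step(self, action):
        # Action in [-1, 1], scaled to the allowed field amplitude for this time interval.
        eps = float(np.clip(action, -1.0, 1.0)) * self.max_field
        H = self.H0 - eps * self.mu                        # semiclassical dipole interaction
        self.psi = expm(-1j * H * self.dt) @ self.psi      # exact short-time propagator
        self.t += 1
        done = self.t >= self.n_steps
        # Sparse reward at the final step: population in the target (excited) state.
        reward = float(np.abs(self.psi[1]) ** 2) if done else 0.0
        return self._obs(), reward, done

# Episode rollout with a placeholder random policy; a trained DRL agent
# (e.g., an actor-critic network) would replace the action selection below.
env = TwoLevelControlEnv()
obs, done, final_reward = env.reset(), False, 0.0
rng = np.random.default_rng(0)
while not done:
    action = rng.uniform(-1.0, 1.0)
    obs, reward, done = env.step(action)
    final_reward += reward
print(f"final target-state population: {final_reward:.3f}")
```

In this framing, the sequence of chosen amplitudes over an episode is the time-dependent control field itself, so maximizing the expected return directly optimizes the field that drives the desired transition.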

Keywords

quantum control
optimal control
optimal control theory
theoretical chemistry
machine learning
inverse problems
electron dynamics
reinforcement learning
neural network
time dependent Schrödinger equation
electronic excited states

Supplementary materials

Title: Supplementary Information
Description: Additional details on algorithms and parameters for datasets used in the reinforcement learning algorithms
