Machine learning applied to the simulation of high harmonic generation driven by structured laser beams

High harmonic generation (HHG) is one of the richest processes in strong-field physics. It allows laser light to be up-converted from the infrared domain into the extreme ultraviolet or even soft x-rays, which can be synthesized into laser pulses as short as tens of attoseconds. The exact simulation of such a highly nonlinear and non-perturbative process requires coupling the laser-driven wavepacket dynamics given by the three-dimensional time-dependent Schrödinger equation (3D-TDSE) with the Maxwell equations to account for macroscopic propagation. Such calculations are extremely demanding, well beyond state-of-the-art computational capabilities, and approximations, such as the strong-field approximation, need to be used. In this work we show that the use of machine learning, in particular deep neural networks, makes it possible to simulate macroscopic HHG within the 3D-TDSE, revealing hidden signatures in the attosecond pulse emission that are neglected in the standard approximations. Our AI-assisted HHG method is particularly suited to simulating the generation of soft x-ray structured attosecond pulses.


Introduction
High harmonic generation (HHG) (Fig. 1) is a laser-matter process that generates extreme-ultraviolet or even soft x-ray radiation from coherent infrared light upon highly nonlinear interaction with a gas or solid target [1]. The resulting radiation can be synthesized into laser pulses with a duration of a few tens of attoseconds, the shortest light pulses ever created, enabling unprecedented studies of ultrafast physics at the nanoscale.
Realistic simulations of HHG that can be compared to experiments require calculations from both the quantum microscopic and the macroscopic points of view. At the microscopic level, the exact calculation of HHG is given by the solution of the three-dimensional time-dependent Schrödinger equation (3D-TDSE), which describes the quantum laser-driven wavepacket dynamics in the vicinity of each atom. From the macroscopic point of view, this process has to be considered in all of the atoms involved in the experiment (trillions), and the propagation is simulated by coupling all those 3D-TDSE results with the Maxwell equations. Such a calculation is extremely expensive and exceeds current computational capabilities, so approximations are required. For example, at the microscopic level the strong-field approximation (SFA) has been demonstrated to properly reproduce the main properties of the generated harmonics [2], whereas at the macroscopic level the discrete dipole approximation has been used to reproduce experimental HHG in low-density gas jets [3].
* e-mail: jmpablosm@usal.es
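The discrete-dipole-style macroscopic superposition of [3] can be sketched as a coherent sum of per-atom dipole-acceleration spectra, each carrying a propagation phase to the detector. In this hedged NumPy sketch all quantities (number of atoms, distances, sample spectra, units) are illustrative stand-ins, not the actual simulation parameters:

```python
import numpy as np

c = 137.036                        # speed of light in atomic units
omega = np.linspace(0.5, 2.0, 8)   # a few sample harmonic frequencies

rng = np.random.default_rng(0)
distances = rng.uniform(0.0, 10.0, size=50)        # atom-to-detector distances
dipoles = rng.normal(size=(50, omega.size)) \
        + 1j * rng.normal(size=(50, omega.size))   # per-atom dipole spectra

# each atom contributes its spectrum with a phase exp(-i*omega*r/c);
# the far field is the coherent sum over all atoms
phase = np.exp(-1j * np.outer(distances, omega) / c)
far_field = (dipoles * phase).sum(axis=0)
```

In the paper's scheme, each row of `dipoles` would come from a 3D-TDSE (or NN-predicted) single-atom calculation at that atom's local field.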
During the last decade, the emergence of artificial intelligence (AI) has provided a new paradigm for advanced simulations; in particular, its application in ultrafast science has opened new routes to predict the properties of x-ray pulses [4] or to speed up the retrieval algorithms used to characterize attosecond pulses [5], among others. In this work, we use neural networks (NNs) to obtain complete 3D-TDSE-based macroscopic HHG calculations driven by structured laser beams in low-density gas jets, and demonstrate that HHG simulation methods can benefit from AI not only to speed up the calculations, but also to reveal hidden signatures that are neglected in standard approximations [8].

Single-atom 3D-TDSE calculation from neural networks
We have created and trained two NNs using Keras and TensorFlow (Fig. 2a) to accurately predict the single-atom HHG response. The input of our NNs is the amplitude and spatial phase of the driving pulse, which differ at each point in space for structured laser driving beams. The output is the filtered spectrum of the emitted dipole acceleration. We focus on the higher-order harmonics, filtering out the HHG spectrum below the 12th harmonic order. That way, we obtain much higher precision in the most interesting frequency range than if we tried to reproduce the complete spectrum. We use two NNs in parallel to predict the real and the imaginary parts of the spectrum, respectively. Our training dataset contains 4 × 10^4 exact 3D-TDSE calculations, generated with a GPU-accelerated implementation of the 3D-TDSE. The NN that predicts the real part of the spectrum is trained in sets of 500 epochs for an increasing batch size of 2^m with m = 3, 4, 5, ..., 9. For the NN predicting the imaginary part, we used transfer learning from the real-part NN: all weights are copied from the other NN and frozen (making their respective layers non-trainable) except the last two dense layers, which are trained in sets of 50 epochs with the same batch-size schedule. In Fig. 2b,c we show the results of the training and validation of the NNs.
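As a hedged illustration, the transfer-learning step could look like the following Keras sketch. The layer sizes and the purely dense architecture are placeholders (the actual networks also contain transpose-convolutional layers, per Fig. 2a), not the paper's exact model:

```python
from tensorflow import keras

def build_net():
    # illustrative architecture: input is (amplitude, phase) features,
    # output is the filtered complex-spectrum component on a frequency grid
    model = keras.Sequential([
        keras.layers.Dense(128, activation="relu"),
        keras.layers.Dense(128, activation="relu"),
        keras.layers.Dense(256),
    ])
    model.build(input_shape=(None, 64))
    return model

real_nn = build_net()
real_nn.compile(optimizer="adam", loss="mse")
# real_nn would be trained here on the 3D-TDSE dataset, e.g. with the
# increasing batch-size schedule described above:
# for m in range(3, 10):
#     real_nn.fit(x_train, y_real, epochs=500, batch_size=2**m)

imag_nn = build_net()
imag_nn.set_weights(real_nn.get_weights())   # copy weights (transfer learning)
for layer in imag_nn.layers[:-2]:            # freeze all layers except
    layer.trainable = False                  # the last two dense layers
imag_nn.compile(optimizer="adam", loss="mse")
```

Only the unfrozen layers of `imag_nn` are then updated during its (shorter) 50-epoch training sets.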

Macroscopic HHG simulations assisted by artificial intelligence
Once the NNs are trained and verified, we integrate the predicted dipole acceleration into the macroscopic HHG calculation through the exact solution of the Maxwell equations, following the method described in [3]. For that purpose, we have implemented a highly parallelized application with Open Multi-Processing (OpenMP) and Message Passing Interface (MPI) that can use multiple instances of our NNs at once, drastically reducing the time required to run macroscopic simulations.
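While the paper's implementation uses OpenMP and MPI, the dispatch pattern (one NN evaluation per macroscopic grid point, distributed over workers) can be sketched in Python with a worker pool. Here `predict_spectrum` is a purely hypothetical stand-in for one NN inference on a local (amplitude, phase) sample:

```python
from concurrent.futures import ThreadPoolExecutor

def predict_spectrum(point):
    """Stand-in for an NN inference call returning one atom's spectrum."""
    amplitude, phase = point
    return amplitude * phase  # placeholder result

# one (amplitude, phase) pair per point of the macroscopic grid
grid = [(a, p) for a in range(4) for p in range(4)]

# distribute single-atom predictions over a pool of workers;
# pool.map preserves the grid ordering of the results
with ThreadPoolExecutor(max_workers=4) as pool:
    spectra = list(pool.map(predict_spectrum, grid))
```

The per-point spectra would then be fed into the Maxwell propagation step, exactly where the exact 3D-TDSE dipoles would otherwise enter.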
We have validated our method with simulations that compute HHG from structured driving beams (Fig. 3), an emerging field with many applications at the nanoscale [6,7]. Our results [8] demonstrate that machine learning applied to HHG not only speeds up the simulations, but also reveals hidden signatures in the HHG process that are neglected in the standard approximations. Our findings show that AI applied to nonlinear phenomena such as HHG allows for the exploration of new physics at the nanometer and attosecond spatiotemporal scales.

Figure 2. a) Neural networks based on dense and transpose-convolutional layers to predict the real and imaginary parts of the spectrum produced by HHG at the microscopic level. The first network is trained entirely from scratch, while the second inherits its weights from the first (transfer learning) and only trains the last two dense layers. b, c) Training and validation of the NNs predicting the real and imaginary parts of the spectrum.

Figure 3. a) Intensity and phase profile of the driving beam, composed of two collinear Laguerre-Gauss modes (ℓ₁ = 1, ℓ₂ = 2), that we have used to validate our method. b) Far-field intensity spectrum, where the harmonics are integrated over the azimuthal coordinate.