Getting even with CLE

In the landscape of approaches to the simulation of lattice models with complex action, the Complex Langevin (CL) method appears as a straightforward method with a simple, well defined setup. Its applicability, however, is controlled by certain specific conditions which are not always satisfied. We discuss here the procedures to meet these conditions and the estimation of systematic errors, and present some actual achievements.

The proof of convergence relies on the holomorphy of the action and thus of the drift K, and on a well behaved distribution P at large values of the variables. Non-holomorphic behaviour is typically associated with zeroes of the measure ρ, leading to a meromorphic drift K. Holomorphy can be regained by cutting out small regions around the poles. The problem reduces therefore to the control of boundary terms, both at ∞ and on the small contours around the poles. Their vanishing would ensure, via integration by parts, correct convergence for any Langevin time τ and thus Eq. (1) [5], [8], cf. also [2]. In practice these conditions require quantitative estimates of the behaviour of P(x, y) at large y or near the poles of the drift K(x, y) [5], which in realistic cases call for on-line or a posteriori estimations. This implies approximations in interpreting the data and induces systematic errors. This is familiar from other simulation algorithms: reweighting, the R-algorithm, cooling or Wilson flow, etc.
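The basic CL setup can be illustrated on a toy model with an everywhere-holomorphic drift, where the criteria above are satisfied and the process demonstrably converges to the correct result. A minimal sketch (a complex Gaussian weight, our own illustrative choice, not one of the models studied here):

```python
import math
import random

# Toy model: rho(x) ~ exp(-sigma x^2 / 2) with sigma = 1 + i, for which the
# exact expectation value is <x^2> = 1/sigma = 0.5 - 0.5i.  The real variable
# x is complexified to z = x + iy; the drift K(z) = -sigma z is holomorphic
# everywhere, so the convergence criteria discussed in the text are met.
def run_cl(sigma=1 + 1j, dt=0.01, n_steps=500_000, n_burn=10_000, seed=42):
    random.seed(seed)
    z = 0.0 + 0.0j
    acc = 0.0 + 0.0j
    n_meas = 0
    for step in range(n_steps):
        # Euler step: complex drift plus real Gaussian noise of variance 2*dt.
        z += -sigma * z * dt + math.sqrt(2.0 * dt) * random.gauss(0.0, 1.0)
        if step >= n_burn:
            acc += z * z
            n_meas += 1
    return acc / n_meas

est = run_cl()
print(est)   # should be close to 0.5 - 0.5j
```

Here the drift is linear and entire, and P(x, y) falls off rapidly at large y; for meromorphic drifts, as discussed above, such a check must be supplemented by monitoring the boundary terms.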
The CL process can be redefined in many ways while still leading to the same expectation values, see discussion in [6] and further work cited there. In particular for gauge theories one efficient method is to keep the process near the SU(3) manifold by gauge transformations (gauge cooling [7]) minimising some measure of non-unitarity, such as a "unitarity norm" N_U = tr(U U†) − 3, U ∈ SL(3, C). One can envisage various ways of defining such cooling procedures by also minimising other quantities, see e.g. [8]. Other transformations of the process have been investigated [10].
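With the conventions above, the unitarity norm is straightforward to evaluate; a minimal numpy sketch (illustrative only):

```python
import numpy as np

def unitarity_norm(U):
    """N_U = Re tr(U U^dagger) - 3; zero iff U is unitary.  For det U = 1,
    tr(U U^dagger) >= 3 by the AM-GM inequality for the singular values,
    so N_U >= 0 on SL(3, C)."""
    return float(np.real(np.trace(U @ U.conj().T))) - 3.0

# An SU(3) element has N_U = 0 ...
U_su3 = np.eye(3, dtype=complex)
# ... while a non-unitary SL(3,C) element, e.g. the exponential of a
# traceless Hermitian generator, has N_U > 0 (here diag(e, 1, 1/e), det = 1).
U_sl3 = np.diag([np.e, 1.0, 1.0 / np.e]).astype(complex)

print(unitarity_norm(U_su3))   # 0.0
print(unitarity_norm(U_sl3))   # e^2 + 1 + e^-2 - 3 ≈ 5.52
```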

Lessons from simple models
We shall consider here an effective model defined as the nontrivial link U ∈ SL(3, C) of a temporal-gauge Polyakov loop in the field of its neighbours. Further simple models are discussed in [2], [5]. After transforming to the Cartan basis the action reads: with H the SL(3, C) Haar measure. α ∈ C simulates the staples. The fermionic determinant is: At large µ we can set D̄ ≃ 1. The parameters a, b have maxima of height 2^{2/3} at C = 2^{∓1/3}. We expect effects from the zeroes of the determinant when a, b > 1. We present results in terms of µ for a targeted 4-dim. lattice model with N_τ = 8 and κ = 0.12. The only relevant parameter is C and the interesting region is around C = 1, which means µ ≃ 1.425. The "bad" interval a, b > 1 corresponds to 1.29 < µ < 1.56. Larger κ shifts the a, b peaks to smaller µ; larger N_τ makes them sharper.
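These numbers can be checked under the standard heavy-dense parametrisation C = (2κ e^{µ})^{N_τ} (our assumption here, in lattice units; the paper's explicit formula for C is not reproduced above):

```python
import math

# Hypothetical consistency check of the quoted parameter values, assuming
# the heavy-dense combination C = (2*kappa*exp(mu))^{N_tau}.
kappa, n_tau = 0.12, 8

def C(mu):
    return (2.0 * kappa * math.exp(mu)) ** n_tau

# C = 1 at mu = ln(1/(2*kappa)):
mu_c = math.log(1.0 / (2.0 * kappa))
print(round(mu_c, 3))   # 1.427, i.e. mu ≈ 1.425 as quoted in the text

# The a, b peaks sit at C = 2^{-1/3} and C = 2^{+1/3}:
mu_minus = mu_c - math.log(2.0) / (3 * n_tau)
mu_plus = mu_c + math.log(2.0) / (3 * n_tau)
print(round(mu_minus, 3), round(mu_plus, 3))   # both inside the "bad" interval
```

Under this assumption the peak positions land at µ ≈ 1.40 and µ ≈ 1.46, well inside the quoted interval 1.29 < µ < 1.56.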
To illustrate the effect of the neighbours we consider two cases ("ordered" and "disordered"): The scatter plots for D show a peculiar structure in the "bad" interval (absent outside of it). Here the process also samples two exceptional regions (blue in the upper plots of Fig. 1) characterized by Re D < 0. They contribute wrong values for the observables. They are connected by a bottleneck at D = 0 to the "red" region; therefore Re D < 0 contributions come from trajectories which went near D = 0. Trajectories starting in "blue" typically switch to "red" and only rarely visit "blue" again.
From the relative weight of the "blue" regions one estimates a systematic error of about 1/1000, in agreement with the observed deviations; see the histogram in the bottom-left plot of Fig. 1. Discarding the "blue" contributions leads to good results in the ordered case, see the bottom-right plot of Fig. 1 comparing the simulation and the exact results from all and from the "red"/"blue" (w_±) regions. See Fig. 2 for an extended comparison in the ordered case (the agreement is less good in the disordered case).
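The bookkeeping behind such an estimate can be sketched as follows, on synthetic data (in practice the D samples and the observable come from the CL trajectory; all numbers below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the sampled determinant values along a trajectory:
# a small fraction of configurations is pushed into the "blue" region Re D < 0.
n = 100_000
d = rng.normal(loc=2.0, scale=0.4, size=n) + 1j * rng.normal(0.0, 0.4, size=n)
blue_idx = rng.choice(n, size=100, replace=False)
d[blue_idx] -= 4.0                        # push these to Re D < 0
obs = np.where(d.real < 0, 5.0, 1.0)      # "blue" samples give wrong values

blue = d.real < 0
w_blue = blue.mean()                      # relative weight of the blue region
print(w_blue)                             # ~ 1e-3, the systematic-error scale

# Observable with all contributions vs. with the "blue" ones discarded:
obs_all = obs.mean()
obs_red = obs[~blue].mean()
print(obs_all, obs_red)
```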

The Polyakov chain model
A Polyakov chain of N_τ links allows one to test procedures involving full gauge field integration over many links [7]. Since it can be reduced by gauge transformations to the one-link model of Sec. 3, one can compare with exact results. On the other hand, the direct integration in the full group space over the N_τ links allows one to see the effects of accumulated round-off errors, cooling, etc. This model is a step toward HD-QCD, which is no longer exactly solvable. We shall consider two versions, referred to as model 1 and model 2 below: with a, b of Sec. 3 and n_f the number of flavours. CL proceeds in the group algebra: with D_a the covariant derivative. Model 1 is the direct extension of that in Sec. 3; model 2 is the unresummed version of model 1 [11] and has no zeroes. We test two "cooling" methods to control the drift in the noncompact direction by minimising: This proceeds by non-compact gauge transformations using the covariant gradients of the norms. In Table 1 we show typical results for β = 0.5, N_τ = 8, κ = 0.12, µ = 1.4 and step size ε = 10^{-5} (non-adaptive). We cool after each updating step: one sweep of Cooling I (1, 0), one sweep of Cooling II (0, 1), or both (1, 1). The best results are obtained with more cooling sweeps. Remarks:
- It appears important to start in or near the SU(3) manifold and to cool during thermalisation;
- The value of N_U can vary (as long as it stays below 0.1), especially with (1, 1) cooling;
- For model 1, cooling also seems to mitigate the effects of the non-holomorphy.
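A simplified numerical sketch of such a cooling procedure for a chain of SL(3, C) links: steepest descent on the unitarity norm by non-compact gauge transformations at the interior sites. The generator basis, step size and number of sweeps are our own illustrative choices, not the implementation behind Table 1:

```python
import numpy as np

rng = np.random.default_rng(1)

def hermitian_basis():
    """A basis of traceless Hermitian 3x3 matrices (Gell-Mann-like)."""
    basis = []
    for i in range(3):
        for j in range(i + 1, 3):
            m = np.zeros((3, 3), complex); m[i, j] = m[j, i] = 1.0
            basis.append(m)
            m = np.zeros((3, 3), complex); m[i, j] = -1j; m[j, i] = 1j
            basis.append(m)
    basis.append(np.diag([1.0, -1.0, 0.0]).astype(complex))
    basis.append(np.diag([1.0, 1.0, -2.0]).astype(complex) / np.sqrt(3.0))
    return basis

LAM = hermitian_basis()

def expm_herm(h, scale=1.0):
    """exp(scale * h) for Hermitian h, via eigendecomposition."""
    w, v = np.linalg.eigh(h)
    return (v * np.exp(scale * w)) @ v.conj().T

def random_link(eps=0.3):
    """A random SL(3,C) link: Haar-like SU(3) times a small det-1 boost."""
    a = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
    q, r = np.linalg.qr(a)
    q = q @ np.diag(r.diagonal() / np.abs(r.diagonal()))
    q = q / np.linalg.det(q) ** (1.0 / 3.0)            # det -> 1
    h = sum(rng.normal(0.0, eps) * lam for lam in LAM)  # traceless Hermitian
    return q @ expm_herm(h)

def unitarity_norm(links):
    """N_U = sum_k [Re tr(U_k U_k^dagger) - 3]; zero iff all links unitary."""
    return sum(np.real(np.trace(u @ u.conj().T)) - 3.0 for u in links)

def cooling_sweep(links, alpha=0.02):
    """At each interior site k apply U_{k-1} -> U_{k-1} g^{-1}, U_k -> g U_k
    with g = exp(-alpha * grad), grad = sum_a f_a lam_a,
    f_a = 2 Re tr[lam_a (U_k U_k^dag - U_{k-1}^dag U_{k-1})].
    The chain product (the Polyakov loop) is exactly invariant."""
    for k in range(1, len(links)):
        a, b = links[k - 1], links[k]
        m = b @ b.conj().T - a.conj().T @ a
        grad = sum(2.0 * np.real(np.trace(lam @ m)) * lam for lam in LAM)
        links[k - 1] = a @ expm_herm(grad, alpha)       # right-multiply g^{-1}
        links[k] = expm_herm(grad, -alpha) @ b          # left-multiply g

chain = [random_link() for _ in range(8)]
loop_before = np.linalg.multi_dot(chain)
n0 = unitarity_norm(chain)
for _ in range(300):
    cooling_sweep(chain)
n1 = unitarity_norm(chain)
loop_after = np.linalg.multi_dot(chain)
print(n0, "->", n1)   # the norm decreases toward its gauge-invariant minimum
```

Note that the norm cannot be cooled to zero here: the non-unitarity of the chain product is gauge invariant and sets a dynamical floor, mirroring the stabilisation of N_U at a value "fixed by the dynamics" discussed in the Conclusions.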

HD-QCD
HD-QCD arises in the limit κ → 0, µ → ∞, where the determinant of QCD becomes a product of factors Eq. (4) over all Polyakov loops [12]. Calculations have explored, among other things, the phase diagram of the model, see e.g. [13] (reweighting), [14] (CL). As before, the correctness of the results depends on the weight of the Re D < 0 regions, which was there typically O(10^{-2} - 10^{-4}), and we found a corresponding systematic error of the same magnitude. Here this weight and the associated systematic error are difficult to estimate, due to the superposition of single regions in the product of factors which defines the determinant. We therefore need direct simulations and tests. Comparing CLE and reweighting in the region where the latter is reliable (µ < 1 with phase reweighting) [13] we have observed generally good agreement, but increasing deviations for β ≤ 5.7 [7]. Lower temperatures therefore require large N_τ, which would permit working at large β. This appears to be a question of computer power, but further tests may be necessary.

Full QCD in the κ expansion
Splitting the fermionic determinant into temporal and spatial contributions, we evaluate the first factor analytically (HD-QCD) and use a hopping expansion for the spatial part [15], [16]. This was shown to converge towards full QCD for sensible parameter choices already at around tenth order [15].
As we have learned from HD-QCD, we need β > 5.7 for the gauge cooling to be effective. Here we stay with β = 6. We present for illustration preliminary O(κ^{10}) results on lattices of 10^3 × N_τ at β = 6, κ = 0.12, aimed at the phase diagram of QCD. For an investigative study trying to reach low temperatures we use large N_τ, but we are aware of the finite-size effects induced by the wrong aspect ratio.
In the upper two plots of Fig. 3 we fix a rather low temperature, N_τ = 16, and vary µ. It appears that we can explore both the confining and the deconfining phase. At µ = 0 the gauge cooling ensures that the results agree with those obtained by reunitarisation, which is equivalent to a real Langevin simulation (blue and green dots at µ = 0 in the plots). Notice that the agreement may be lost by choosing an inadequate β < 6, where cooling becomes ineffective [17], [18].
In the lower two plots of Fig. 3 we vary the temperature T = 1/N_τ by increasing N_τ at fixed µ. Starting at high temperature in the deconfining phase for µ ≤ 1.1, we see the signal of the transition to the confining phase at T ∼ 0.1 (N_τ ≈ 10), particularly visible in the Polyakov loops. At µ ≥ 1.8 we see a fully saturated density at low temperature, decreasing slightly at higher T.
In the region 0 ≤ µ ≤ 1.1 cooling is effective and we observe the Silver Blaze phenomenon expected in the confining region already at N_τ = 16. Likewise the region µ ≥ 1.8 is accessible and shows the expected saturation effects. We also see from both sides the signal of an approaching transition, but with only standard gauge cooling we could not enter the region 1.1 < µ < 1.8 in order to resolve it. Further developments may require additional procedures such as dynamical stabilisation [10] and other improvements [19]. This is work in progress. See also [20].

Further prospects: SU(2) real time simulations
Revisiting direct simulations of the Minkowski action in SU(2) [21], [22], the use of gauge cooling [7] and dynamical stabilisation [10] shows promising improvements. Various integration contours are possible [22], [23]; we found an isosceles triangle to be optimal, with base of extension τ along the imaginary axis and extension t_0 in the real-time direction, discretised in N_t points. The left plot of Fig. 4 depicts the maximum possible real-time extent t_0 of the contour for fixed τ = 16 and N_t = 16. Here gauge fixing fails, but gauge cooling and dynamical stabilisation allow for a non-zero real-time extent in an interesting β-region. Note that for small N_t there can be deviations in the expectation value of the spatial plaquette between the euclidean and the real-time contour. This difference, however, becomes smaller as one increases N_t; the right plot depicts the difference as a function of N_t at fixed t_0. The discretisation of t_0 in N_t points is thus relevant. Larger β allows larger real-time extents. The maximum extent t_0 strongly depends on τ: for smaller τ, larger ratios t_0/τ are possible.
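The triangular contour can be parametrised explicitly; the following sketch uses our own conventions (vertices at 0, t_0 − iτ/2 and −iτ), which match the description above but not necessarily the exact discretisation used in [22]:

```python
def triangle_contour(t0, tau, n_t):
    """Points along an isosceles triangle in the complex time plane:
    from 0 out to the apex t0 - i*tau/2, then back to -i*tau, with n_t
    steps per half (an illustrative parametrisation of the contour
    described in the text)."""
    apex = t0 - 0.5j * tau
    half1 = [apex * k / n_t for k in range(n_t)]                   # 0 .. apex
    half2 = [apex + (-1j * tau - apex) * k / n_t for k in range(n_t + 1)]
    return half1 + half2

pts = triangle_contour(t0=2.0, tau=16.0, n_t=16)
print(len(pts), pts[0], pts[-1])   # 33 points, from 0 down to -16j
```

The imaginary part decreases monotonically from 0 to −τ (the euclidean extent), while the real part reaches t_0 exactly once, at the apex; the euclidean contour is recovered as t_0 → 0.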

Conclusions
Gauge cooling [7] (and its further developments) appears essential for controlling the behaviour of the CL distributions both in the far non-compact direction and near the poles [5], see also [1]. The efficiency of the CL method for lattice gauge theories therefore strongly depends on the effectiveness of cooling. The latter can be judged from the unitarity norm N_U, which is expected to stabilise at some value fixed by the dynamics. In the simulation numerical errors may eventually lead to a divergence of N_U, but as long as N_U stays bounded we can sample reliable results. The evolution of N_U under cooling permits one to judge the efficiency of the latter and therefore the reliability of the results; see Fig. 5. Isolated peaks do not spoil the simulation as long as there are sufficiently long intervals of small unitarity norm. Removing the occasional peaks and selecting the intervals of small N_U then leads to well behaved distributions and correct convergence of the simulation (remember that cooling is a gauge transformation). For tests of other cooling procedures see, e.g., [8], [9], [10].
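The selection of intervals of small N_U can be sketched as follows (the threshold and the minimal interval length are illustrative choices, not the values used in this work):

```python
import numpy as np

def reliable_mask(nu, threshold=0.1, min_run=100):
    """Keep only measurements taken while the unitarity norm N_U stays
    below `threshold` for at least `min_run` consecutive Langevin steps;
    isolated peaks and short dips below threshold are discarded."""
    nu = np.asarray(nu)
    below = nu < threshold
    mask = np.zeros_like(below)
    start = None
    for i, ok in enumerate(below):
        if ok and start is None:
            start = i                      # a small-norm interval begins
        elif not ok and start is not None:
            if i - start >= min_run:
                mask[start:i] = True       # interval long enough: keep it
            start = None
    if start is not None and len(below) - start >= min_run:
        mask[start:] = True
    return mask

# Synthetic N_U history: small norm with one isolated peak.
nu = np.full(1000, 0.01)
nu[500:505] = 5.0
m = reliable_mask(nu)
print(int(m.sum()))   # 995: the peak steps are dropped, both long intervals kept
```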
For QCD lattice calculations at µ > 0 our results in a tenth-order κ expansion show that simulations in the confining and deconfining phases are possible for β large enough. Work in progress concerns resolving the phase transition region. Low temperatures require large lattices in order to ensure a good aspect ratio. A lesson from the study of simple models is to check for the appearance of domains with Re D < 0, drop their contributions and estimate the possible systematic error. As shown in [5], the signal is already visible in the observables and can thus be easily monitored.
For SU(2) real-time evolution, cooling and dynamical stabilisation are shown to improve the simulations also at smaller β; further work is, however, still necessary to reach realistic scenarios.