Satisfying positivity requirement in the Beyond Complex Langevin approach

The problem of finding a positive distribution which corresponds to a given complex density is studied. By requiring that the moments of the positive distribution and of the complex density be equal, one can reduce the problem to solving the matching conditions. These conditions form a set of quadratic equations, so the Gröbner basis method was used to find their solutions when the problem is restricted to a few lowest-order moments. For a Gaussian complex density, these approximate solutions are compared with the exact solution, which is known in this special case.


Introduction
Our goal is to represent integrals of complex densities over a real measure by integrals of real, positive probability distributions over a complex measure. The Complex Langevin method [1,2] has become a popular, and in many cases successful [3][4][5][6], approach to this task. However, in some situations problems with its convergence were observed [7][8][9], so this topic attracted further investigation [10][11][12][13][14] and other methods were developed as well. The aim of this paper is the analysis of a one-dimensional problem [15][16][17][18]: to find a positive and normalizable probability distribution P(x, y) such that

∫ f(x) ρ(x) dx = ∫ f(x + iy) P(x, y) dx dy    (1)

for a given complex function ρ(x) and every f(x). This approach to the sign problem, although related to the Complex Langevin method [1,2], does not require the introduction of the corresponding stochastic process. Instead, this method focuses only on satisfying the so-called matching conditions which follow from eq. (1). Writing eq. (1) for the moments, one obtains the following set of conditions:

∫ x^r ρ(x) dx ≡ M_r = ∫ (x + iy)^r P(x, y) dx dy .
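As an illustration, the moments M_r can be evaluated numerically for a Gaussian complex density. The concrete form ρ(x) ∝ exp(−σx²/2) with Re σ > 0, and the value σ = (1 + i)/√2 used below, are assumptions of this sketch (the value is taken from the Gaussian case studied later):

```python
import numpy as np

# Moments M_r = ∫ x^r ρ(x) dx of a complex Gaussian density.
# The form ρ(x) ∝ exp(-σ x²/2), Re σ > 0, is an assumption of this sketch.
sigma = (1 + 1j) / np.sqrt(2)

x, dx = np.linspace(-12, 12, 200001, retstep=True)
rho = np.exp(-sigma * x**2 / 2)
rho /= rho.sum() * dx              # normalize so that M_0 = 1

def moment(r):
    return np.sum(x**r * rho) * dx

# For this normalized rho the second moment equals 1/σ, a complex number.
print(moment(2), 1 / sigma)
```

Note that the moments of a complex density are in general complex, which is precisely why an ordinary positive distribution on the real line cannot reproduce them.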
A proposal for how this infinite set of integral equations can be solved is presented in Section 2. In particular, a simple trick to satisfy positivity of P(x, y) and numerical methods to solve the resulting sets of polynomial equations are proposed. In Section 3, the numerical results for Gaussian and quartic actions are presented. The summary is provided in Section 4.

Satisfying positivity
In order to satisfy positivity of the probability distribution, one assumes that P(x, y) = |ψ(x, y)|². Then

∫ f(x) ρ(x) dx = ⟨ψ| f(x + iy) |ψ⟩ ,

where we employ the Dirac notation and think of ψ(x, y) as a wavefunction of some quantum system in two dimensions. After expanding |ψ⟩ in a basis, e.g. the basis of two non-interacting harmonic oscillators, |ψ⟩ = Σ_{m,n} c_mn |ψ_mn⟩, one obtains the following set of matching conditions:

M_r = Σ_{m'n'} Σ_{mn} c_{m'n'} c_{mn} ⟨ψ_{m'n'}| (x + iy)^r |ψ_{mn}⟩ ,    (5)

where c_mn ∈ R is assumed for simplicity. Using the properties of the harmonic oscillator basis, one can evaluate ⟨ψ_{m'n'}|(x + iy)^r|ψ_{mn}⟩, so these equations are second-degree polynomials in the coefficients c_mn. The quantity of interest in this approach is the probability distribution |ψ(x, y)|², not the "wavefunction" ψ(x, y) itself. An interesting side remark is that this construction bears resemblance to the Nelson approach to quantum mechanics [19]. This similarity is interesting from a theoretical point of view and may prove useful in the numerical analysis of the problem, when searching for efficient tools to solve it.
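The construction P = |ψ|² can be checked numerically on a grid. The sketch below is an illustration under stated assumptions: unit-frequency oscillator eigenfunctions and a toy coefficient set with only c_00 = 1, for which the r = 2 moment vanishes (⟨x²⟩ = ⟨y²⟩ = 1/2 and ⟨xy⟩ = 0 in the ground state):

```python
import numpy as np
from math import factorial, pi, sqrt
from numpy.polynomial.hermite import hermval

def phi(n, x):
    # normalized eigenfunction of a unit-frequency harmonic oscillator
    coeffs = np.zeros(n + 1)
    coeffs[n] = 1.0
    norm = sqrt(2.0**n * factorial(n) * sqrt(pi))
    return hermval(x, coeffs) * np.exp(-x**2 / 2) / norm

x, dx = np.linspace(-8, 8, 801, retstep=True)
X, Y = np.meshgrid(x, x, indexing='ij')

# toy coefficient set: c_00 = 1 only, so psi is the 2D ground state
c = {(0, 0): 1.0}
psi = sum(cmn * phi(m, X) * phi(n, Y) for (m, n), cmn in c.items())
P = psi**2                     # positive by construction

def moment(r):
    return np.sum((X + 1j * Y)**r * P) * dx * dx

print(moment(0), moment(2))
```

Evaluating the complex moments of such a P for a general coefficient set gives exactly the quadratic forms in c_mn that enter the matching conditions.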

Gröbner basis method
Each of the equations in the set (5) includes infinitely many c_mn's. Approximate solutions to such a set of equations can be found by introducing a cutoff on (m, n). That is, one keeps only a finite number of variables c_mn (e.g. such that m + n ≤ N) and an equal number of equations. Thus, the initial problem can be reduced to solving finite sets of second-degree polynomial equations. The most general approach to such a problem is the Gröbner basis method (see e.g. [20]), which provides an algorithm to find the common roots of any set of polynomial equations. Finding a Gröbner basis of a given set of equations can be used to bring it to a form where the variables are successively eliminated, i.e. such that the variables x_1, ..., x_{i−1} do not appear in the i-th equation.
For a brief introduction to Gröbner bases one needs to define a few concepts. First, a monomial ordering must be specified, which is a non-unique choice for multivariate polynomials. The most elementary and very useful choice is the lexicographic order:

x^α > x^β  :⇔  the leftmost nonzero entry of the vector α − β is positive,

where α = (α_1, α_2, ..., α_n) is the vector composed of the exponents of the first monomial and x^α is a shorthand for x_1^{α_1} x_2^{α_2} ··· x_n^{α_n}. The multidegree of a multivariate polynomial f is defined as the vector α ∈ Z^n_{≥0} corresponding to the largest monomial which appears in f with a non-zero coefficient. The leading monomial is LM(f) = x^{multideg(f)}, and the leading term LT(f) is the leading monomial together with its numerical coefficient. Clearly, all these quantities depend on the choice of ordering.
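The lexicographic comparison and the multidegree translate directly into code. The minimal helpers below are a hypothetical illustration, with monomials represented simply by their exponent vectors:

```python
# Lexicographic order on monomials given by exponent vectors
# alpha = (alpha_1, ..., alpha_n): x^alpha > x^beta iff the leftmost
# nonzero entry of alpha - beta is positive.
def lex_greater(alpha, beta):
    for a, b in zip(alpha, beta):
        if a != b:
            return a > b
    return False

# Multidegree: exponent vector of the largest monomial appearing in f;
# here f is given as a list of exponent vectors with nonzero coefficients.
def multideg(monomials):
    best = monomials[0]
    for m in monomials[1:]:
        if lex_greater(m, best):
            best = m
    return best

print(lex_greater((2, 0, 1), (1, 5, 0)))   # x1^2 x3 > x1 x2^5 in lex order
print(multideg([(1, 2), (2, 0), (0, 3)]))  # (2, 0), i.e. LM = x1^2
```

Changing the comparison function (e.g. to graded lexicographic order) changes multidegrees, leading monomials and, ultimately, the resulting Gröbner basis.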
One also needs to define the division algorithm for multivariate polynomials [20]:

Theorem. Fix a monomial ordering and let F = (f_1, ..., f_s) be an ordered set of polynomials in K[x_1, ..., x_n]. Then every multivariate polynomial f ∈ K[x_1, ..., x_n] can be written as f = q_1 f_1 + ... + q_s f_s + r, where q_i, r ∈ K[x_1, ..., x_n] and either r = 0 or no monomial of r is divisible by any of LT(f_1), ..., LT(f_s). The polynomial r is called the remainder of f on division by F, denoted f^F.

The definition of a Gröbner basis is the following:

A finite set of polynomials G = (g_1, ..., g_t), generating the same ideal as F = (f_1, ..., f_s), is its Gröbner basis if and only if all leading monomials of linear combinations of polynomials in F,

Σ_{i=1}^{s} h_i f_i ,   h_i ∈ K[x_1, ..., x_n],

are divisible by at least one of the LM(g_i).
Notice that, in this context, a linear combination is a sum of polynomial multiples of the elements of F. The most common procedure to find a Gröbner basis is Buchberger's algorithm. An important auxiliary quantity in this algorithm is the S-polynomial:

S(f, g) = (x^γ / LT(f)) · f − (x^γ / LT(g)) · g ,

where x^γ is the least common multiple of the leading monomials of f and g. S-polynomials are constructed so that the leading terms of the two polynomials cancel each other out.
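The S-polynomial construction can be sketched with sympy's LT/LM helpers. Note that exchanging f and g flips the overall sign, so this convention fixes S(f, g) only up to sign:

```python
import sympy as sp

x, y = sp.symbols('x y')

def s_poly(f, g, gens, order='lex'):
    # S(f,g) = (x^gamma / LT(f)) * f - (x^gamma / LT(g)) * g,
    # where x^gamma = lcm(LM(f), LM(g)); the leading terms cancel.
    xg = sp.lcm(sp.LM(f, *gens, order=order), sp.LM(g, *gens, order=order))
    return sp.expand(xg / sp.LT(f, *gens, order=order) * f
                     - xg / sp.LT(g, *gens, order=order) * g)

# equals y^2 - x^2, i.e. the same ideal element as x^2 - y^2 up to sign
print(s_poly(x**2*y + y, x*y**2 + x, (x, y)))
```

For the pair x²y + y, xy² + x this reproduces, up to sign, the polynomial x² − y² appearing in the worked example of this section.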
In the first step of Buchberger's algorithm, one takes the Gröbner basis candidate to be equal to the initial set of polynomials, G = F = (f_1, ..., f_s). In the next step, for every two polynomials p and q in G, the remainder S(p, q)^G of S(p, q) on division by G is calculated. If this quantity is non-zero, i.e. if S(p, q) is not divisible by the set G, S(p, q)^G is added to the Gröbner basis candidate G. This procedure is repeated until S(p, q)^G = 0 for every p, q ∈ G [20].

Example: The Gröbner basis method can be used to solve the set of equations

x²y + y = 0 ,   xy² + x = 0 .    (13)

Following the notation used in this section, F = {x²y + y, xy² + x} and the initial Gröbner basis candidate is G = {g_1, g_2} = {x²y + y, xy² + x}. In the first step, one obtains S(g_1, g_2) = x² − y², which is not further divisible by G, so S(g_1, g_2)^G = x² − y² and G must be expanded by g_3 = x² − y². Similarly, in the next steps of the algorithm, S(g_1, g_3) = S(g_1, g_3)^G = −y − y³ ≡ g_4, and the remaining pairs are treated analogously. One can check that after this extension S(g_i, g_j)^G = 0 for all elements of G. Therefore, the polynomials in the initial set of equations (13) can be replaced by the polynomials of the Gröbner basis. One finds g_1 = y g_3 − g_4 and g_5 = y g_4, so the first and the fifth polynomials are redundant, because they are satisfied due to the remaining ones. Hence the Gröbner basis can be reduced to G = {g_2, g_3, g_4}, since g_1 and g_5 are generated by g_3 and g_4. The equation g_4 = 0 involves only y and can be easily solved, y ∈ {0, i, −i}. After substituting these values into the equations g_2 = 0 and g_3 = 0, one finally finds all solutions of the set (13).
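The worked example can be cross-checked with sympy's groebner routine, which returns the reduced basis {g_2, g_3, g_4} directly (up to the sign of g_4, since sympy normalizes leading coefficients to one):

```python
import sympy as sp

x, y = sp.symbols('x y')

# Reduced Gröbner basis of {x^2 y + y, x y^2 + x} in lex order with x > y;
# sympy normalizes leading coefficients, so g_4 appears as y^3 + y.
G = sp.groebner([x**2*y + y, x*y**2 + x], x, y, order='lex')
print(G.exprs)

# The univariate element y^3 + y gives y in {0, i, -i}.
print(sp.solve(y**3 + y, y))
```

The last, univariate element of a lex-order basis is what makes the successive elimination of variables, mentioned above, possible.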

Solving matching conditions
The largest advantages of the Gröbner basis method are that it is exact and that it finds entire sets of solutions. On the other hand, its main drawback is that the numerical complexity increases rapidly with the number of equations and variables [21]. We have therefore applied another, approximate procedure for larger sets of equations. The approximate algorithm used in this paper is a numerical search for the local minima of the sum

Σ_i (LHS_i − RHS_i)²    (15)

with respect to the parameters c_00, c_01, c_10, ..., c_cutoff, where LHS_i and RHS_i are the left- and right-hand side of the i-th equation of a given set of polynomial equations. The starting point of the minimization procedure is chosen randomly in the k-dimensional space (k is the number of variables). If the minimization procedure leads to a point (c_00, c_01, c_10, ..., c_cutoff) at which the goal function (15) equals zero, this point is also a solution of the set of polynomial equations. This procedure is then repeated for a large enough number of starting points to obtain all solutions of the problem.
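The minimization procedure can be sketched as follows. The two toy second-degree equations below merely stand in for the matching conditions (the actual systems are written in the coefficients c_mn), and scipy's minimize with random restarts plays the role of Mathematica's FindMinimum[]:

```python
import numpy as np
from scipy.optimize import minimize

# Toy stand-in for the matching conditions:
#   c0^2 + c1^2 = 1 ,   c0 * c1 = 1/4
def equations(c):
    return [c[0]**2 + c[1]**2 - 1.0, c[0] * c[1] - 0.25]

def goal(c):
    # sum of squared residuals, cf. eq. (15)
    return sum(r**2 for r in equations(c))

rng = np.random.default_rng(0)
solutions = []
for _ in range(200):
    res = minimize(goal, rng.uniform(-2, 2, size=2))
    if res.fun < 1e-9:                      # the minimum reaches zero
        s = res.x
        if not any(np.allclose(s, t, atol=1e-4) for t in solutions):
            solutions.append(s)

print(len(solutions))
```

For this toy system there are exactly four real solutions; local minima at which the goal function stays away from zero do not solve the equations and are filtered out by the threshold.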

Gaussian case
The two methods proposed in the previous section were used to find a positive probability distribution P(x, y) in the Gaussian case, ρ(x) ∝ exp(−σx²/2) with complex σ.

For the numerical calculations Wolfram Mathematica 10 is used (procedures GroebnerBasis[] and FindMinimum[]). Our results were compared with the exact solution (17), which is known in closed form in this case [15,22] and is expressed through σ_R = Re σ, σ_I = Im σ and r = σ_R/σ_I. We introduce a cutoff m + n ≤ 4, neglect c_mn for odd m + n and take the following symmetry assumptions: c_20 = −c_02, c_40 = c_04 and c_31 = −c_13. Such a choice of symmetries is dictated by the behaviour of the coefficients c_mn of the exact solution (17). One obtains a system of 6 second-degree equations for 6 variables, where, on the left-hand sides, the moments M_r = ∫ x^r ρ(x) dx are calculated for σ = (1 + i)/√2. There are 12 solutions of this system of equations. The results provided by the Gröbner basis method are presented and compared with the exact solution (17) in Tab. 1. The corresponding contour plots of P(x, y) are depicted in Fig. 1. The most important observation from these calculations is the high degree of non-uniqueness of the problem. Both the Gröbner method and the minimization method lead to a large number of solutions, which increases with the number of matching conditions. Nevertheless, the solutions can be classified according to their stability as the cutoff is increased, as shown in Fig. 2, where for every cutoff the solutions are arranged into classes of solutions which are close to each other; proximity was determined by a distance measure in the space of the coefficients c_mn. In particular, there exists a stable solution present in all steps of the algorithm, as well as other, unstable solutions.
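The grouping of solutions into classes can be sketched as a simple greedy clustering. The solution vectors below are hypothetical, and the Euclidean distance in coefficient space is one natural choice of proximity measure, an assumption of this sketch:

```python
import numpy as np

# Greedy grouping of solution vectors (c_00, c_20, ...) into classes
# of nearby solutions, using Euclidean distance in coefficient space.
def classify(solutions, eps=0.1):
    classes = []
    for s in solutions:
        for cl in classes:
            if np.linalg.norm(s - cl[0]) < eps:
                cl.append(s)
                break
        else:
            classes.append([s])
    return classes

# hypothetical solution vectors, for illustration only
sols = [np.array(v) for v in
        [(1.00, 0.01), (0.99, 0.02), (-0.5, 0.5), (1.02, 0.00)]]
print([len(cl) for cl in classify(sols)])   # -> [3, 1]
```

A class that keeps reappearing as the cutoff grows, with slowly drifting representative vectors, is the signature of a stable solution.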

Quartic case
The same method can be applied to any other function ρ(x); the only difference is in the values of the moments on the RHS of eq. (5). Therefore, one can also study the case of a quartic action. For the numerical calculations, we have taken λ = (1 + i)/√2. Similarly to the previous case, the Gröbner basis method is efficient for no more than 6-8 variables, while for larger systems of equations the minimization method was used. The resulting positive distributions P(x, y) and the classification of the solutions are presented in Fig. 3. Again, a class of solutions which is present independently of the cutoff is observed (the first column in Fig. 3). Other solutions repeat for several cutoffs or appear only for a particular number of matching conditions, but are not fully cutoff-independent. A reasonable conjecture is that the stable class remains stable also for larger numbers of equations and that these solutions approximate an exact solution of the full set of equations (5).
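Assuming, for illustration, a quartic density of the form ρ(x) ∝ exp(−λx⁴) (the precise normalization and coefficient conventions of the action are an assumption of this sketch; Re λ > 0 ensures convergence), the moments entering the RHS of eq. (5) are obtained exactly as in the Gaussian case:

```python
import numpy as np

# Moments of a quartic complex density; rho(x) ~ exp(-lambda x^4)
# is an assumption of this sketch (Re lambda > 0 for convergence).
lam = (1 + 1j) / np.sqrt(2)

x, dx = np.linspace(-6, 6, 120001, retstep=True)
rho = np.exp(-lam * x**4)
rho /= rho.sum() * dx              # normalize so that M_0 = 1

def moment(r):
    return np.sum(x**r * rho) * dx

# Odd moments vanish because rho is even in x.
print(moment(1), moment(2))
```

Only these complex numbers M_r change between the Gaussian and the quartic case; the quadratic structure of the matching conditions in the c_mn stays the same.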