Parton distribution functions on the lattice and in the continuum

Ioffe-time distributions, which are functions of the Ioffe time $\nu$, are the Fourier transforms of parton distribution functions with respect to the momentum-fraction variable $x$. These distributions can be obtained from suitable equal-time, quark bilinear hadronic matrix elements, which can be calculated from first principles in lattice QCD, as has recently been argued. In this talk I present the first numerical calculation of the Ioffe-time distributions of the nucleon in the quenched approximation.


Introduction
The introduction of quasi parton distribution functions (quasi-PDFs) by X. Ji [1] opened a new avenue for non-perturbative calculations of parton distribution functions (PDFs), circumventing a number of difficulties Euclidean lattice QCD calculations had to face. As discussed in detail in [2][3][4], one computes in lattice QCD certain hadronic matrix elements related to PDFs, from which the PDFs can be extracted in a manner similar to how they are extracted from experimental cross sections. Several works numerically implementing these ideas [5][6][7][8][9] have already appeared in the literature. Methods for renormalizing these matrix elements have been constructed [10][11][12][13][14][15], and the necessary matching of quasi-PDFs to the light-cone PDFs has been worked out [16,17].
One of the issues quasi-PDF calculations have to face is that the matrix elements have to be computed with large-momentum hadron states. The momentum has to be large enough for the non-perturbative effects to be suppressed, allowing the perturbative matching formulas (currently at one loop) to work. As discussed in [18,19], non-perturbative evolution may dominate the approach of quasi-PDFs to the PDF limit. The understanding that these non-perturbative effects are related to the transverse structure of the hadron led to a new suggestion for how one may extract PDFs from lattice QCD [19]. In this talk, results from our first numerical exploration [20] are presented.

Formalism
In order to compute parton distribution functions in lattice QCD one may start from the equal-time hadronic matrix element with the quark and anti-quark fields separated by a finite distance. For non-singlet (iso-vector) distributions the matrix element

$$ M^\alpha(z,p) = \langle p|\, \bar\psi(0)\, \gamma^\alpha\, \hat{E}(0,z;A)\, \tfrac{\tau_3}{2}\, \psi(z)\, |p\rangle \qquad (1) $$

is introduced. Here $\hat{E}(0,z;A)$ is the $0 \to z$ straight-line gauge link in the fundamental representation, $\tau_3$ is the flavor Pauli matrix, and $\gamma^\alpha$ is an appropriate gamma matrix. By Lorentz invariance, this matrix element can be decomposed as

$$ M^\alpha(z,p) = 2 p^\alpha\, \mathcal{M}_p(-(zp), -z^2) + z^\alpha\, \mathcal{M}_z(-(zp), -z^2). \qquad (2) $$

From the $\mathcal{M}_p(-(zp), -z^2)$ part the twist-2 contribution to PDFs can be obtained in the limit $z^2 \to 0$. By taking $z = (0,0,0,z_3)$, $\alpha$ in the temporal direction, i.e. $\alpha = 0$, and the hadron momentum $p = (p^0, 0, 0, p)$, the $z^\alpha$ part drops out. In addition, we define the Lorentz invariant $\nu = -(zp) = p\, z_3$, which is known in the literature as the Ioffe time [21,22]. With these definitions,

$$ M^0(z_3, p) = 2 p^0\, \mathcal{M}_p(\nu, z_3^2). \qquad (3) $$

It should be noted here that the quasi-PDF $Q(y, p)$ is related to $\mathcal{M}_p(\nu, z_3^2)$ by

$$ Q(y, p) = \frac{1}{2\pi} \int_{-\infty}^{\infty} d\nu\, e^{-i y \nu}\, \mathcal{M}_p(\nu, \nu^2/p^2). \qquad (4) $$

Furthermore, we introduce the Ioffe-time PDFs $\mathcal{M}(\nu, \mu^2)$, defined at a scale $\mu^2 = 1/z_3^2$, which are the Fourier transform of the regular PDFs $f(x, \mu^2)$:

$$ \mathcal{M}(\nu, \mu^2) = \int_{-1}^{1} dx\, e^{i x \nu}\, f(x, \mu^2). \qquad (5) $$

The scale dependence of the Ioffe-time PDF can be derived from the DGLAP evolution of the regular PDFs. The Ioffe-time PDFs satisfy the evolution equation

$$ \frac{d}{d \ln z_3^2}\, \mathcal{M}(\nu, z_3^2) = -\frac{\alpha_s}{2\pi}\, C_F \int_0^1 du\, B(u)\, \mathcal{M}(u\nu, z_3^2), \qquad (6) $$

where $C_F = 4/3$ and $B(u) = \left[\frac{1+u^2}{1-u}\right]_+$ is the leading-order evolution kernel for the non-singlet quark PDF [22]. The $[\ldots]_+$ denotes the "plus" prescription, i.e.

$$ \int_0^1 du\, [B(u)]_+\, f(u) = \int_0^1 du\, \frac{1+u^2}{1-u}\, \big[ f(u) - f(1) \big]. $$
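As a concrete illustration of the Fourier-transform relation between a PDF and its Ioffe-time distribution, the sketch below computes the cosine and sine transforms of a simple normalized model valence distribution $q_v(x) \propto x^a(1-x)^b$. The exponents $a = 0.5$, $b = 3$ are hypothetical placeholder values chosen only for illustration, not fit results from this work.

```python
import numpy as np
from math import gamma
from scipy.integrate import quad

def qv(x, a=0.5, b=3.0):
    """Model valence PDF N(a,b) x^a (1-x)^b, normalized to unit integral.
    The exponents are hypothetical placeholders, not fit results."""
    norm = gamma(a + b + 2.0) / (gamma(a + 1.0) * gamma(b + 1.0))
    return norm * x**a * (1.0 - x)**b

def ioffe_time_pdf(nu, pdf=qv):
    """Real and imaginary parts of the Ioffe-time distribution of `pdf`,
    i.e. its cosine and sine transforms on 0 < x < 1 (antiquarks are
    neglected here, so both parts come from the same function)."""
    re, _ = quad(lambda x: np.cos(nu * x) * pdf(x), 0.0, 1.0)
    im, _ = quad(lambda x: np.sin(nu * x) * pdf(x), 0.0, 1.0)
    return re, im

# At nu = 0 the real part reduces to the normalization integral of q_v,
# which equals 1 by vector-current conservation.
```

Note that the real part at $\nu = 0$ is fixed to unity by the normalization of $q_v$, which is the same statement as vector-current conservation used below.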
The Ioffe-time PDFs can now be obtained from the matrix element of Eq. (3) in the small-$z_3$ limit as

$$ \mathcal{M}_p(\nu, z_3^2) = \mathcal{M}(\nu, 1/z_3^2) + O(z_3^2). \qquad (7) $$

However, the corrections of $O(z_3^2)$ are large, rendering the above relation practically useless. On the other hand, by the conservation of the vector current one knows that

$$ \mathcal{M}_p(0, z_3^2) = 1 + O(z_3^2). \qquad (8) $$

As was argued in [19], the $z_3^2$ corrections in both Eq. (7) and Eq. (8) are related to the transverse structure of the hadron and are expected to cancel approximately in a ratio. Therefore it was argued that the reduced Ioffe-time distribution

$$ \mathfrak{M}(\nu, z_3^2) \equiv \frac{\mathcal{M}_p(\nu, z_3^2)}{\mathcal{M}_p(0, z_3^2)} \qquad (9) $$

has much smaller $O(z_3^2)$ corrections, and therefore this ratio could be used to extract the Ioffe-time PDFs in a more practical manner. Furthermore, the ratio in Eq. (9) has a well-defined continuum limit and does not require renormalization. Therefore, it is well suited for lattice QCD calculations.

Numerical tests
We performed lattice QCD calculations in the quenched approximation at β = 6.0 on 32³ × 64 lattices, with the non-perturbatively tuned clover fermion action, with clover coefficient computed by the Alpha collaboration [23]. Correlation functions were computed on a total of 500 configurations separated by 1000 updates, each update consisting of four over-relaxation and one heatbath sweeps. On each configuration we computed correlation functions from six randomly placed, Gaussian-smeared point sources. The pion and nucleon masses were determined to be 601(1) MeV and 1411(4) MeV, respectively. The lattice spacing in our calculation is a = 0.093 fm, as determined by the Alpha collaboration [24].
The maximum nucleon momentum used in our calculation was 2.5 GeV. Within this momentum range, the continuum dispersion relation for the nucleon was satisfied within the errors of the calculation, indicating small lattice artifacts of O(ap). In Fig. 1 (left) we plot the nucleon energy as a function of momentum along with the continuum dispersion relation.
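For orientation, the continuum dispersion relation can be evaluated numerically from the parameters quoted above ($a = 0.093$ fm, $32^3$ spatial volume, $M_N = 1411$ MeV); the sketch below converts a lattice Fourier mode $n$ to physical momentum and energy. Only the conversion constant $\hbar c$ is external input; everything else follows the numbers in the text.

```python
import numpy as np

HBARC = 0.1973   # GeV*fm (standard conversion constant)
A_FM = 0.093     # lattice spacing, from the Alpha collaboration
L = 32           # spatial lattice extent
M_N = 1.411      # nucleon mass in GeV, as determined in this calculation

def momentum(n):
    """Physical momentum in GeV for the lattice Fourier mode n along z,
    p = 2*pi*n / (L*a)."""
    return 2.0 * np.pi * n / (L * A_FM) * HBARC

def energy(n):
    """Continuum dispersion relation E = sqrt(M_N^2 + p^2), in GeV."""
    return np.sqrt(M_N**2 + momentum(n)**2)

# Mode n = 6 corresponds to the maximum momentum used, about 2.5 GeV.
```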
The computation of the matrix elements was performed using the methodology described in [25] with an operator insertion given by Eq. (1).
Following [25], we need to compute a regular nucleon two-point function

$$ C(p, t) = \langle N_p(t)\, \bar{N}_p(0) \rangle, \qquad (10) $$

where $N_p(t)$ is a helicity-averaged, non-relativistic nucleon interpolating field with momentum $p$. The quark fields in $N_p(t)$ are smeared with a gauge-invariant Gaussian smearing. This choice of interpolating field is known to couple well to the nucleon ground state (see the discussion in [25]). The quark smearing width was optimized to give good overlap with the nucleon ground state within the range of momenta in our calculation. In addition we need the correlator

$$ C^J(p, t) = \langle N_p(t) \sum_\tau J(z, \tau)\, \bar{N}_p(0) \rangle, \qquad J(z, t) = \bar\psi(z,t)\, \gamma^0\, \hat{E}(z,0;A)\, \tfrac{\tau_3}{2}\, \psi(0,t), \qquad (11) $$

with $\tau_3$ being the flavor Pauli matrix. The proton momentum and the displacement of the quark fields were both taken along the $\hat z$ axis ($z = z_3 \hat z$ and $\vec p = p\, \hat z$). We define the effective matrix element as

$$ M_{\rm eff}(t) = \frac{C^J(p, t+1)}{C(p, t+1)} - \frac{C^J(p, t)}{C(p, t)}. \qquad (12) $$

As was shown in [25], the desired matrix element of Eq. (1) can be extracted at large Euclidean time separation as

$$ \lim_{t \to \infty} M_{\rm eff}(t) = \frac{1}{2 p^0}\, M^0(z_3, p) = \mathcal{M}_p(\nu, z_3^2), \qquad (13) $$

where $p^0$ is the energy of the nucleon. The resulting effective matrix element has contamination from excited states that scales as $e^{-t \Delta E}$, where $t$ is the Euclidean time separation of the nucleon creation and annihilation operators and $\Delta E$ is the mass gap to the first excited state of the nucleon. This contamination can be fitted away, and this is what we did in our calculations. This methodology allows for the computation of all nucleon matrix elements corresponding to different nucleon momenta and spin polarizations, as well as different diagonal flavor structures, without additional computational cost. This fact, together with our emphasis on having as many nucleon momentum states as possible, makes the total computational cost of this approach less than the equivalent cost of performing the calculations with the sequential-source method. This approach has recently been used successfully for both single- and multi-nucleon matrix element calculations [26][27][28].
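The excited-state fit described above can be sketched as follows: synthetic data standing in for the effective matrix element are generated from a single-excited-state form $M_{\rm eff}(t) = M\,(1 + A\, e^{-\Delta E\, t})$, and the ground-state value $M$ is recovered by fitting the contamination away. The fit ansatz, parameter values, and noise level are illustrative assumptions, not the actual lattice fits.

```python
import numpy as np
from scipy.optimize import curve_fit

def m_eff_model(t, M, A, dE):
    """Effective matrix element with one excited-state term:
    M_eff(t) = M * (1 + A * exp(-dE * t)).  A sketch of the fit form;
    the actual analysis may use a different ansatz."""
    return M * (1.0 + A * np.exp(-dE * t))

# Synthetic data standing in for the lattice effective matrix element;
# the parameter values and noise level are illustrative only.
t = np.arange(2, 13, dtype=float)
rng = np.random.default_rng(7)
true_params = (0.85, 0.40, 0.60)
data = m_eff_model(t, *true_params) * (1.0 + 0.002 * rng.standard_normal(t.size))

# Fitting away the e^{-dE t} contamination recovers the ground-state value M.
popt, pcov = curve_fit(m_eff_model, t, data, p0=(1.0, 0.5, 0.5))
M_fit = popt[0]
```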
In order to renormalize our lattice matrix elements we note that, for $z_3 = 0$, the matrix element $\mathcal{M}_p(0, 0)$ corresponds to a local vector (iso-vector) current and therefore should be equal to 1. However, on the lattice this is not the case due to lattice artifacts. Therefore we introduce a renormalization constant

$$ Z_p = \lim_{t \to \infty} M_{\rm eff}(t)\big|_{z_3 = 0}. \qquad (14) $$

The factor $Z_p$ has to be independent of $p$. However, again due to lattice artifacts or potential fitting systematics, this is not the case. For this reason we renormalize the matrix element for each momentum with its own $Z_p$ factor, in this way taking advantage of maximal statistical correlations to reduce statistical errors, as well as of the cancellation of lattice artifacts in the ratio. Therefore, our reduced matrix element is extracted using the double ratio

$$ \mathfrak{M}(\nu, z_3^2) = \lim_{t \to \infty} \frac{ M_{\rm eff}(t; z_3, p) \,/\, M_{\rm eff}(t; 0, p) }{ M_{\rm eff}(t; z_3, 0) \,/\, M_{\rm eff}(t; 0, 0) }, \qquad (15) $$

which takes care of the renormalization of the vector current according to Eq. (14). In practice, the infinite-$t$ limit is obtained with a fit to a constant for a suitable choice of fitting range. In all cases we studied, the average $\chi^2$ per degree of freedom was $O(1)$. Typical fits used to extract the reduced matrix element are presented in Fig. 1. All fits are performed with the full covariance matrix, and the error bars are determined with the jackknife method. The reduced matrix element defined in Eq. (15) has a well-defined continuum limit, and no additional renormalization is required. This continuum limit is taken at fixed $\nu$ and $z^2$, as well as at fixed quark mass.
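A minimal sketch of the double-ratio construction and the jackknife error estimate follows; the four input columns are synthetic per-configuration stand-ins for the effective matrix elements (the real analysis first fits their $t$ dependence), with fake renormalization factors baked in to show how they cancel.

```python
import numpy as np

def jackknife(samples, estimator):
    """Delete-one jackknife mean and error; `samples` has one row per
    gauge configuration and `estimator` acts on the column means."""
    n = len(samples)
    reps = np.array([estimator(np.delete(samples, i, axis=0).mean(axis=0))
                     for i in range(n)])
    return reps.mean(), np.sqrt((n - 1) * reps.var())

def double_ratio(m):
    """Double ratio (M_eff(z3,p)/M_eff(0,p)) / (M_eff(z3,0)/M_eff(0,0)):
    each momentum is renormalized by its own z3 = 0 factor Z_p."""
    m_zp, m_0p, m_z0, m_00 = m
    return (m_zp / m_0p) / (m_z0 / m_00)

# Synthetic per-configuration values (illustrative numbers only): the
# "true" reduced matrix element here is 0.6/0.9 = 2/3, with fake
# renormalization factors Z_p = 1.10 and Z_0 = 1.05 multiplied in.
rng = np.random.default_rng(1)
n_cfg = 500
data = np.column_stack([
    0.60 * 1.10 + 0.02 * rng.standard_normal(n_cfg),  # M_eff(z3, p)
    1.10 + 0.02 * rng.standard_normal(n_cfg),          # M_eff(0,  p)
    0.90 * 1.05 + 0.02 * rng.standard_normal(n_cfg),   # M_eff(z3, 0)
    1.05 + 0.02 * rng.standard_normal(n_cfg),          # M_eff(0,  0)
])
mean, err = jackknife(data, double_ratio)
```

The renormalization factors drop out of the double ratio, and the jackknife propagates the correlated statistical fluctuations through the non-linear ratio.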

Results
In Fig. 2 we plot the ratio $\mathfrak{M}(\nu, z_3^2)$ as a function of the Ioffe time $\nu$. As one can see, the data approximately fall on the same curve. For the imaginary part, the situation is similar. This phenomenon can be understood if an approximate factorization of the longitudinal and transverse structure of the hadron occurs.
The real part of the Ioffe-time distribution is the cosine Fourier transform of the valence distribution, while the imaginary part is the sine transform of $q^+(x) = q(x) + \bar q(x)$, which may also be represented as $q^+(x) = q_v(x) + 2\bar q(x)$:

$$ {\rm Re}\, \mathfrak{M}(\nu) = \int_0^1 dx\, \cos(\nu x)\, q_v(x), \qquad {\rm Im}\, \mathfrak{M}(\nu) = \int_0^1 dx\, \sin(\nu x)\, q^+(x). \qquad (16) $$

In order to guide the eye, we also plot the Ioffe-time distribution resulting from a valence PDF of the form

$$ q_v(x) = N(a,b)\, x^a (1-x)^b, \qquad (17) $$

normalized to unity. If we neglect the antiquark contribution and use $q^+(x) = q_v(x)$, then both the real and imaginary parts are obtained as the cosine and sine Fourier transforms of this function. The parameters were chosen so that the resulting curve closely follows the data for the real part. As argued in [20], the fact that the imaginary-part data do not follow the sine-transform curve is an indication of non-zero antiquark densities.

By looking more closely at the data plotted as a function of the Ioffe time, we can see that there is a residual $z_3$ dependence. This is more visible when, for a particular $\nu$, there are several data points corresponding to different values of $z_3$. This is expected because, as discussed above, different values of $z_3^2$ at the same $\nu$ correspond to the Ioffe-time distribution at different scales. It is interesting to check whether the residual scatter in the data points is consistent with evolution. By solving the evolution equation at leading order, the Ioffe-time PDF at $z_3$ can be related to that at $z_0 < z_3$ by

$$ \mathfrak{M}(\nu, z_0^2) = \mathfrak{M}(\nu, z_3^2) + \frac{\alpha_s}{2\pi}\, C_F\, \ln\!\frac{z_3^2}{z_0^2} \int_0^1 du\, B(u)\, \mathfrak{M}(u\nu, z_3^2). \qquad (18) $$

This formula is only applicable at small $z_3$, and therefore we check its effect on our data at values $z_3 \leq 4a$, which correspond to energy scales larger than 500 MeV. More specifically, we fix the reference point at $z_0 = 2a$, corresponding, at the leading-logarithm level, to the $\overline{\mathrm{MS}}$-scheme scale $\mu_0 = 1$ GeV, and evolve the rest of the points to that scale. The real-part data of our Ioffe-time PDF are shown in the left panel of Fig. 3 (top). As one can see, there is a visible scatter of the data points.
Using $\alpha_s/\pi = 0.1$, we calculate the "evolved" data points corresponding to the function $\mathfrak{M}(\nu, z_0^2)$. The results are shown in the right panel of Fig. 3 (top). The evolved data points now lie very close to a universal curve, indicating that the original scatter was due to the evolution of the Ioffe-time PDF with $z_3^2$. The imaginary-part data of our Ioffe-time PDF, shown in Fig. 3 (bottom; left: original data, right: evolved) and evolved in the same manner, also support this claim.
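The one-step leading-log evolution used above can be sketched numerically. A model Ioffe-time distribution (the cosine transform of a hypothetical $x^{1/2}(1-x)^3$ valence PDF) stands in for the lattice data, and the code applies the leading-order kernel $B(u) = [(1+u^2)/(1-u)]_+$ with the plus prescription implemented as a subtraction:

```python
import numpy as np
from math import gamma
from scipy.integrate import quad

CF = 4.0 / 3.0

def model_itd(nu, a=0.5, b=3.0):
    """Cosine transform of a normalized x^a (1-x)^b valence PDF, standing
    in for the lattice data (hypothetical exponents)."""
    norm = gamma(a + b + 2.0) / (gamma(a + 1.0) * gamma(b + 1.0))
    val, _ = quad(lambda x: np.cos(nu * x) * norm * x**a * (1.0 - x)**b,
                  0.0, 1.0)
    return val

def evolve(itd, nu, z3, z0, alpha_over_pi=0.1):
    """One leading-log step from scale 1/z3 to 1/z0 (z0 < z3):
    M(nu, z0^2) = M(nu, z3^2) + (alpha_s C_F / 2 pi) * ln(z3^2/z0^2)
                  * int_0^1 du (1+u^2)/(1-u) [M(u nu) - M(nu)],
    the plus prescription being implemented by the subtraction."""
    def integrand(u):
        if u >= 1.0:          # removable singularity; measure-zero guard
            return 0.0
        return (1.0 + u * u) / (1.0 - u) * (itd(u * nu) - itd(nu))
    conv, _ = quad(integrand, 0.0, 1.0)
    return itd(nu) + 0.5 * alpha_over_pi * CF * np.log(z3**2 / z0**2) * conv
```

At $\nu = 0$ the plus prescription makes the evolution step vanish, so the vector-current normalization is preserved; evolving to a smaller $z_3$ (higher scale) raises the distribution at $\nu > 0$ toward the universal curve.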
It is reasonable to expect that leading-order evolution cannot be extended to very low scales. However, it is also known that evolution effectively stops below a certain scale. By observing our data we can deduce that this is indeed the case for length scales $z_3 \geq 6a$. We therefore adopt an evolution that leaves the PDF unchanged for length scales above $z_3 = 6a$ and use the leading-order perturbative evolution formula to evolve to smaller $z_3$ scales. The resulting evolved data are presented in Fig. 4. By fitting these evolved points with the cosine Fourier transform $\mathcal{M}(\nu; a, b)$ of the normalized valence PDF $q_v(x) = N(a,b)\, x^a (1-x)^b$, we obtain $a = 0.36(6)$ and $b = 3.95(22)$, where the errors are statistical only. This fit, with its corresponding error band, is also plotted in Fig. 4.
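The fit described above can be sketched as a least-squares fit of the cosine transform $\mathcal{M}(\nu; a, b)$ to data points; here the points are synthetic and noiseless, generated from the quoted central values of the exponents, whereas the real analysis would weight by the jackknife covariance of the lattice data.

```python
import numpy as np
from math import gamma
from scipy.integrate import quad
from scipy.optimize import curve_fit

def itd_model(nu, a, b):
    """Cosine transform M(nu; a, b) of q_v(x) = N(a,b) x^a (1-x)^b, with
    N(a,b) = Gamma(a+b+2) / (Gamma(a+1) Gamma(b+1)) so that q_v
    integrates to one."""
    norm = gamma(a + b + 2.0) / (gamma(a + 1.0) * gamma(b + 1.0))
    transform = np.vectorize(
        lambda v: quad(lambda x: np.cos(v * x) * norm * x**a * (1.0 - x)**b,
                       0.0, 1.0)[0])
    return transform(nu)

# Noiseless synthetic points generated from the central values quoted in
# the text, standing in for the evolved lattice data of Fig. 4.
nu_pts = np.linspace(0.0, 8.0, 17)
data = itd_model(nu_pts, 0.36, 3.95)

popt, pcov = curve_fit(itd_model, nu_pts, data, p0=(0.5, 3.0))
a_fit, b_fit = popt
```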
Treating $z_3 = 2a$ as the $\overline{\mathrm{MS}}$ scale $\mu = 1$ GeV, one can further evolve the curve to the standard reference scale $\mu^2 = 4$ GeV$^2$ of the global fits; see the right panel of Fig. 4.¹ Our resulting curve closely follows the phenomenologically expected behavior. Given our limited range of $\nu$, the small-$x$ results are prone to larger systematic errors that need to be studied more carefully. However, it is clear that in order to obtain results that reproduce the experimentally determined PDFs, one needs to perform more realistic dynamical-fermion calculations, including quarks with physical masses, as well as treat evolution at higher accuracy than was used here.

¹ We are very grateful to Nobuo Sato, who performed this numerical evolution and provided the figure.

Summary
In this talk a new approach for obtaining PDFs from lattice QCD calculations is presented. We introduce a ratio of matrix elements that takes care of UV divergences, allowing for a well-defined continuum limit. In addition, this ratio has improved convergence to the light-cone limit, making these calculations practical with realistic computational resources. We tested this approach in the quenched approximation in order to understand the basic features of the method and work out the details of the methodology. An important finding of our calculations is that our data are consistent with the well-known scale evolution of the PDFs. Armed with the lessons of the quenched approximation, we are currently applying this approach to realistic lattice QCD calculations, aiming at a precise determination of PDFs from lattice QCD.