Radiative Corrections for a Precision Determination of the Fine Structure Constant

We discuss the implications of a newly proposed approach to determine a_μ^HLO and α_QED by using space-like kinematics.


Introduction
This talk is dedicated to the memory of Lev Nikolaevich Lipatov.
Why do physicists carry out such complex, lengthy and cumbersome calculations? Marcus Tullius Cicero states [1]: "Historia magistra vitae (est)". Let us, therefore, recall a few examples. Let us start with Tycho Brahe, who for decades collected a huge amount of sky observations and astronomical data on the positions of the planets, with no telescopes, at naked eye. Johannes Kepler analysed Brahe's data and published in 1609 the book "Astronomia Nova" [2]. We all know the result of these observations and of their interpretation: the three laws of planetary motion. We still have, as well, the pages of Kepler's log book. These pages, and the density of Kepler's mathematical work, speak by themselves of the amount of work that Kepler did to collect and process the data of Brahe's observations. Sir Isaac Newton was great and skilled not only in the production of ideas but also in many elaborate calculations. Coming to this workshop, Stefano Laporta will present us the result of his work that lasted several years. Although Stefano has accustomed us to the complexity and accuracy of his elaborate computational techniques, the results of this latest effort are really something extraordinary. We could go on listing many other examples of physicists who, for years, have engaged themselves in complex and challenging calculations crucial for understanding aspects of primary importance in physics. Considering Quantum Electrodynamics, recall Freeman J. Dyson's renowned article of more than sixty years ago [4]; ref. [5] quotes an interview in which Dyson states: "I always felt it was a miracle that electrons actually behaved the way the theory said". And later: "Truth to me means agreeing with the experiments, ... For a theory to be true it has to describe accurately what really happens in the experiments". Moreover: "The nature of a future theory is not a profitable subject for theoretical speculations.
[A] future theory will be built first upon the results of future experiments" [5]. Despite the serious doubts of P. A. M. Dirac [6] about the whole renormalization procedure, we continue, stubbornly, to evaluate α_QED and to confront it with extremely accurate measurements [7]. On the theoretical side, the determination of α_QED strongly depends on the accurate evaluation of the radiative corrections. Radiative corrections start to play a prominent role only when a field has become established and mature, and only when the collection of experimental data has become abundant and accurate. These are the conditions that define a solid theoretical and experimental basis for further developments. The case we are considering in this talk requires a well-founded approach, both experimentally and theoretically. The leading-order hadronic contribution to the muon g-2 is given by the well-known formula

  a_μ^HLO = (α/π²) ∫₀^∞ (ds/s) K(s) Im Π_had(s + iε),   (1)

where Π_had(s) is the hadronic part of the photon vacuum polarization, s > 0, and

  K(s) = ∫₀¹ dx x²(1−x) / [x² + (1−x)(s/m_μ²)]   (2)

is a positive kernel function, with m_μ the muon mass. As the total cross section for hadron production in low-energy e⁺e⁻ annihilations is related to the imaginary part of Π_had(s) via the optical theorem, the dispersion integral in eq. (1) is computed by integrating experimental time-like (s > 0) data up to a certain value of s. The high-energy tail of the integral can be calculated by using perturbative QCD.
Alternatively, if we exchange the x and s integrations in eq. (1), we obtain

  a_μ^HLO = (α/π) ∫₀¹ dx (x−1) Π̄_had[t(x)],   (3)

where Π̄_had(t) = Π_had(t) − Π_had(0) and

  t(x) = x²m_μ²/(x−1) ≤ 0   (4)

is a space-like squared four-momentum. If we invert eq. (4), we get x = (1−β)(t/2m_μ²), with β = (1−4m_μ²/t)^(1/2), and from eq. (3) we obtain

  a_μ^HLO = (α/π) ∫₋∞^0 dt f(t) Π̄_had(t),   f(t) = [x(t)−1]³ / {m_μ² x(t) [2−x(t)]}.   (5)

Equation (5) has been used for lattice QCD calculations of a_μ^HLO [8]; while the results are not yet competitive with those obtained with the dispersive approach via time-like data, their errors are expected to decrease significantly in the next few years [8].
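The equivalence of the time-like and space-like representations obtained by exchanging the integrations can be checked numerically with a toy spectral function. The sketch below assumes a hypothetical Im Π_had, constant on an interval [s₀, s₁] with toy parameters, and uses the once-subtracted dispersion relation Π̄_had(t) = (t/π) ∫₀^∞ ds Im Π_had(s)/[s(s−t)]; the two integrals must coincide.

```python
# Toy numerical check that the time-like representation, eqs. (1)-(2), and
# the space-like representation, eqs. (3)-(4), give the same a_mu^HLO.
# Im Pi_had is a toy spectral function (constant g on [s0, s1]); g, s0, s1
# are hypothetical values, not real hadronic data.
import numpy as np
from scipy.integrate import quad

alpha = 1/137.035999
m_mu = 0.1056584            # muon mass, GeV
g, s0, s1 = 0.01, 0.1, 2.0  # toy spectral density and support (assumptions)

def K(s):
    # positive kernel function of eq. (2)
    return quad(lambda x: x**2*(1 - x)/(x**2 + (1 - x)*s/m_mu**2), 0, 1)[0]

# eq. (1): integrate the toy Im Pi_had over time-like s
a_timelike = (alpha/np.pi**2)*quad(lambda s: g*K(s)/s, s0, s1,
                                   epsabs=1e-13, epsrel=1e-10)[0]

def Pi_bar(t):
    # once-subtracted dispersion relation for the toy Im Pi_had, t < 0
    return (g/np.pi)*(np.log((s1 - t)/s1) - np.log((s0 - t)/s0))

t_x = lambda x: x**2*m_mu**2/(x - 1)   # space-like t of eq. (4)

# eq. (3): the same quantity from space-like (t < 0) input only
a_spacelike = (alpha/np.pi)*quad(lambda x: (x - 1)*Pi_bar(t_x(x)), 0, 1,
                                 epsabs=1e-13, epsrel=1e-10)[0]

print(a_timelike, a_spacelike)  # the two representations coincide
```

The same exchange of integrations that leads from eq. (1) to eq. (3) guarantees the agreement here, to within quadrature accuracy.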
The effective fine-structure constant at squared momentum transfer q² is defined as α(q²) = α/(1 − Δα(q²)), with Δα(q²) = −Re Π̄(q²). The purely leptonic part, Δα_lep(q²), can be calculated order by order in perturbation theory; it is known up to three loops in QED and up to four loops in specific q² limits. As Im Π̄(q²) = 0 for negative q², eq. (3) can be rewritten in the form

  a_μ^HLO = (α/π) ∫₀¹ dx (1−x) Δα_had[t(x)].   (6)

We are going to proceed differently: we propose [9] to calculate eq. (6) by measurements of the effective electromagnetic coupling in the space-like region. The hadronic contribution to the running of α in the space-like region, Δα_had(t), can be extracted by comparing Bhabha scattering data with theory predictions. However, the Bhabha cross section receives contributions from both t- and s-channel photon exchange amplitudes. Still within the space-like approach, an alternative possibility has been investigated [10]: it consists in using a fixed-target µe → µe scattering process and analysing the elastic muon distribution in the forward region.
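For orientation on the magnitudes involved, the one-loop leptonic running has a well-known closed form in the large-|q²| limit, Δα_lep(q²) ≈ (α/3π) Σ_l [ln(|q²|/m_l²) − 5/3], valid for |q²| ≫ m_l². A minimal sketch:

```python
# One-loop leptonic running of alpha in the large-|q^2| (space-like) limit:
# Delta_alpha_lep ~ (alpha/3pi) * sum_l [ln(|q^2|/m_l^2) - 5/3].
# Valid only for |q^2| >> m_l^2, so the tau term is marginal at low scales.
import math

alpha = 1/137.035999
m_lep = {"e": 0.000511, "mu": 0.1056584, "tau": 1.77686}  # lepton masses, GeV

def delta_alpha_lep(q2_abs):
    """Leading one-loop leptonic contribution at space-like q^2 (|q^2| in GeV^2)."""
    return alpha/(3*math.pi)*sum(math.log(q2_abs/m**2) - 5/3
                                 for m in m_lep.values())

# at |q^2| = M_Z^2 this reproduces the familiar ~0.0314
print(delta_alpha_lep(91.1876**2))
```

The per-cent-level size of Δα_lep at high scales sets the context for the much smaller hadronic effect targeted here.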
A new experiment has to be devised to measure the running of the fine-structure constant in the space-like region by scattering high-energy muons on atomic electrons of a low-Z target through the process µe → µe. The differential cross section of this process, measured as a function of the squared momentum transfer t = q² < 0, provides direct sensitivity to the leading-order hadronic contribution to the muon anomaly, a_μ^HLO. By using a 150 GeV muon beam with an average rate of ∼1.3 × 10⁷ muons/s, currently available at the CERN North Area, a statistical uncertainty of ∼0.3% can be achieved on a_μ^HLO after two years of data taking. Like the Bhabha process, this direct measurement of a_μ^HLO will provide an independent determination, potentially competitive with the time-like dispersive approach, consolidating the theoretical prediction for the muon g-2 in the Standard Model with a firmer interpretation of the measurements of the future muon g-2 experiments. At this workshop, the talks of Umberto Marconi, Marina Marinkovic, Pierpaolo Mastrolia and Fulvio Piccinini will examine various implications of this idea. A series of preliminary considerations on the detector have been discussed [10]. We consider a possible setup to measure the following observables:
• direction and momentum of the incident muon;
• directions of the outgoing electron and muon.
The CERN muon beam M2, used at 150 GeV, has the characteristics needed for such a measurement. The beam intensity appears to be adequate to provide the required event yield. The beam time structure makes it possible to tag the incident muon while keeping the background related to incoming particles (e.g. electrons) low. The electron contamination is very small. The beam provides both positive and negative muons, which we plan to use.
The target consists of atomic electrons. To reach the required statistics, the target must provide an adequate amount of material, giving a sufficient number of electron scattering centres. It has to be made of a low-Z material to minimize the impact of multiple scattering and the background due to bremsstrahlung and pair-production processes. A promising idea for the detector is to use 20 layers of Be (or C) coupled to Si planes, separated by air gaps, at a relative distance of one meter from each other. This arrangement provides both a distributed low-Z target and the tracking system. As downstream particle identifiers we plan to use a calorimeter for the electrons and a muon system for the muons (a filter plus active planes). This particle-identification system is required to solve the muon-electron ambiguity for electron scattering angles around (2-3) mrad. Preliminary studies of such an apparatus, performed by using GEANT4, indicate an angular resolution for the outgoing particles of ∼0.02 mrad.
The detector acceptance must cover the region of the signal, with the electron emitted at extremely forward angles and high energies, as well as the normalization region, where the electron has much lower energy (around 1 GeV) and an emission angle of some tens of mrad. The boosted kinematics of the process allows the detector to cover almost 100% of the acceptance.
The incoming muons have to be tagged and their direction and momentum precisely measured. The angles of the scattered electron and muon are correlated [10]. This constraint is extremely important to select elastic scattering events, rejecting background events from radiative or inelastic processes, and to minimize systematic effects in the determination of t. Note that for scattering angles of (2-3) mrad there can be an ambiguity between the outgoing electron and muon, as their angles and momenta are similar. To associate them correctly it is necessary to identify the two particles by means of dedicated downstream detectors (calorimeter and muon detectors). In order to perform the planned measurement to the required precision, in addition to a dedicated detector an extremely accurate evaluation of the QED scattering amplitude is necessary. A series of systematic uncertainties has to be taken into account.
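The angular correlation, and the ambiguity region quoted above, follow from elastic two-body kinematics alone. The sketch below (150 GeV beam on an electron at rest, standard relativistic kinematics; the scan grid is an arbitrary choice) computes both scattering angles as a function of the outgoing electron energy and locates the point where they coincide:

```python
# Elastic mu-e kinematics for a 150 GeV muon on an electron at rest:
# scan the electron energy, compute both scattering angles from energy and
# transverse-momentum conservation, and find where theta_e ~ theta_mu.
import numpy as np

E, me, m_mu = 150.0, 0.000511, 0.1056584   # beam energy and masses, GeV
p = np.sqrt(E**2 - m_mu**2)                # beam momentum

def angles(Ee):
    """Electron and muon scattering angles (rad) for electron energy Ee."""
    pe = np.sqrt(Ee**2 - me**2)
    cos_e = np.clip((E + me)*(Ee - me)/(p*pe), -1.0, 1.0)
    th_e = np.arccos(cos_e)
    E_mu = E + me - Ee                       # energy conservation
    p_mu = np.sqrt(E_mu**2 - m_mu**2)
    th_mu = np.arcsin(pe*np.sin(th_e)/p_mu)  # transverse-momentum balance
    return th_e, th_mu

Ee_grid = np.linspace(1.0, 139.0, 100000)    # scan grid (arbitrary)
th_e, th_mu = angles(Ee_grid)
i = np.argmin(np.abs(th_e - th_mu))
print(Ee_grid[i], th_e[i]*1e3)               # crossing angle in mrad
```

The crossing lands in the few-mrad region where electron and muon carry similar energies, which is exactly where the particle-identification detectors are needed.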
Significant contributions of the hadronic vacuum polarization to the µe → µe differential cross section are essentially restricted to electron scattering angles below 10 mrad, corresponding to electron energies above 10 GeV. The net effect of these contributions is to increase the cross section by a few per mille: a precise determination of a_μ^HLO requires not only high statistics but also high systematic accuracy, as the final goal of the experiment is equivalent to a determination of the differential cross section with a ∼10 ppm systematic uncertainty at the peak of the integrand function.
Such an accuracy can be achieved if the efficiency is kept highly uniform over the entire q² range, including the normalization region, and over all the detector components. This motivates the choice of a purely angular measurement: an acceptance of tens of mrad can be covered with a single sensor of modern silicon detectors, positioned at a distance of about one meter from the target. It has to be stressed that particle identification (electromagnetic calorimeter and muon filter) is necessary to solve the electron-muon ambiguity in the region below 5 mrad. The wrong-assignment probability can be measured with the data by using the rate of muon-muon and electron-electron events.
Another requirement for reaching very high accuracy is to measure all the relevant contributions to systematic uncertainties from the data themselves. An important effect, which distinguishes the normalization from the signal region, is multiple scattering, as the electron energy in this region is as low as 1 GeV. In addition, multiple scattering in general causes acoplanarity, while two-body events are planar, within resolution. These facts allow multiple scattering effects to be modelled and measured by using data.
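The size of the effect can be estimated with the standard Highland parametrization of the RMS multiple-scattering angle, θ₀ = (13.6 MeV / βcp) √(x/X₀) [1 + 0.038 ln(x/X₀)]. In the sketch below the 1.5 cm Be layer thickness is an assumption for illustration (X₀ ≈ 35.3 cm for Be):

```python
# Highland formula for the RMS projected multiple-scattering angle of a
# singly charged particle traversing x/X0 radiation lengths of material.
import math

def theta0(p_GeV, x_over_X0, beta=1.0):
    """RMS projected scattering angle in rad (Highland parametrization)."""
    return (0.0136/(beta*p_GeV))*math.sqrt(x_over_X0) \
           * (1 + 0.038*math.log(x_over_X0))

x_over_X0 = 1.5/35.3                 # 1.5 cm of Be (thickness is an assumption)
print(theta0(1.0, x_over_X0)*1e3)    # 1 GeV electron, normalization region (mrad)
print(theta0(75.0, x_over_X0)*1e3)   # high-energy particle, signal region (mrad)
```

A 1 GeV electron is smeared at the mrad level per layer, far above the ∼0.02 mrad tracking resolution, while high-energy particles are barely affected; this is why the normalization region is dominated by multiple scattering and must be modelled on the data.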
In experiments dedicated to high-precision measurements, several systematic effects can be explored within the experiment itself. In this respect the proposed modularity of the apparatus will help. A test with a single module could provide a proof-of-concept of the proposed methods.
From the theoretical point of view, the control of the systematic uncertainties requires the development of high-precision Monte Carlo tools, including the relevant radiative corrections needed to reach the required theoretical precision. To this aim, QED radiative corrections at leading-logarithmic level, resummed at all orders of perturbation theory and matched to the exact O(α) correction, are mandatory in order to reach a theoretical precision at the level of O(10⁻⁵) on the differential cross section. Moreover, by using the ratio of the cross sections in the signal and normalization regions, we expect that the theoretical uncertainties will be further reduced to the level of O(10⁻⁵), due to the partial cancellation of common radiative corrections. Work is in progress to extend these tools to µe → µe scattering and to quantify the actual accuracy of the computation of the ratio of cross sections by means of dedicated Monte Carlo simulations. Any further improvement in the theoretical accuracy would require the matching of the QED leading-logarithmic resummation with the exact NNLO (two-loop) corrections, which are not yet available for the µe → µe process but are within reach.
All these requirements set an unprecedented standard of accuracy both theoretically and experimentally.
To my knowledge, the closest example of a comparable accuracy for a QED process with a multi-particle final state is the evaluation of the Bhabha cross section in the small-angle limit for the determination of the LEP luminosity. I would like to recall my personal experience. Together with Nikolay Merenkov, Victor Fadin, Eduard Kuraev and Lev Lipatov (Andrei Arbuzov joined us later), in 1992 we started working to reach the highest possible precision in the evaluation of the Bhabha cross section in the small-angle limit. In a series of papers [11] we developed a comprehensive method to systematically take into account the various contributions needed to reach the aimed-for precision, unprecedented in those days. In the end, the precision reached was estimated to be of O(10⁻³) [12]. The work on the Bhabha process continued in the following years, and a number of further contributions were calculated and added [13].
In order to reach a more accurate determination of fundamental parameters such as α_QED and a_μ^HLO, experimental and theoretical improvements, tests and calculations have to be worked out. Even if the goal may look challenging and difficult, the history of past experiences encourages us not to give up and to go forward.

Acknowledgments
I would like to thank the organizers for inviting me to this lively workshop.