Scientific and personal recollections of Roberto Petronzio

This paper aims to recall some of the main contributions of Roberto Petronzio to physics, with particular regard to the period when we worked together. His seminal contributions cover an extremely wide range of topics: the foundation of the perturbative approach to QCD, various aspects of weak interaction theory, from basic questions (e.g. the mass of the Higgs) to weak interactions on the lattice, and lattice QCD from its beginnings to the most recent computations.


Introduction
Roberto Petronzio was born in 1949. He graduated under Nicola Cabibbo in 1972 at the University of Rome La Sapienza, where he was a researcher up to 1979. He then went to CERN as a staff member, where he remained from 1979 to 1986.
In 1987 he came back to Rome from CERN and became a full professor at the newly founded University of Tor Vergata. He agreed to take on responsibilities in the direction of this university and for many years he served as vice-rector.
From 2004 to 2011 he was president of the INFN, the Italian Institute of Nuclear Physics. This was a very demanding position: he was forced to strongly reduce his personal scientific activity, although from time to time he still succeeded in working on physics problems. He was very successful in this difficult enterprise. He managed to lead the institute through a very turbulent period, maintaining independence from political power, with the same calm and determination with which he would safely steer a sailboat through a stormy sea. The Italian scientific community is very grateful to him for having devoted so much of his time to the collective interest.
In 2011 he became president of the "Cabibbo Lab", a consortium that was supposed to construct a B-factory (the super-B project); this project was later canceled by the Italian government.
Unfortunately, in 2014 he suffered a dramatic health accident, and he died in July 2016. I will try to divide his works into different categories that partially overlap, both chronologically and scientifically.

The Roman period
Roberto's first paper was The nucleon as a bound state of three quarks and deep inelastic phenomena. It appeared in August 1973 [10]. It was based on the very nice idea of describing the quark wave functions inside the nucleon in the p = ∞ frame using information coming from internal symmetries like SU(6). The results of this paper were later extended in [11,12] in order to get predictions for other processes, like neutrino scattering and lepton production in proton-proton collisions.
It was a very interesting paper for the following reasons:
• Good models for the parton distributions were quite rare at that time. The paper describes the first realistic model valid not only for the quark structure functions, but also for the gluonic structure function. The gluonic structure function would later be crucial for computing scaling violations via the process of gluon fragmentation into quarks.
• The model incorporates the knowledge people had at that time on symmetries, not only SU(3), but also SU(6)W.
• It stresses the importance of the p = ∞ frame, which would play a very important role in understanding scaling violations in the framework of the extended parton model during subsequent years.
The paper assumed that the physical octet of baryons is a combination of octets belonging to the following representations of SU(6)W: the 56, l = 0 and the 70, l = 1. Let us consider the case of a nucleon with spin component Jz = 1/2: this baryon should be a linear combination of three such states. If only the 56 representation were present, one would obtain a bound that is violated by the experimental data, hence the need of introducing the mixing with the 70 representation. In the end it was possible to obtain a quite accurate description of the structure functions in terms of only a few parameters.
Our first paper together [13] was quite unfortunate: Is the 3104 MeV vector meson the ψc or the W0? It was signed by G. Altarelli, N. Cabibbo, R. Petronzio, L. Maiani and G. Parisi (the only paper written by all these authors together). The paper presented a nice phenomenological analysis, but at the end we concluded that the 3104 MeV vector meson was the weak interaction meson W0, an answer that is factually wrong, in spite of the elegant arguments in the paper and of the quality of the authors.
Figure 1. The logarithmic q² derivative of the structure function at fixed x, compared with the experimental data. Curve II is obtained retaining only the octet operators in the operator expansion (taken from [14]).
Our collaboration went on, producing more interesting results. During the Roman period maybe our best paper together was On the breaking of Bjorken scaling [14]. This paper contains the first computation of scaling violations in QCD taking care of the presence of gluons, see fig. (1). The paper was built on Roberto's great experience with parton wave functions inside the nucleon, especially with the gluonic contribution, which was an essential component for obtaining agreement with the experimental results at small x: in this region gluon fragmentation into quarks is the dominant process. It is remarkable that the computation was done in 1976, before the AP (Altarelli-Parisi) evolution equations [15].
Roberto Petronzio continued to work on the problem of scaling violations in deep inelastic scattering. Two years later he wrote with Nicola Cabibbo The Two-stage model of hadron structure: Parton distributions and their Q 2 dependence [16], where a similar but more accurate analysis was done, now using the AP equations.

At CERN
Roberto went to CERN in 1977 and spent ten years there. Most of his works of the first years at CERN were on QCD and weak interactions. At that time the theoretical panorama of QCD was rapidly changing. The AP equations emancipated the study of partons from the need of considering the rather complex light cone expansion [17] that had played a crucial role in the initial period. The AP equations describe the evolution of the effective parton distributions inside a hadron. These distributions are universal: they are the same in all the processes involving the same hadrons. However, at that time it was realized that finite perturbative corrections proportional to the running coupling constant α(q²) are present, and they are process-dependent.
This global picture was easy to conjecture, but not easy to prove [18]. The proof finally came from a seminal work with deep theoretical consequences: Relating hard QCD processes through the universality of mass singularities [19]. It was the badly needed proof that the new approach was working: the key that opened the door to all the new developments.
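A flavor of how the AP evolution works can be given in moment space, where the leading-order non-singlet equation diagonalizes: each moment M_n(Q²) of the quark distribution evolves independently, driven by an anomalous dimension γ_n. The toy numerical integration below is only an illustration, not a reproduction of any computation in the papers discussed here; the normalization convention, the value of Λ and the number of flavors are assumptions. Note that γ_1 = 0 expresses valence quark number conservation:

```python
import math

CF = 4.0 / 3.0                 # quark colour factor
NF = 4                         # number of active flavours (assumption)
B0 = 11.0 - 2.0 * NF / 3.0     # one-loop beta-function coefficient

def alpha_s(q2, lam2=0.04):
    """One-loop running coupling; Lambda^2 = 0.04 GeV^2 is an assumption."""
    return 4.0 * math.pi / (B0 * math.log(q2 / lam2))

def gamma_ns(n):
    """LO non-singlet anomalous dimension of the n-th moment (one common
    normalization); gamma_ns(1) = 0 expresses quark-number conservation."""
    harmonic = sum(1.0 / j for j in range(1, n + 1))
    return CF * (3.0 + 2.0 / (n * (n + 1)) - 4.0 * harmonic)

def evolve_moment(m0, n, q2_lo, q2_hi, steps=1000):
    """Euler integration of dM_n/d ln Q^2 = alpha_s/(4 pi) * gamma_n * M_n."""
    m, t = m0, math.log(q2_lo)
    dt = (math.log(q2_hi) - math.log(q2_lo)) / steps
    for _ in range(steps):
        m += dt * alpha_s(math.exp(t)) / (4.0 * math.pi) * gamma_ns(n) * m
        t += dt
    return m

print(evolve_moment(1.0, 1, 4.0, 100.0))  # n = 1 is conserved: stays 1
print(evolve_moment(1.0, 2, 4.0, 100.0))  # n >= 2 decreases logarithmically
```

The slow, logarithmic shrinking of the higher moments with Q² is exactly the pattern of scaling violations discussed above.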
The point-like nature of QCD implied the existence of jets and of power tails in the transverse momentum distributions. However, at the time of that paper, the energy of the colliding particles was not high enough to see the jets clearly in the final states, also because the quark energy is partitioned among many hadrons via the processes of jet fragmentation and quark recombination. On the other hand, in the so-called Drell-Yan process, i.e. hadron + hadron → μ⁺ μ⁻ + anything,
the transverse momentum of the μ⁺ μ⁻ pair is equal to the transverse momentum of the quark-antiquark pair that produces the virtual photon: this process allows us to measure the transverse momentum spread of the quarks inside the proton. These considerations explain why hard scattering QCD contributions were first computed for the Drell-Yan process: the predictions were quite neat, without having to discuss the process of quark fragmentation into hadrons. Two crucial seminal contributions were given by Roberto in 1978 with the papers Transverse momentum of muon pairs produced in hadronic collisions [20] and Transverse momentum in Drell-Yan processes [21]. A careful job was done in studying the increase of the average transverse momentum squared (p²T) as a function of Q² and of the various physical parameters: some of the results are shown in fig. (2). A problem that we had to face was the separation of two contributions: the one coming from the intrinsic spread of the quark wave function inside the nucleon and the one coming from hard processes.
Roberto was very interested in the resummation of leading logs in special processes, a problem that had been studied in the case of QED, but not for QCD. The first paper on this subject is Heavy flavor multiplicities at very high energies [22]. New techniques had to be invented in order to circumvent new difficulties. The authors found the surprising result that the multiplicities increase faster than any power of the logarithm of the energy scale. This result was found quite puzzling by the authors themselves, and this reaction was natural: at that time a simple logarithmic increase of multiplicities was supposed to be experimentally established.
Nowadays we know experimentally that the multiplicities increase much faster than a logarithm of the energy, and the result is much less puzzling. A paper that had a long influence was Small transverse momentum distributions in hard processes by Roberto and myself [23]. We wanted to find the small transverse momentum behavior of the distribution of hard-produced muon pairs as an effect of multiple gluon production. In the computations done in [20,21] an intrinsic momentum distribution was needed to avoid the singularities at pT = 0 that are present at first order in perturbation theory. From the physical viewpoint it was clear that multiple gluon production should produce a regularization effect at small momentum; however, the detailed consequences of this phenomenon were not clear.
Many ingredients entered the cocktail [23]:
• The leading logs approximation for multiple soft gluon bremsstrahlung.
• The exponential damping of the elastic form factors.
• The different behavior of the cross sections in momentum and in impact parameter space.
One of the conclusions of that paper (which I still find surprising) is that the peak at pT = 0 flattens with a width proportional to Q^{2γ}, with γ = (16/25) ln(66/41) ≈ 0.305. The presence of a simple non-integer power of Q² is quite astonishing in a world dominated by logarithmic corrections.
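The quoted value of the anomalous power is elementary to check:

```python
import math

# The anomalous power quoted in the text for the small-p_T peak:
gamma = (16.0 / 25.0) * math.log(66.0 / 41.0)
print(round(gamma, 3))  # 0.305
```

A genuine power of Q², rather than a power of ln Q², is what makes the resummed result stand out.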
Another paper that had a long-lasting and even broader influence was Singlet parton densities beyond leading order [24]. This was the manifesto for next-to-leading order computations in QCD. The difficulties tackled in this paper were not only related to doing the detailed computations, which were highly nontrivial; the main point was to prove for the first time that those computations were possible. The technical tool used was based on the explicit study of the factorization properties of mass singularities that was already present in [19]. The conclusion of the paper was that "within our scheme the predictions for a particular process are obtained by convoluting a universal parton density with a short-distance cross-section specific to the process." It was the triumph of the marriage of the parton model with QCD.
These results were extended to the non-singlet case in the papers Evolution of parton densities beyond leading order: The non-singlet case [25] and Lepton-hadron processes beyond leading order in quantum chromodynamics [26]. The techniques introduced in these papers allowed the computation of next-to-leading order results in other processes, as was done in the paper Lepton pair production at large transverse momentum in second order QCD [27]. This is a very remarkable paper, because it contains the first evaluation of next-to-leading order effects in QCD for the pT distribution in the Drell-Yan process. The computation was quite involved because the authors had to compute the α²s corrections to a process of order αs. You can see from fig. (3) the importance of adding next-to-leading effects in order to reach a good agreement with the experimental data.
Figure 3. Experimental data for the Drell-Yan process in π⁻-nucleon collisions (i.e. 1/(qT) dσπ⁻N/(dQ dqT)) versus qT at s = 282 GeV², Q² = 52.2 GeV², Q² being the squared invariant mass of the muon pair. The data are compared with the theoretical predictions: the upper full line represents the next-to-leading estimate while the lower full line represents the lower order estimate (taken from [27]).
Roberto continued to give seminal contributions to QCD with Power corrections to the parton model in QCD [28] and Unravelling higher twists [29]. A paper that was quite ahead of its time was Momentum distribution of J/Ψ in the presence of a quark-gluon plasma [30]. The subject of this paper is quite different from the previous ones. In heavy nuclei collisions at very high energies we could have the formation of a new phase of matter, the quark-gluon plasma: there were many theoretical arguments pointing in that direction. However, it was not clear what a good experimental signature for this phenomenon would be. In this paper, the authors presented for the first time the very interesting suggestion that the momentum distribution of the produced J/Ψ particles should be strongly affected by the phase transition to this new state of matter.

Weak interactions and supersymmetry
Roberto was always very interested in weak interactions. Two remarkable papers were written at the end of the seventies: Bounds on the Number and Masses of Quarks and Leptons [31] by Maiani, myself and Roberto, and Bounds on the Fermions and Higgs Boson masses in grand unified theories [32] by Cabibbo, Maiani, myself and Roberto. Luciano Maiani remembers [33] that the three of us had many discussions (some of them via handwritten mail) on the subject of the first paper, but we were far from a conclusion. Once he went to CERN to discuss the matter with Roberto: Roberto put all the results we had on the table and argued that the physical picture was quite clear if we combined all the information we had: after a few hours of discussion the skeleton of the draft of the paper was written.
The conclusion of the two papers was: "In the framework of grand unifying theories, the requirement that no interaction becomes strong and no vacuum instability develops up to the unification energy is shown to imply upper bounds to the Fermion masses as well as upper and lower bounds to the Higgs boson mass." In the same way, a bound on the value of the top mass was obtained. The original bounds are shown in fig. (4) as a function of the top mass: the top mass was not known at that time, and in the paper we derived an upper bound of about 220 GeV. Using the actual value of the top mass (i.e. 172 GeV), the bounds on the Higgs mass were quite sharp.
These bounds were quite good: they were based on the leading-order evolution of the coupling constants; more precise and accurate bounds have since been found by refining the computation to include higher-order terms. It is a very interesting fact, whose significance is not clear, that the experimental value of the Higgs mass is quite near to the lower bound.
Many years later Roberto came back to the study of weak interactions in the continuum, when he started to be strongly interested in the fascinating problem of the supersymmetric extensions of the standard model.
A remarkable paper is Flavour changing top decays in supersymmetric extensions of the standard model [34]. In this paper, it was noticed that the flavor-changing top decays top → charm + Z0, top → charm + g and top → charm + γ are predicted with invisibly small rates within the standard model and may therefore represent a window on new physics. These processes were considered in supersymmetric extensions of the standard model: the authors showed that observable rates can be obtained only if the SUSY breaking is non-universal and flavor dependent.
Another very interesting paper, from a few years later, is Probing new physics through µ−e universality in K → l + ν [35] (2006). In this paper, it was shown that supersymmetric (SUSY) extensions of the standard model can exhibit µ−e non-universal contributions. Such universality tests are quite effective in constraining relevant regions of SUSY models with lepton flavor violating currents. This work was done when Roberto was president of the INFN, and he had to resort to some tricks (e.g. hiding with Masiero in a utility room) to isolate himself from the tasks related to his office [36].

Lattice QCD, the exploratory age.
At the beginning of the eighties, Roberto's central interests already started to move toward lattice theories and lattice QCD. The subject was completely new and there was the need to understand what the possible artifacts of lattice computations were. The simplest case was the two-dimensional O(3) spin model. The physics of the model was very clear (a ferromagnetic transition avoided as an effect of the impossibility of having a Goldstone mode in two dimensions). Moreover, the theory was asymptotically free (like QCD) and topological effects, like instantons, were present in this case too. The simplicity of the theory allowed many detailed computations. Roberto, Martinelli and I started to perform Monte Carlo simulations for this theory. In our first paper, Monte Carlo simulations for the two-dimensional O(3) nonlinear sigma model [37], we tried to study for the first time the behavior of the magnetic susceptibility χ(β) at high β (low temperature). We knew from analytic computations the asymptotic behavior of χ(β) at large β; we wanted to understand how fast this limit was reached, and we were not happy because the approach was quite slow. We used lattices with L² points, with L in the range from 30 to 80, and the results are shown in fig. (5). It was clear that we needed a much larger lattice in order to be near the asymptotic limit. The conclusions were quite scary: if the same phenomenon were present in QCD, the whole field would have been set back for a few decades: four-dimensional lattices with L = 80 are at the boundary of present-day technology.
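A minimal version of such a simulation is easy to write today. The sketch below is only a toy: a tiny 8×8 lattice with the standard action, far from the 30-80 lattices of [37]; the lattice size, β values and statistics are illustrative choices:

```python
import numpy as np

def simulate_chi(beta, L=8, sweeps=300, therm=100, seed=0):
    """Metropolis simulation of the 2d O(3) nonlinear sigma model with the
    standard action S = -beta * sum_<xy> s_x . s_y, measuring the magnetic
    susceptibility chi = <(sum_x s_x)^2> / V."""
    rng = np.random.default_rng(seed)
    spins = np.zeros((L, L, 3))
    spins[..., 0] = 1.0                       # cold start
    chi_sum, n_meas = 0.0, 0
    for sweep in range(therm + sweeps):
        for x in range(L):
            for y in range(L):
                nb = (spins[(x + 1) % L, y] + spins[(x - 1) % L, y]
                      + spins[x, (y + 1) % L] + spins[x, (y - 1) % L])
                new = rng.normal(size=3)
                new /= np.linalg.norm(new)    # uniform proposal on the sphere
                dS = -beta * np.dot(new - spins[x, y], nb)
                if dS <= 0 or rng.random() < np.exp(-dS):
                    spins[x, y] = new
        if sweep >= therm:
            m = spins.sum(axis=(0, 1))
            chi_sum += np.dot(m, m) / (L * L)
            n_meas += 1
    return chi_sum / n_meas

chi_hot = simulate_chi(0.1)    # high temperature: nearly independent spins
chi_cold = simulate_chi(1.2)   # lower temperature: correlations build up
print(chi_hot, chi_cold)       # chi grows markedly as beta increases
```

On such a small lattice one sees only the rapid growth of χ with β; the whole difficulty described in [37] is that the asymptotic large-β behavior sets in only on lattices much larger than the correlation length.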
In order to decrease lattice effects for this model, we found the form of the improved lattice action in which O(a²) corrections were absent: this was done in Improving the lattice action near the continuum limit [38]. The computation also took care of one-loop corrections that had to be evaluated for the lattice theory. In some sense, we computed the difference between the one-loop results in the continuum and the one-loop results on the lattice: at the end of the day, we added counter-terms in order to compensate for the difference between these two computations. This was the first of a huge family of improved actions that have been widely used in QCD and in weak interactions on the lattice.
Another remarkable paper of that time was Topological charge on the lattice: The O(3) case [39]. In this paper, the authors presented the first definition of the topological charge for the two-dimensional O(3) spin model which had the property of being insensitive to small instantons. The instanton density was computed and compared with the analytic results.
However, most of the fun was with lattice QCD. It was a new world that we started to explore with excitement. All the low-energy strong interaction parameters were computable. This was a complete change from the previous situation, where only phenomenological, mostly hand-waving, arguments could be used. Of course we knew that the measurements were affected by strong systematic effects (we started our computations with a 5³ × 10 lattice); however, it was rather surprising to see that all quantities, one after the other, were in qualitative agreement with the experimental data.
There are so many papers from that period that I will just briefly recall them. The collaboration was fluid and the author list often changed.
• We started our collaboration with the computation of the basic properties of hadrons in the quenched approximation in Hadron spectroscopy in lattice QCD [40], where the statistical and systematic errors were strongly reduced with respect to the previous papers.
• We computed the proton and neutron magnetic moments in lattice QCD [41] by measuring the mass splitting in the presence of a magnetic field: in this case we found for the gyromagnetic factor of the proton the value gP = 3.0 ± 0.6, versus an experimental value of 2.79, and for the ratio of the gyromagnetic factors of the proton and of the neutron gP/gN = −1.60 ± 0.15, versus an experimental value of −1.46.
• We computed the strange hadron masses [42], in particular the Λ − Σ0 splitting. Here the result was not too satisfactory: the sign was the correct one, but the absolute value was quite small. We argued that this was an example of a general phenomenon: all the mass splittings due to spin-spin interactions came out quite small. We obtained a reasonable value for the ratio, to be compared with the experimental value of 0.26.
• In Boundary effects and hadron masses in lattice QCD [43] we identified a relevant contribution to the large fluctuations of hadron masses present in lattice calculations with periodic boundary conditions. This contribution is due to unphysical quark paths which are absent in the infinite volume limit. We showed that these contributions can be eliminated by averaging over the possible rotations of the boundary links by the elements of the Z(3) subgroup. In this way, the "effective" volume for these paths is tripled.
A very remarkable paper was Hadron spectrum in quenched QCD on a 10³ × 20 lattice [44] by Lipps, Martinelli, Petronzio and Rapuano. It was real progress with respect to the previous analyses on smaller lattices (5³, 6³, 8³) and allowed us for the first time to investigate the systematic effects due to the finite lattice size. A subsequent paper was Kogut-Susskind and Wilson fermions in the quenched approximation: A Monte Carlo simulation [45], where the authors presented a systematic comparison of the results for both Kogut-Susskind and Wilson Fermions.
Roberto was also interested in analyzing the behavior of QCD without Fermions. Here the most relevant observable (beyond the glueball mass) is the string tension. However, its precise determination was quite difficult due to large statistical errors. In spite of these difficulties, in [46] we computed the string tension with good accuracy. The computation was made possible by a clever trick for noise reduction (i.e. multihit) that we introduced in that paper and that became a standard tool. The gain induced by the trick was a decrease of a factor 10 in the statistical error, corresponding to a gain of a factor 100 in time. Roberto continued to work on pure gauge QCD. He wrote a very nice paper, Gluon thermodynamics near the continuum limit [47], on the quark liberation phase transition (which corresponds to the formation of a quark-gluon plasma), a problem that, as we have already seen, he analyzed in a subsequent paper with the same coauthor [30].
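The statistical logic behind multihit can be illustrated with a toy model (only an analogy, not the actual gauge-theory implementation of [46]): since the observable depends linearly on each link, a fluctuating "link" variable can be replaced by its conditional average given its fixed neighbours, which leaves the expectation value unchanged while removing most of the local noise:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 10_000
z = rng.normal(0.0, 1.0, N)        # the "frozen neighbours" of each measurement
x = z + rng.normal(0.0, 3.0, N)    # a noisy "link" fluctuating around z

naive = x          # measure the fluctuating variable itself
multihit = z       # replace it by its conditional mean E[x | z]

# Both estimators have the same expectation value, but the second has a much
# smaller variance: a factor k in error corresponds to a factor k^2 in statistics.
print(naive.std(), multihit.std())
```

Since the statistical error scales as 1/√N, the factor-10 error reduction quoted above is indeed equivalent to a factor 100 in computer time.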
Roberto was also among the proponents of the first APE project. He contributed to the first two papers presenting the design of the first APE computer: The APE project: a computer for lattice QCD [48] and The APE project: a gigaflop parallel processor for lattice calculations [49]. Unfortunately, the collaboration with Roberto inside the APE project could not continue due to logistical problems. During the construction of the machine, the work was concentrated in Bologna (memory card), Pisa (controller and local network) and Rome (floating point unit and software), but not at CERN. He was strongly involved in using the subsequent APE machines, but this part of the story will be discussed later.

Weak and electromagnetic interactions on the lattice
In 1983 a new subject of investigation in lattice gauge theories was opened with the paper Weak interactions on the lattice [50]: the authors showed that lattice QCD can be used to evaluate the matrix elements of the four-Fermion operators which are relevant for weak decays. This was the starting point of the many computations of weak matrix elements that are very important in testing the standard model and in the eventual discovery of new physics.
In the paper we find many results in which the lattice determination is compared with a value (given in brackets) obtained from a theoretical approach based on the vacuum saturation of some QCD sum rules. The authors were happy because there was general agreement between their results and those coming from vacuum saturation of sum rules, which was the most established approach at that time. The method was giving very promising results: it was the starting point of many investigations. After this seminal work, Roberto returned to the study of weak interactions on the lattice only in the nineties. In this period, among his first papers on the subject, we find Dynamical flavor dependence of static heavy meson decay constants on the lattice [51]. This is the first paper of a series devoted to the computation of heavy meson decays: Heavy quark masses in the continuum limit of quenched lattice QCD [52] and Quenched lattice calculation of the B → D l ν decay rate [53]. His strong interest in heavy meson decays was clearly triggered by the need of firm theoretical predictions in order to put strong constraints on possible violations of the standard model predictions.
Roberto was always looking for very reliable predictions: he was always very concerned about systematic errors. If the predictions were affected by uncontrolled systematic errors, they were useless for discovering new physics. For these reasons, he wanted to close all possible loopholes. In this regard a very important paper is Nonperturbative renormalization constants on the lattice from flavor non-singlet Ward identities [54], where the authors obtained absolute predictions for the normalization of the currents on the lattice.
In more recent years, when the time was ripe, he started to be interested not only in weak interactions but also in full electromagnetic interactions, looking for the effects of this interaction on violations of isospin symmetry. In a first paper, Isospin breaking effects due to the up-down mass difference in Lattice QCD [55], the authors considered mainly the effects of the mass difference of the quarks.
As is well known, the difference in the masses of the quarks is not sufficient to explain all the isospin violations (e.g. the mass difference between the π+ and the π0). The effect of the real electromagnetic field has to be taken into account, and this was done in a later paper, Leading isospin breaking effects on the lattice [56]. The authors presented a method to evaluate on the lattice the leading isospin breaking effects due to both the small mass difference between the up and down quarks and the QED interaction. They treated the dynamical quarks as electrically neutral particles (the electroquenched approximation) and computed the charged-neutral pion mass splitting, neglecting only a disconnected diagram. The final results were in very good agreement with the experimental data, also for this problem that had rarely been investigated before.
Lattice QCD, the mature age.
Roberto started again to work on large-scale simulations of pure QCD in the nineties. The situation had changed since the first works in the field ten years earlier. Now the field was mature. Exploratory works had already been done: there was now the need to control well the sources of possible systematic errors. Moreover, given the panoply of different forms of the lattice action, it was crucial to be sure that they gave asymptotically consistent results and that there were no lattice artifacts we were not aware of. Indeed the motivations were not so different from those of the paper [54] that we have discussed above.
This motivation led to these two papers: Non-perturbative determination of the running coupling constant in quenched SU(2) (1993) [57] and Universality and the approach to the continuum limit in lattice gauge theory [58]. The result of this accurate and innovative analysis was that, using a finite-size renormalization group technique, it was possible to calculate the running coupling constant for quenched SU(2) with a few percent error over a range of energies varying by a factor of thirty. The authors used a definition based on the ratio of correlations of Polyakov loops with twisted boundary conditions. In the end, they found that the extrapolation to the continuum limit was governed by corrections due to lattice artifacts which appear to be rather smooth and proportional to the square of the lattice spacing.
If we compare fig. (6) with fig. (5), which was done for a much simpler model, we have a vivid graphic account of the progress made in a dozen years. Roberto continued to work on lattice QCD, in parallel with his works on weak interactions. A very interesting paper of 2004 is On the discretization of physical momenta in lattice QCD [59]. As is well known, the smallest non-zero physical momentum in a box of side L is 2π/L. The factor 2π (not a very small number) leads to a large value of the minimum non-zero momentum. This is a great nuisance when we are interested in computing the momentum dependence of some observable. This difficulty was partially removed in this paper, where the authors showed that the limitation represented by the finite volume momentum quantization rule can be overcome by using different boundary conditions for different Fermion species. The very interesting conclusion was that the proposed method can be applied to study all the quantities of phenomenological interest that would benefit from the introduction of continuous physical momenta, like, for example, weak matrix elements.
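The idea can be illustrated in one dimension: a twisting phase θ in the fermion boundary condition shifts the whole tower of allowed momenta continuously (the numerical values of L and θ below are illustrative):

```python
import math

def allowed_momenta(L, theta=0.0, n_max=3):
    """Momenta allowed in a periodic box of side L (lattice units) when the
    fermion field obeys psi(x + L) = exp(i*theta) * psi(x):
    p = (2*pi*n + theta) / L, for integer n."""
    return [(2.0 * math.pi * n + theta) / L for n in range(-n_max, n_max + 1)]

L = 24
print(allowed_momenta(L))             # theta = 0: multiples of 2*pi/L ~ 0.26
print(allowed_momenta(L, theta=0.3))  # the whole tower is shifted by theta/L
```

By choosing θ freely, and differently for different fermion species as in [59], one can inject momenta much smaller than 2π/L into a correlation function.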
In spite of his heavy duties as president of the INFN, he continued to work on lattice QCD. A very remarkable paper of 2007 is QCD with light Wilson quarks on fine lattices: first experiences and physics results [60]. In this line of work, the universality of the continuum limit and the applicability of renormalized perturbation theory were tested in the SU(2) lattice gauge theory by computing two different non-perturbatively defined running couplings over a large range of energies [61]. The lattice data (which were generated on the powerful APE computers at Rome II and DESY) were extrapolated to the continuum limit by simulating sequences of lattices with decreasing spacings. The results confirmed the expected universality at all energies to a precision of a few percent. The authors found, however, that perturbation theory must be used with care when matching different renormalized couplings at high energies.

Conclusions
Roberto was a very talented physicist with a very strong physical intuition; he was also a dedicated hard worker (he published about 200 papers). He was ready to work on any problem that he found interesting. He did not give much weight to the possible fame to be gained by solving a problem: he was driven by his immense curiosity and by the joy that one has in improving one's understanding. It was the same attitude that led our common mentor, Nicola Cabibbo, to often repeat: Why should we work on this problem if we do not have fun?
Roberto was a very charismatic person and a great leader. He was also a highly appreciated science manager (often the two qualities do not go together). He had an exceptionally deep understanding of many fields of physics, ranging from high energy to statistical mechanics. Roberto was a great teacher: he was able to communicate his knowledge with great enthusiasm to young students and collaborators, on whom he had a lasting influence [62].
The quality of his human relations with all the people he was in contact with was very important for him: he paid a lot of attention to being kind to all the people he met and to not disappointing their expectations (not an easy job when you are at the head of a research institute where a few thousand people work). He had a sincere interest in other people's difficulties and he usually did all he could to help them, quite often with success.
Everybody who met him will remember his smile and his positive attitude toward life. We will dearly miss him.