The need for a multicomponent UHECR Observatory

In the past fifteen years, UHECR hybrid detection systems, combining fluorescence telescopes with very large ground arrays (over 10³ km²), have provided data sets of unprecedented statistics and quality. However, the paucity of events above the GZK cut-off, combined with the limited duty cycle of the fluorescence detectors, calls for further progress in detection techniques and for a larger aperture. Above 50 EeV the current world data sets amount to about 200 events, of which only a handful have been observed in hybrid mode. Hence, while spectral and anisotropy studies can be performed, albeit with limitations, a proper identification of the primary particle is out of reach. I argue here that the next-generation observatory should reach an aperture of several 10⁴ km², using detectors able to measure both the muonic and the electromagnetic component of each individual extensive air shower, in order to provide the information needed to pin down the nature of the primary cosmic rays and possibly point back at their sources.


INTRODUCTION
The origin of mass and the existence of the Higgs boson, the symmetries of Nature and the reality of SUSY, the unification of forces and the quantization of gravity, the origin and fate of the Universe, and the nature of dark energy and dark matter are some of today's fundamental physics questions that are best studied at very high energies. To approach these critical energy regions, we either build very large accelerators such as the LHC, or we look into the sky with ever larger telescopes to capture the traces of such phenomena. The observation and detailed study of UHECRs are part of this quest for the highest energies. Although far from being as controlled and precise as laboratory experiments, cosmic ray observations, which in the past have provided fundamental data on particles and their interactions, today represent the unique opportunity to access center-of-mass energies up to 450 TeV.
At the highest energies, above 10¹⁹ eV (10 EeV, or 1.6 joules), cosmic rays are scarce (less than one per km² per year) and essentially unidentified. This situation dramatically limits the contribution of UHECR physics to the above-mentioned problems. However, if one could access the nature of the primary particle and the details of the cascade evolution and content, much information would be gained on the nature of hadronic interactions above 100 TeV in the center of mass and on the sources of such energetic particles. Aiming at excellent primary cosmic ray identification, multi-parametric measurements of Extensive Air Showers (EAS) have the potential of measuring hadronic cross sections from 100 TeV up to 450 TeV in the center of mass, constraining interaction models, detecting or setting limits on UHE neutrino and gamma-ray fluxes, identifying UHECR sources, and constraining Galactic and intergalactic magnetic fields.
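The center-of-mass reach quoted here can be cross-checked with the fixed-target relation √s ≈ √(2 E m_p c²), neglecting the masses relative to E. A minimal sketch (the function name is my own, for illustration):

```python
import math

M_P = 0.938272  # proton mass, GeV/c^2

def sqrt_s_tev(e_ev):
    """Center-of-mass energy (TeV) for a cosmic-ray proton of lab energy
    e_ev (eV) hitting a nucleon at rest; masses neglected relative to E."""
    e_gev = e_ev / 1e9
    return math.sqrt(2.0 * e_gev * M_P) / 1e3  # GeV -> TeV

print(f"E = 1e20 eV -> sqrt(s) ~ {sqrt_s_tev(1e20):.0f} TeV")  # ~433 TeV
print(f"E = 1e17 eV -> sqrt(s) ~ {sqrt_s_tev(1e17):.1f} TeV")  # ~13.7 TeV, LHC scale
```

A 10²⁰ eV proton thus probes roughly 30 times the LHC center-of-mass energy.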

EPJ Web of Conferences
Such a collection of data regarding particle interactions and cascade developments at the highest energies and regarding the nature and content of our Universe will certainly play an invaluable role in our understanding of fundamental physics.

WHERE DO WE STAND?
Soon after the discovery of the Cosmic Microwave Background (CMB) by Penzias and Wilson in 1964 [1], Greisen, Zatsepin and Kuz'min (GZK) predicted that the cosmic ray energy spectrum should cut off at about 50 EeV due to the photo-production of pions by cosmic ray protons on the microwave photon background [2]. With the continuous observation of trans-GZK events between 1962 and the end of the 1990s, the existence of such a cut-off remained unconfirmed for nearly 50 years. While based on a reaction (pion photo-production) well measured in the laboratory, the transposition of those measurements to UHECRs, where protons have Lorentz boost factors of 10¹⁰-10¹¹, was open to debate, for example through the introduction of Lorentz Invariance Violation (LIV) mechanisms. Alternative scenarios also came into play, calling for new particle species or new interactions, or describing the sources as the collapse of topological defects or the decay of super-massive relics.
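The scale of the GZK prediction can be recovered with a back-of-the-envelope threshold estimate for p + γ → p + π⁰ in a head-on collision with a typical CMB photon (the variable names and the choice of the mean Planck photon energy are my own simplifications):

```python
# Physical inputs (PDG / CMB values)
M_P   = 0.938272e9    # proton mass, eV/c^2
M_PI  = 0.134977e9    # neutral pion mass, eV/c^2
K_B_T = 2.349e-4      # k_B * 2.725 K, in eV
E_CMB = 2.70 * K_B_T  # mean photon energy of a Planck spectrum

# Threshold for p + gamma -> p + pi0, head-on collision:
#   E_th = m_pi (2 m_p + m_pi) / (4 E_gamma)
e_th = M_PI * (2.0 * M_P + M_PI) / (4.0 * E_CMB)
print(f"head-on threshold on a mean CMB photon: {e_th:.2e} eV")  # ~1.1e20 eV
```

The observed suppression sets in somewhat lower, near 5×10¹⁹ eV, because photons from the Wien tail of the Planck distribution carry several times the mean energy and the cross section peaks at the Δ(1232) resonance.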
The questions concerning the nature, the origin and the interactions of UHECRs became a central problem in particle physics and astrophysics. Large observatories such as the High Resolution Fly's Eye (HiRes [3]), the Pierre Auger Observatory (Auger [4]) and, more recently, the Telescope Array (TA [5]) have been constructed to answer those three fundamental questions. In the past 10 years, important progress has been made. For example, "Top-Down" acceleration models are now disfavored. Invoking the decay of super-massive big-bang relics or the collapse of topological defects, these models were meant to explain the invisible origin of UHECRs and solve the acceleration problem. However, they generally predict a much larger fraction of photons and neutrinos at the highest energies than has been measured (for example by Auger [6]). Their case was weakened further by the observation of a correlation between the arrival directions of 27 cosmic rays and the positions of nearby Active Galactic Nuclei (AGN) from the Véron-Cetty and Véron catalog [7]. Together with the observed flux suppression at the highest energies, these facts lend credence to a conventional model of UHECRs: hadrons accelerated in astrophysical sources. Another example is the publication by the HiRes and Auger observatories of a cosmic ray energy spectrum showing a definite cut-off at the highest energies [8,9].
In 2007, the distance (75-100 Mpc) and the angular scale (2-6 degrees) of the correlation observed in the Auger data led us to believe that the primaries were mainly protons and that the identification of the cosmic ray sources was at hand. However, the new data collected by Auger did not give us this opportunity. On the contrary, the situation today is more puzzling (hence stimulating) than it was a few years ago. On the one hand, the high-energy data still show (at 99% confidence level) an anisotropic distribution, but with a correlation strength with the AGN that went down from 69% to 38% [10]. On the other hand, the Auger data in the range between 40 and 50 EeV seem to indicate a rather heavy composition, as inferred from the measurement of the mean shower maximum in the atmosphere (Xmax) and from the distribution of this position from one shower to another (RMS[Xmax]) [12]. This result, if extrapolated to higher energies, and if the hadronic interaction models used to convert Xmax or RMS[Xmax] into composition can be trusted, is hardly compatible with the observation of a correlation with the local distribution of matter on angular scales smaller than 10 degrees. Given the admitted range of values for the coherent part of the Galactic magnetic field (a few micro-gauss on a few kiloparsec scales), deflections of a few degrees are expected for protons, and should be 26 times larger for iron nuclei of the same energy. If UHECRs were dominantly heavy nuclei, the observation of any anisotropy would be very unlikely.
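The Z-scaling of the deflection can be made concrete with the small-angle estimate θ ≈ L/r_L, where the Larmor radius is r_L ≈ 1.1 kpc (E/EeV)/(Z·B/μG). The field strength and path length below are illustrative choices within the ranges quoted above, and the helper name is mine:

```python
import math

def deflection_deg(e_eev, z, b_mug=2.0, l_kpc=2.0):
    """Small-angle magnetic deflection estimate: theta ~ L / r_L,
    with Larmor radius r_L ~ 1.1 kpc * (E/EeV) / (Z * B/uG).
    b_mug and l_kpc are illustrative defaults, not measured values."""
    r_l_kpc = 1.1 * e_eev / (z * b_mug)
    return math.degrees(l_kpc / r_l_kpc)

print(f"proton, 60 EeV: {deflection_deg(60, 1):.1f} deg")   # ~3.5 deg
print(f"iron,   60 EeV: {deflection_deg(60, 26):.0f} deg")  # 26x larger
```

For iron the small-angle approximation clearly breaks down; the point is simply that the deflection scales linearly with Z, so a heavy composition would wash out any small-scale anisotropy.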

UHECR 2012
The lack of more comprehensive measurements, in particular of the nature (or composition) of the cosmic ray flux above 50 EeV, strongly limits a coherent understanding of the available data and our capacity to address fundamental questions regarding particle interactions and transport at the highest energies, together with the nature of their sources. Even today, the existence of the GZK cut-off is not fully established, as the observed suppression of the CR spectrum can still be interpreted as the sources 'running out of steam' rather than as the GZK prediction.
From the experimental point of view, it is clear that only larger statistics and high-precision composition measurements at the highest energies (above 50 EeV) can shed light on the situation. Such measurements can hardly be made with the required statistics with the current setups, in particular because of the low duty cycle of the current fluorescence detectors.

CAN WE DO MORE?
After their interaction with the atmosphere, ultra-high-energy particles produce EAS containing billions of secondary particles spread over several kilometers. From the ground, these showers are observed using essentially two techniques:
- Telescopes detecting the nitrogen fluorescence light emitted after the passage of the electromagnetic cascade. This technique provides a calorimetric estimate of the energy, and composition information via the cascade evolution (mean position of the shower maximum, Xmax, and the fluctuation RMS(Xmax) of this position over a set of showers of fixed energy). However, it can only be used during clear moonless nights, resulting in a 10% duty cycle.
- Arrays of particle detectors sampling the particle content at ground level. This technique has a 100% duty cycle, but it is poorly suited to measuring composition because it samples the cascade at a single atmospheric depth. Moreover, it needs external information to calibrate its energy estimator.
As already mentioned, Auger and TA combine both techniques. However, in their current configurations, Auger and TA cannot perform event-by-event identification of the primary cosmic ray. This is due, on the one hand, to the intrinsic shower-to-shower fluctuation of the position of Xmax, which only allows for a statistical separation (unless one places hard cuts, at the price of large losses in statistics), and, on the other hand, to the low duty cycle of the fluorescence technique, which reduces the statistics of hybrid events at the highest energies to a handful. To compensate for these limitations one must do at least three things:
(1) Increase the aperture as much as possible while maintaining the data quality, in particular the angular and energy resolutions. For the angular resolution, our goal should be of the order of 1° or less, in order to stay at all energies below the expected dispersion of protons by the random (Galactic and extragalactic) magnetic fields. For the energy resolution, 20% or less would ensure that one can select a reasonably clean sample of events above the GZK cut-off, avoiding too much spill-over of low-energy data in this very steeply falling part of the spectrum. These are challenging numbers, but certainly not out of reach, as they correspond to the resolutions of the current generation of arrays.
(2) Increase the duty cycle for the observation of the electromagnetic evolution of the cascade; in particular, provide a calorimetric energy and an Xmax measurement for as many events as possible.
(3) Complement the measurement of the electromagnetic evolution with a second mass-sensitive parameter that allows for an event-by-event determination of the primary nature. An evident candidate is the muonic content of the cascade, and in particular the muon densities at ground level.
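The spill-over concern behind the energy-resolution requirement in item (1) can be illustrated with a toy forward-folding exercise: sample a steeply falling spectrum (an assumed E⁻⁴·³ differential slope, chosen for illustration), smear each energy with a 20% Gaussian resolution, and count how many events reconstructed above 50 EeV actually have a lower true energy. A sketch under these assumptions:

```python
import random

random.seed(1)

GAMMA = 4.3          # assumed differential spectral slope (illustrative)
SIGMA = 0.20         # assumed relative energy resolution
E_MIN, E_CUT = 10.0, 50.0  # EeV

def sample_energy():
    """Draw a true energy from dN/dE ~ E^-GAMMA above E_MIN (inverse CDF)."""
    u = random.random()
    return E_MIN * (1.0 - u) ** (-1.0 / (GAMMA - 1.0))

n_above, n_spill = 0, 0
for _ in range(500_000):
    e_true = sample_energy()
    e_rec = e_true * (1.0 + random.gauss(0.0, SIGMA))
    if e_rec > E_CUT:
        n_above += 1
        if e_true < E_CUT:
            n_spill += 1  # low-energy event promoted above the cut

print(f"fraction of reconstructed events above {E_CUT} EeV "
      f"that spilled up from below: {n_spill / n_above:.2f}")
```

Even at 20% resolution a sizable minority of the "trans-GZK" sample comes from below the cut, which is why a worse resolution would quickly corrupt any spectrum or anisotropy study in this region.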
The requirement to measure the muonic content of EAS imposes the inclusion of a ground array, as this is the only way to measure the muons. To comply with item (1) above, such an array will have to be sparse. Using the Auger array as a reference, one can reasonably deploy and manage about 2,000 stations. For a triangular grid with a spacing of 3 km (twice as large as Auger), this would cover a surface of 15,000 km² with an aperture of nearly 50,000 km²·sr for a zenith-angle cut at 60°. To maintain the data quality, one must compensate for the larger spacing by increasing the collecting surface of each station. A factor of 6 to 8 is appropriate given the steepness of the lateral distribution function. Such a factor also ensures that a high multiplicity of stations (5 or more) will record data even at 10 EeV. An artistic view of such a detector array is shown in Fig. 1. The second requirement is probably the most difficult to achieve. Dedicated R&D efforts are ongoing, either to improve on the standard fluorescence technique [13] or to develop new detectors based in particular on the radio emission of EAS [14,15]. One promising technique is based on the detection of molecular bremsstrahlung radiation and is currently being heavily studied (see [14] and [16]). However, it is not yet clear whether this emission is sufficiently isotropic and sufficiently intense to provide, with a 100% duty cycle, the same information as the fluorescence technique. An alternative is the development of a fluorescence detector that would work at least under a partial moon. The duty cycle of fluorescence detectors is commonly quoted as 10%. In reality, the effective duty cycle (after quality cuts) is closer to 6 or 7%, while the up time of the fluorescence detectors themselves is closer to 14%. Hence, in order to improve the overall efficiency, one needs not only to work on the up-time fraction but also to limit the impact of quality cuts, in particular the field-of-view cuts.
A gain of a factor of three, associated with a very large array, even if far from a 100% duty cycle, would make a noticeable difference, together with the assurance of very high measurement quality (see [13] for more details). Note that in all cases (radio or fluorescence) the "EM component detector" should be embedded within the particle array. Such an association greatly simplifies the local triggering system, relaxes the noise-reduction requirements, and suppresses the need for an imaging instrument (à la fluorescence telescope), since the geometry is provided by the particle array.
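The array-geometry figures quoted above follow from simple arithmetic: each station of a triangular grid of spacing d covers a hexagonal cell of area (√3/2)d², and the quoted aperture corresponds to multiplying the surface by the solid angle Ω = 2π(1 − cos θmax) of the zenith cut. A sketch:

```python
import math

N_STATIONS = 2000
SPACING_KM = 3.0              # triangular grid spacing (twice Auger's)
THETA_MAX = math.radians(60)  # zenith-angle cut

# Hexagonal cell per station of a triangular grid: (sqrt(3)/2) * d^2.
cell_area = math.sqrt(3.0) / 2.0 * SPACING_KM**2
area = N_STATIONS * cell_area

# Solid angle of the zenith cut, Omega = 2*pi*(1 - cos(theta_max)) = pi sr.
# (Including the cos(theta) projection factor, i.e. pi*sin^2(theta_max),
# would give 0.75x this value.)
aperture = area * 2.0 * math.pi * (1.0 - math.cos(THETA_MAX))

print(f"area     ~ {area:,.0f} km^2")        # ~15,600 km^2
print(f"aperture ~ {aperture:,.0f} km^2 sr") # ~49,000 km^2 sr
```

These few lines reproduce the 15,000 km² surface and near-50,000 km²·sr aperture quoted in the text.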

It is my personal opinion that the combined observation of the electromagnetic and the muonic component of EAS is the one goal that we must achieve, and that makes the R&D effort for items (1) and (2) worthwhile. It is by measuring these two components that an event-by-event primary identification can become possible. A vertically segmented Cherenkov detector such as the one illustrated in Fig. 1 can in principle measure the muon signal at ground level. The upper part of the tank acts as a scintillator, where muons and individual electrons (or converted gamma rays) produce roughly the same signal, while the lower part is mostly sensitive to muons, which deposit about 10 times more energy than an individual electron. Comparing the time traces of the deposited signal (with a 100 to 200 MHz sampling rate) in the upper and lower parts makes it possible to separate the two components and compute the muon densities. Of course, this is an indicative illustration only, and dedicated studies must be made to validate and optimize such a design.
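In the simplest static picture, the two-layer separation described above amounts to inverting a 2x2 response matrix in each time bin. The response coefficients below are illustrative assumptions (only their ratio, the ~10x muon weighting of the lower layer, reflects the text), not a detector calibration:

```python
# Illustrative response matrix: signal per unit muon component and per unit
# EM component in the upper and lower layers.  The lower layer sees ~10x
# more signal from a muon than from an individual electron, as in the text.
A_TOP_MU, A_TOP_EM = 1.0, 1.0   # upper layer: muons and electrons alike
A_BOT_MU, A_BOT_EM = 1.0, 0.1   # lower layer: strongly muon-weighted

def unfold(s_top, s_bot):
    """Invert the 2x2 response to recover the (muon, EM) components."""
    det = A_TOP_MU * A_BOT_EM - A_TOP_EM * A_BOT_MU
    mu = (A_BOT_EM * s_top - A_TOP_EM * s_bot) / det
    em = (A_TOP_MU * s_bot - A_BOT_MU * s_top) / det
    return mu, em

# Fake time trace: true (muon, EM) components per sampling bin.
true_trace = [(5.0, 20.0), (3.0, 40.0), (1.0, 10.0)]
for mu_t, em_t in true_trace:
    s_top = A_TOP_MU * mu_t + A_TOP_EM * em_t
    s_bot = A_BOT_MU * mu_t + A_BOT_EM * em_t
    mu, em = unfold(s_top, s_bot)
    print(f"recovered muon={mu:.1f}  EM={em:.1f}")  # matches the true values
```

In a real detector the response coefficients would fluctuate and the inversion would have to be regularized, which is precisely the kind of dedicated study the text calls for.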

WHAT WILL WE GET?
In Fig. 2 the discriminating power of the combined measurement of Xmax and the muon density at 1 km from the shower core (S) is illustrated using a toy Monte Carlo at fixed energy (10 EeV). The resolution on Xmax is assumed to be dominated by the shower-to-shower fluctuations, and the resolution on the ground densities by Poisson noise. With the simultaneous measurement of Xmax and S on an event-by-event basis, one can address with sufficient information the following fundamental questions.
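The spirit of such a toy Monte Carlo can be sketched in a few lines. All numbers below (mean Xmax, shower-to-shower RMS, mean muon counts) are illustrative assumptions of mine, not the values used in Fig. 2; the point is that combining a Gaussian-fluctuating Xmax with a Poisson-fluctuating muon count in a simple linear discriminant yields few-percent misidentification:

```python
import math
import random

random.seed(0)

# Illustrative 10 EeV parameters (assumptions, not measurements):
# (<Xmax> in g/cm^2, shower-to-shower RMS, mean muon count in the sampled area)
SPECIES = {"proton": (790.0, 60.0, 25.0), "iron": (700.0, 20.0, 40.0)}

def poisson(lam):
    """Knuth's Poisson sampler (adequate for these means)."""
    l, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= l:
            return k
        k += 1

def score(xmax, s):
    """Toy linear discriminant: positive -> proton-like, negative -> iron-like."""
    return (xmax - 745.0) / 60.0 - (s - 32.5) / 6.0

def misid_rate(species, n=20000):
    xm_mean, xm_rms, n_mu = SPECIES[species]
    want_positive = species == "proton"
    wrong = 0
    for _ in range(n):
        xmax = random.gauss(xm_mean, xm_rms)
        s = poisson(n_mu)
        if (score(xmax, s) > 0.0) != want_positive:
            wrong += 1
    return wrong / n

for sp in SPECIES:
    print(f"{sp}: misidentified {100 * misid_rate(sp):.1f}% of showers")
```

With Xmax alone the overlap of the proton and iron distributions would be far larger; the muon count is what turns a statistical separation into an event-by-event one.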
Cross sections: In a simplified approach, shower development can be decomposed into two parts: a universal development profile, whose normalization is given by the primary particle energy, convoluted with the exponential distribution of the position of the first interaction point in the atmosphere. The characteristic length (λ) of this distribution is given by the primary mean free path in the upper atmosphere and is related to the cross section (σ) via λ = 1/(n σ), where n is the number density of air nuclei. In this simplified view, fitting an exponential to the right-hand (deeper) tail of the Xmax distribution directly gives λ and hence σ. Of course, this is a very basic analysis, and one should also take into account the shower development characteristics to improve the estimates and limit biases. Nevertheless, it shows that an accurate measurement of Xmax on a set of showers selected according to their muon content will provide the necessary information to measure cross sections.
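This tail-fit procedure can be sketched with a toy simulation. All numbers are illustrative assumptions (an interaction length of 50 g/cm² and a Gaussian "universal" development term); the conversion λ[g/cm²] ≈ 2.4×10⁴/σ[mb] uses the mean mass number of air, ⟨A⟩ ≈ 14.5:

```python
import random

random.seed(2)

LAMBDA_TRUE = 50.0  # assumed p-air interaction length, g/cm^2
N_SHOWERS = 50000

# Toy model: Xmax = first-interaction depth (exponential, slope lambda)
# plus a "universal" development length with Gaussian fluctuations.
xmax = [random.expovariate(1.0 / LAMBDA_TRUE) + random.gauss(750.0, 25.0)
        for _ in range(N_SHOWERS)]

# Unbinned ML fit of the deep tail: for an exponential tail beyond a cut,
# the maximum-likelihood slope is simply the mean excess over the cut.
CUT = 880.0
tail = [x - CUT for x in xmax if x > CUT]
lam_fit = sum(tail) / len(tail)

# lambda [g/cm^2] ~ 2.4e4 / sigma[mb] for air with <A> ~ 14.5
sigma_mb = 2.4e4 / lam_fit
print(f"fitted lambda ~ {lam_fit:.0f} g/cm^2  ->  sigma_p-air ~ {sigma_mb:.0f} mb")
```

The fit recovers the input slope because, far enough beyond the cut, the Gaussian smearing no longer distorts the exponential tail; in a real analysis the cut placement and the shower-development model are the main sources of bias, as the text notes.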
Mass composition and hadronic interaction models: All interaction models share the same basic principles, and predict a negative correlation between the shower maximum and the muon content. Hence, according to current models, proton showers are deep and muon-poor, while iron showers are shallow and muon-rich. An independent and simultaneous measurement of Xmax and S on a shower-by-shower basis will test this principle, whose potential violation would indicate the presence of new phenomena. More generally, the muon number and the depth of maximum are sensitive to the details of the models and to physical parameters such as the cross section, multiplicity and inelasticity of the first hadronic interactions. Measuring both Xmax and S will help remove ambiguities between the interaction models and the composition dependence. Presently, the mass composition of the primary particles can be obtained only by comparing air-shower data with simulations of the EAS development in the atmosphere. The present implementations rely on phenomenological models for hadronic interactions and cannot describe the observed air shower properties with the required precision. The predicted number of muons, in simulations using the common hadronic interaction models, is lower than measured. The high-energy interactions are relevant for the shower development, while particle production at low energies governs the lateral distribution of particles at ground level. In order to obtain good constraints on the hadronic interaction models, a concomitant measurement of the electromagnetic and muonic components of air showers is necessary.
Neutrinos and Photons: Surface arrays are sensitive to UHE neutrinos through their interactions, either in the atmosphere or in the Earth. The signature of a neutrino interaction is the detection at ground level of a nearly horizontal and dominantly electromagnetic shower. This is the characteristic signature of a "young" development and hence of a deeply penetrating primary particle. On the contrary, nearly horizontal showers induced by protons or nuclei in the upper atmosphere have to cross a large amount of material before reaching the surface of the Earth. In this case the ground particles are very dominantly high-energy muons. To achieve a good rejection of this "hadronic" background, severe conditions have to be applied to the shape of the signals observed at ground level. As a consequence, the sensitivity to neutrino interactions is restricted to a narrow layer of the atmosphere. The detection of Xmax would allow a better determination of the depth of the interaction, and would thus extend the range over which neutrino-induced showers can be unambiguously distinguished. In addition, knowledge of the longitudinal profile and of the electromagnetic content of the shower will allow a reconstruction of the total energy deposited, which is related to the energy of the primary neutrino. If several neutrino interactions are observed, some information on the energy spectrum could become available and be compared to the predictions of different production models. In a similar way, the detection of UHE photons, which carry information about the sources and the production mechanism of UHECRs, could be improved.
Test of fundamental laws: The precise measurement of the UHECR spectrum at very high energy can constrain the existence of a very small violation of Lorentz invariance. Among the different signatures, the presence of both a GZK effect and a recovery of the spectrum at higher energy is the clearest and most sensitive evidence. Such a signal must be distinguished from the presence of a UHECR component produced by "top-down" models. The measurement of the photon fraction in cosmic rays at high energy can strongly constrain these top-down models. For Lorentz invariance violation, the lightest component of the UHECRs provides the most stringent bounds for all theoretical models, since its Lorentz boost is larger at a given energy. Here again, the event-by-event characterization of the primary nature and the improved sensitivity to gamma-ray primaries will improve the constraints. Note, in addition, that the measurement of an unconventionally shallow shower with a small muon content could be a direct signature of this invariance violation.
Astrophysical source identification: With event-by-event knowledge of Xmax and S, one will be able to study correlations between the arrival directions of UHECRs and catalogs of nearby astrophysical sources, using subsamples of showers selected according to their development characteristics. If the UHECR composition is mixed, we would be in a position to separate the mixture and estimate the correlation parameters in both cases. Moreover, if astrophysical sources accelerate both iron nuclei and protons in the same way, with the same E/Z acceleration limit, the energy of a primary cosmic ray would be linked to its mass. In this case we would see a correlation between the shower development, which is related to the primary mass, and its energy. This would be of great help in understanding source identification and acceleration principles. In addition, the possible isolation of a low-Z (charge) component would increase the power and constraints of correlation searches. Furthermore, the correlation parameters, and in particular the characteristic angular scale of the correlation, would provide very valuable and robust constraints on Galactic and intergalactic magnetic fields.

CONCLUSIONS
I argue that the best option for a future UHECR observatory is to develop a large (several 10⁴ km²) multicomponent sparse ground array. The detector elements of the array should combine an embedded "EM detector", providing fluorescence-like measurements, with a particle detector having adequate muon-identification capabilities. R&D efforts are actively under way to open this potentially successful road. It is clear, however, that the successful construction of a next-generation UHECR observatory will only come through a worldwide effort in interest and resources. I am deeply indebted to my fellow colleagues in Auger, without whom none of the ideas presented above would ever have emerged.