Dark matter component decaying after recombination: lensing constraints with Planck data
arXiv:1602.08121v2 [astro-ph.CO] 15 Sep 2016

It has been suggested recently~\cite{Berezhiani:2015yta} that the emerging tension between cosmological parameter values derived from high-redshift (CMB anisotropy) and low-redshift (cluster counts, Hubble constant) measurements can be reconciled in a model containing a subdominant fraction of dark matter that decays after recombination. We check this model against the Planck CMB data. We find that lensing of the CMB anisotropies by the large-scale structure places strong extra constraints on this model, limiting the decaying fraction to $F<8\%$ at the 2\,$\sigma$ confidence level. However, for the combined data set of the CMB and the conflicting low-$z$ measurements, we find that the model with $F\approx2\!-\!5$\% provides a better fit (by 1.5-3\,$\sigma$, depending on the lensing priors) than the concordance $\Lambda$CDM cosmological model.


I. INTRODUCTION
The matter content of the Universe remains a mystery. Astronomical observations and cosmological data resolve at least three components in the matter sector: visible matter (baryons), dark matter (unknown electrically neutral particles) and neutrinos [2]. However, the dark matter may easily be multicomponent itself. Indeed, most models of dark matter and mechanisms of baryogenesis involve entirely different physics, so the order-of-magnitude equality between the contributions of visible and dark matter to the present energy density of the Universe is interpreted as a chance coincidence. Why not, then, have several different contributors to the dark matter sector?
To support this chain of reasoning, one can treat the neutrinos as a second component, alongside baryons, in the "visible" or ordinary matter sector. Both components are well recognizable in cosmological data analysis. A similar situation may occur in the dark sector: there may be several components as well. Moreover, they can potentially be distinguished. Indeed, the cosmological data already collected may allow us to observe hints of a dark matter component whose behavior differs from that of the canonical Cold Dark Matter (CDM). Emerging discrepancies in fitting the ΛCDM cosmological model to the growing stack of cosmological data might then signal exactly this case.
The recent paper [1], which considers a subdominant dark matter component decaying in the postrecombination epoch, is an excellent example illustrating the above idea. In this model the total dark matter amount is reduced between recombination and the present epoch, which affects the cosmological observables. In fact, the model was suggested precisely to explain some emerging tensions between the cosmological measurements "at low redshifts" and "at high redshifts". Namely, it was argued in Ref. [1] that this model can reconcile the low value of the Hubble constant, H_0 = 100h km/s/Mpc with h = 0.6727 ± 0.0066 [3], extracted from the analysis of the Cosmic Microwave Background (CMB) anisotropy (Planck 2015, TT,TE,EE+lowP data), with the high values of the same constant extracted from the cosmic distance ladder based on observations of astronomical Standard Candles, h = 0.738 ± 0.024 [4] and h = 0.743 ± 0.021 [5]. Simultaneously, the model may explain a tension between the cosmological constraints on σ_8 and Ω_m from the CMB and from clusters as cosmological probes: the cluster count data prefer lower values of these observables.
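The size of the H_0 tension quoted above can be illustrated with a naive number-of-sigma estimate. This is only an illustrative sketch assuming independent Gaussian errors on the two measurements; it is not the statistical procedure used in the analysis below.

```python
import math

def tension_sigma(h1, err1, h2, err2):
    """Naive distance between two measurements in units of the
    combined standard deviation, assuming independent Gaussian errors."""
    return abs(h1 - h2) / math.sqrt(err1**2 + err2**2)

# Planck 2015 (TT,TE,EE+lowP) vs the Riess et al. value quoted in the text:
# the discrepancy comes out at roughly the 2.6-sigma level
t = tension_sigma(0.6727, 0.0066, 0.738, 0.024)
```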
Measurements of these parameters can suffer from (unknown) systematics; however, such systematics should be unrelated across the different observables and experiments. Hence the observation made in Ref. [1] may indeed be a hint of multicomponent dark matter and deserves further study.
The idea of a Decaying Dark Matter component has a long history, apparently starting in the 1980s; see e.g. [6][7][8]. There are several varieties: CDM plus a separate component decaying into invisible radiation with lifetime shorter than the age of the Universe, as in Ref. [1]; a single unstable CDM with lifetime exceeding the age of the Universe, as in Refs. [9,10]; and a two-component DM in which the heavier particles decay to lighter ones and invisible radiation, as in Refs. [11][12][13]. Such setups have been argued to be useful not only in explaining the discrepancies between cosmological parameter estimates from the CMB and from matter clustering [1,9,12,14] or Standard Candles [10,15], but also, say, in relaxing the tension of CDM predictions with observations of structures at (and of) small scales [11,12,14] and in understanding the origin of the high-energy neutrino IceCube events [16]. The analysis performed in the present paper for a particular variant of the model from Ref. [1] reveals an interesting impact of the decaying component on the CMB, which is of general nature and is therefore expected to be relevant for other models as well.
No actual fitting to the Planck data was performed in Ref. [1]. Instead, to ensure that the model fits the CMB anisotropy, the Planck-derived values of all primary cosmological parameters relevant at recombination were accepted and fixed. This guarantees that the anisotropies produced at last scattering are identical in the two models. Furthermore, it was required that the angular diameter distance to last scattering be the same for all values of the new parameters in the Decaying Dark Matter (DDM) model; namely, the sound horizon angle 100θ_s was fixed to the Planck value as well. This guarantees that the observed CMB anisotropy spectra in the DDM model are almost identical to those in ΛCDM. Differences may appear only through gravitational distortions of the spectra between last scattering and the present. Though such distortions are minor, they can be important with modern cosmological data.
There are two sources of these distortions. The first is the integrated Sachs-Wolfe effect. It causes somewhat higher anisotropy amplitudes C_l at low multipoles l in DDM as compared to the Planck-inspired ΛCDM. This is related to the larger value of the cosmological constant Λ in the DDM model, assuming a flat Universe. By itself this distortion is not very significant, and in Ref. [1] it was effectively kept below the cosmic variance by additional fitting to the supernova data.
However, the second effect, the CMB distortion due to lensing by the large-scale structure, was not considered in Ref. [1]. The difference in lensing power between DDM and ΛCDM may be important, since part of the structure decays away in the former model, and may be observable with high-quality data such as the Planck data.
To fill this gap, in the present paper we fit the DDM model of Ref. [1] to the complete Planck likelihood in order to assess the importance of the corresponding lensing constraints. Our goal is to find out whether DDM may indeed reconcile the cosmological measurements "at low redshifts" and "at high redshifts" and whether it provides a better description of the Universe than the ΛCDM model at the level of the current data. For this investigation we utilize the Planck 2015 CMB data [3,17], together with the same constraints on the Hubble constant [4] and on σ_8 and Ω_m derived from the Planck cluster counts [18] that were used in Ref. [1].

A. Decaying Dark Matter model
The two-component DDM model has two extra parameters: the fraction F of the decaying component in the total dark matter abundance and its inverse lifetime, or width, Γ. To ensure a transparent transition to the case of stable matter, the fraction F is defined in terms of the initial energy densities ω_i ≡ Ω_i h² of the stable and decaying components as F ≡ ω_ddm/(ω_sdm + ω_ddm). "Initial" here means the density that would have been measured if Γ = 0. Following Ref. [1], we also assume that the decay occurs into invisible massless particles (and does not produce too many photons) and normalize the width Γ of the decaying component to km/s/Mpc, i.e. it is measured in the same units as H_0.
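The definitions above can be made concrete in a few lines of code. This is only an illustrative sketch: the simple exponential decay law in proper time and the sample density values are assumptions chosen for illustration, not numbers from the fit.

```python
import math

def ddm_fraction(omega_sdm, omega_ddm):
    """Initial decaying fraction F = omega_ddm / (omega_sdm + omega_ddm),
    with omega_i = Omega_i * h^2 defined at Gamma = 0."""
    return omega_ddm / (omega_sdm + omega_ddm)

def omega_ddm_at(t_gyr, omega_ddm_initial, gamma_kms_mpc):
    """Decaying-component density after a proper time t (in Gyr), assuming
    simple exponential decay exp(-Gamma * t). The width Gamma is given in
    km/s/Mpc (same units as H_0); 1 km/s/Mpc = 1/(977.8 Gyr)."""
    gamma_per_gyr = gamma_kms_mpc / 977.8
    return omega_ddm_initial * math.exp(-gamma_per_gyr * t_gyr)

# Illustrative values: omega_sdm = 0.113, omega_ddm = 0.006 gives F ~ 0.05;
# with Gamma = 2000 km/s/Mpc (lifetime ~ 0.5 Gyr) the decaying component
# is essentially gone well before the present epoch.
F = ddm_fraction(0.113, 0.006)
remaining = omega_ddm_at(13.8, 0.006, 2000.0)
```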

B. Cosmological data sets
To constrain this model we invariably employ the TT,TE,EE Planck likelihood for the power spectra at multipoles l > 30, as described in [3]. By itself it already contains the effects of gravitational lensing on the power spectra, which are most important for us here. Lensing reveals itself as a smoothing of the acoustic structure in the power spectra: the peaks become lower while the troughs become higher.
We refer to the Planck measurements at low multipoles, l < 30, as "lowP", in the notation of [3]. This likelihood also contains polarization data, which are crucial for us in what follows.
Since lensing is central to our investigation, we also employ the direct, independent Planck measurement of the lensing power spectrum C_l^φφ. It is computed from the Planck maps using the non-Gaussian (connected) parts of all 4-point correlation functions (e.g. TTTT, TTEE, etc.) [17]. To avoid confusion with the lensing extracted from TT,TE,EE, we call the corresponding likelihood "4lens"; this also highlights its origin in 4-point correlation functions. The Planck collaboration recommends using this lensing likelihood since it "constrains the lensing amplitude more directly and more tightly" [17].
For the low-redshift data sets, which currently conflict with the base Planck ΛCDM cosmology, we use the same data sets as in Ref. [1]. Namely, we use the direct astrophysical measurement of the Hubble constant by Riess et al. [4], indicated as H_0 in the descriptions of the data combinations. As for the data on galaxy cluster counts, we adopt the Planck results [18] and refer to this data set as "CL". In the discussion of our main results we always employ the following block of data: TT,TE,EE + H_0 + CL. However, the lensing amplitude contained in this block conflicts with the one from 4lens; see [3,17]. Therefore, we consider this block in three different combinations, with lowP, 4lens and lowP+4lens, as summarized in Table I. The tags for these combinations reflect the main relevant differences between them.

C. Numerical procedure
All relevant cosmological calculations have been carried out numerically with the CLASS Boltzmann code [19,20]. The parameter space is explored using the Markov Chain Monte Carlo technique with the Monte Python package [21]. The two-component DDM model is predefined in both numerical tools. Eight primary cosmological parameters have been varied. Two of them are specific to the DDM model: the fraction F and the width Γ. The remaining six parameters are standard: the angular size of the sound horizon r_s at last scattering, θ_* ≡ 100 × r_s(z_*)/D_A(z_*); the baryon density ω_b; the initial CDM density ω_cdm = ω_sdm + ω_ddm; the optical depth τ; and the amplitude A_s and tilt n_s of the power spectrum of primordial scalar perturbations. In the numerical codes we evolve the perturbations in the linear regime only. We have checked (by switching on the corresponding option in the CLASS code) that our main results change very mildly when nonlinear corrections to the lensing potentials are included; these corrections happen to be most important in testing the model with CMB data, as we show below. Nonlinear contributions to the matter power spectrum P(k), calculated with CLASS, are below 1% at the scales relevant for estimates of the parameter σ_8. We are planning a detailed study of these and other delicate effects associated with the nonlinear evolution in a forthcoming paper. In this study we take the Universe to be spatially flat, neglect possible tensor perturbations and set the sum of the active neutrino masses to Σm_ν = 0.06 eV, assuming a nondegenerate normal hierarchy pattern.
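For concreteness, a decaying cold dark matter component of this type can be specified in a CLASS parameter file roughly as follows. This is a sketch only: the parameter names follow the conventions of the public CLASS code for its built-in decaying-CDM species (see its explanatory.ini), they may differ between code versions, and the numerical values are illustrative rather than best-fit ones.

```
# stable CDM plus a decaying CDM component (dcdm) decaying into dark radiation
omega_cdm = 0.113           # stable component, omega_sdm
omega_ini_dcdm = 0.006      # initial decaying component, omega_ddm (F ~ 0.05)
Gamma_dcdm = 2000           # decay width in km/s/Mpc, same units as H_0
```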

A. Planck data only
First, we would like to visualize the important role of the lensing effects in the TT power spectra. To this end we plot the difference between the predictions of the DDM (F = 0.1, Γ = 2000 km/s/Mpc) and ΛCDM models, with the other parameters fixed at the best fit to the TT,TE,EE + lowP data set. This difference is shown by the solid curve in Fig. 1. The best-fit ΛCDM model spectrum is also subtracted from the data; the residuals are shown by dots with error bars. We see that the difference between DDM and ΛCDM is appreciable at the presently achieved level of precision, and therefore lensing should be included when constraining DDM models. We also see that, while lensing is generically accounted for in both models (i.e. the residuals are small), the agreement with the data is not perfect even for the base ΛCDM model: the data points after subtraction oscillate coherently around zero. Somewhat more lensing power is required to fit the data than is predicted in the ΛCDM model; the disagreement is at the 2σ level [17]. The amplitude of the difference between DDM and ΛCDM (solid curve) is comparable to the deviations of the residuals, but it is out of phase: the lensing power in DDM is even weaker, since the large-scale structure decays away at late times. As a result, fitting to the TT,TE,EE Planck likelihood alone restricts DDM to the range F < 0.07 at the 2σ level. This is the key observation missed in Ref. [1].
By itself, the lowP likelihood does not give strong constraints. However, the situation is a little more subtle, and its role becomes important in combination with TT,TE,EE. It works as follows. The lack of lensing power in the theoretical predictions relative to the TT data pushes the fit to a higher amplitude of the primordial spectrum, A_s. To compensate for the resulting growth of the amplitude of C_l^TT, a larger optical depth τ is required in turn, but the latter is limited by the polarization data in the lowP likelihood. As a result, DDM is even more limited by the TT,TE,EE + lowP likelihood: F < 0.04 at the 2σ level.
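The degeneracy invoked here is the standard one: at multipoles well above the reionization bump the observed amplitude scales approximately as A_s e^{-2τ}, so a shift in A_s must be compensated by a shift in τ. A minimal numerical illustration (the 4% shift in A_s is a hypothetical figure chosen for illustration, not a fitted value):

```python
import math

def compensating_delta_tau(a_s_ratio):
    """At l >> 30 the observed CMB amplitude scales roughly as
    A_s * exp(-2*tau), so raising A_s by a factor r must be offset
    by delta_tau = ln(r) / 2 to keep the spectra unchanged."""
    return math.log(a_s_ratio) / 2.0

# A hypothetical 4% increase in A_s requires tau to grow by about 0.02,
# a shift that low-l polarization data can already disfavor.
delta_tau = compensating_delta_tau(1.04)
```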
The situation with the lensing power spectrum C_l^φφ is the opposite. The theoretical predictions of the ΛCDM model also disagree with the Planck data here, but now less power is needed to explain the data [3,17]; i.e., the direct lensing power spectrum slightly favors DDM over ΛCDM, see Fig. 2. As a result, the upper bound on F gets slightly relaxed, F < 0.08 at the 2σ level.

B. Planck data and conflicting low-z measurements
Now we combine the Planck data with the conflicting low-redshift measurements of H_0, Ω_m and σ_8. Since DDM is restricted by lensing, and lensing acts in opposite directions in the TT,TE,EE and C_l^φφ likelihoods, we scrutinize the model using the three data combinations listed in Table I.
The corresponding constraints on the DDM parameters and on the conflicting cosmological parameters H_0, Ω_m and σ_8 are presented in Figs. 3-5 for the data combinations of Table I. The region of small Γ corresponds to the situation where a part of the DDM particles survives in the late Universe; in particular, at Γ ≲ H_0 the DDM is indistinguishable from stable DM. This region is not resolved in our figures and deserves a special study beyond the scope of this paper. In Fig. 4, the regions allowed at 2σ with the highest values of F and the smallest values of H_0 in Fig. 3 map to longer lifetimes, Γ ≲ 1000 km/(s Mpc). In Fig. 5, the regions with the highest allowed values of σ_8 and the lowest values of Ω_m in Fig. 3 map to 1000 km/(s Mpc) ≲ Γ ≲ 5000 km/(s Mpc) and the highest allowed values of F.
We also observe that a proper fit to the CMB data yields a significantly smaller fraction of DDM than the results of Ref. [1]; the favored model parameter values in Fig. 3 lie well outside the 2σ region presented in Ref. [1]. However, our fits still indicate nonzero F, and the absence of the decaying component is disfavored.
To understand quantitatively which model (ΛCDM or DDM) is preferable according to the cosmological data, we compare the logarithmic likelihoods log L calculated for the two models at their respective best-fit points for the same data sets. Each difference 2·Δlog L is distributed as χ² with an effective number of degrees of freedom equal to the difference in the number of fitting parameters between the two models, which is 2, corresponding to the two extra parameters F and Γ in DDM.
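For 2 degrees of freedom this test is particularly simple, since the survival function of the χ² distribution with k = 2 reduces to exp(-x/2). A minimal sketch of the conversion from a likelihood improvement to a p-value (the input value 2·Δlog L = 6 is purely illustrative, not a number from Table II):

```python
import math

def p_value_2dof(two_delta_logL):
    """p-value for an improvement 2*Delta(log L) with 2 extra parameters:
    the chi^2 survival function with k = 2 dof is exactly exp(-x/2)."""
    return math.exp(-two_delta_logL / 2.0)

# An illustrative improvement of 2*Delta(log L) = 6 gives p ~ 0.05,
# i.e. roughly a 2-sigma preference for the extended model.
p = p_value_2dof(6.0)
```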
The resulting improvements of DDM over ΛCDM are displayed in Table II.
There is an improvement of DDM over ΛCDM, and hence DDM indeed describes the cosmological data better, as suggested in Ref. [1]. However, the improvement is not very significant, because of the key disagreement between theory and the CMB measurements displayed in Fig. 1. This disagreement enters all our data sets and is worse for DDM than for ΛCDM.
In principle, the corresponding constraints, when strengthened, could rule out DDM, but at present they may be judged rather harmless. Indeed, one first has to understand why ΛCDM is also in tension with the lensing here, and then this disagreement has to be resolved. Before that, it is unreasonable to draw strong conclusions in either direction.

IV. CONCLUSIONS
We confirm that DDM provides a better description of the CMB and the low-z measurements of cosmological parameters. However, the allowed fraction of the DDM is much smaller than previously claimed [1].
Namely, we have found that the lensing of the CMB anisotropies by the large-scale structure, contained in the Planck data, strongly constrains the two-component DDM model. Essentially, lensing is measured twice: first, as a smoothing of the acoustic peaks in the TT power spectra and, second, directly in the lensing power spectrum C_l^φφ. Both measurements are slightly conflicting with the predictions of the ΛCDM model, and, in a sense, the conflict between them can also be considered a conflict between low- and high-z measurements. ΛCDM predicts less lensing power than is required by the TT+lowP power spectrum results, and more lensing power than the C_l^φφ results. Therefore, the former likelihood strongly restricts DDM, while the latter actually favors it. This conclusion is rather generic, and the CMB measurements by themselves should constrain other models with a DDM component (see the Introduction) as well.
Then, with this conflict in the background, we have analyzed whether DDM is able to reconcile the Planck-inspired H_0, Ω_m and σ_8 values with their conflicting low-redshift measurements. An improvement of DDM over ΛCDM is observed, but it is not very significant with the current data. We feel that for a final verdict it is highly important to understand the source of the "lensing conflict" in the Planck data.