Uncertainties in climate change projections, from the global to the regional scale

A discussion is presented of the different sources of uncertainty in the production of climate change projections at the global to regional scale. In particular, the following sources of uncertainty are identified and discussed: the greenhouse gas (GHG) emission/concentration scenario, model configuration (or inter-model) differences and bias, internal unforced variability due to the non-linearities of the climate system, and downscaling. Specific examples are presented to compare the relative importance of these sources, which depends on factors such as the time horizon of the projection, the variable under consideration and the scale of interest. In general, scenario and model configuration uncertainty dominate for long-term climate change, especially at the global scale. The contribution of internal variability increases for near-term projections and for higher-order climate statistics. Downscaling uncertainty is significant for variables primarily affected by local processes, such as summer convective precipitation. It is argued that, because of these sources of uncertainty, the climate prediction problem should be addressed in a probabilistic, rather than deterministic, way. The discussion is placed within the context of two categories of uncertainty source: the Knowledge Uncertainty due to our imperfect knowledge and representation of the problem, and the Intrinsic Uncertainty inherent to the problem. While the former should be reduced with improved science, the latter should be characterized to the largest possible extent to account for all possible outcomes.


Introduction
Projections of climate change for the 21st century at the global to regional scale in response to increased emissions of greenhouse gases (GHG) are necessary in order to assess the impacts of GHG-induced global warming and to develop suitable adaptation and mitigation response strategies. Climate change can occur not only because of anthropogenic forcings, e.g. increased atmospheric GHG and aerosol concentrations, but also because of natural forcings, e.g. changes in solar activity, and/or natural unforced variability of the climate system. All these anthropogenic and natural factors, along with the uncertainties that characterize them, need to be accounted for in producing future climate projections. In addition, climate projections are produced via a range of modeling tools, from coupled Atmosphere-Ocean Global Climate Models (AOGCMs, [1]) to statistical and dynamical downscaling techniques (e.g. Regional Climate Models or RCMs, [2]). These tools are also affected by substantial uncertainties related to our imperfect knowledge and description of relevant processes in the climate system.
It is thus clear that multiple sources of uncertainty are present in the production of climate change projections for the 21st century. They compound in the cascade of steps involved in generating the projections, and their full characterization is a key element of the climate change problem.


Sources of uncertainty in climate change projections
As mentioned, climate can change due to a number of anthropogenic and natural factors. Among the main anthropogenic factors are atmospheric GHGs, tropospheric aerosols due to pollution emissions (e.g. sulphates, nitrates, organic aerosols), and changes in land use (e.g. deforestation, agricultural practices). GHGs affect climate by absorbing the infrared radiation emitted from the Earth's surface (the greenhouse effect), aerosols can absorb and scatter solar and infrared radiation and can affect the optical and microphysical properties of clouds (the direct and indirect aerosol effects, respectively), while the land surface affects the Earth-atmosphere exchanges of momentum, heat and moisture. The radiative forcing due to GHGs acts at the global scale, since GHGs are long-lived and thus their concentration is relatively uniform across the atmosphere. Conversely, the forcing due to short-lived tropospheric aerosols and land-use change is most effective at the regional to local scale.
Among the main natural forcings considered in climate simulations are major volcanic eruptions, which can inject small aerosol particles into the stratosphere where they can remain for months to years, and changes in the radiation emitted by the sun and received by the Earth. In addition, being a highly non-linear system with interactions among components characterized by different response time scales, the Earth's climate is affected by pronounced internal variability at scales from daily to multidecadal. Typical examples of such variability are the El Niño Southern Oscillation (ENSO), the North Atlantic Oscillation (NAO) and the Atlantic Multidecadal Oscillation (AMO).
Figure 1 shows the sequence of steps that are usually undertaken to produce a climate change projection at global and regional scales. The first step consists of the generation of scenarios of GHG and aerosol emissions based on hypotheses of future socio-economic and technological development. These emission scenarios are then turned into GHG and aerosol concentration scenarios with the use of biogeochemical models. The GHG/aerosol concentration scenarios are the fundamental input to AOGCMs to produce global climate projections [1]. These global projections can then be downscaled to the regional/local scale with the use of dynamical downscaling (e.g. regional climate models or RCMs [3]) or statistical downscaling [4] methods. The information obtained from this last step can finally be used for impact and adaptation studies, or can be employed to assess the need for mitigation (i.e. GHG emission reduction) options. Each step of the process described in Fig. 1 is affected by a certain level of uncertainty, which compounds with that of the next step in a cascade that results in an overall level of uncertainty in the projection.
We can now analyze step-by-step the different sources of uncertainty. The first step, emission scenarios, is perhaps the most uncertain of all. It is indeed essentially impossible to predict the socio-economic and technological development over the next century that will lead to different emission pathways. This source of uncertainty thus falls under the category of "intrinsic" and essentially will never be eliminated. Rather, it should be characterized to the maximum extent possible by considering different possible future socio-economic pathways. This has been done via the development of emission scenarios, i.e. future emissions based on hypotheses of socio-economic and technological development. The Intergovernmental Panel on Climate Change (IPCC) has developed a series of such scenarios [5], ranging from low to high emissions (Fig. 2(a), [5]), which are intended to cover the range of possible future developments. It is important to note that the IPCC did not attach any probability to these scenarios, so that they are all considered to be equally plausible.
These different GHG emission scenarios are then fed into global biogeochemical cycle models to produce corresponding GHG concentration scenarios (Step 2, Fig. 2(b) [5]). The biogeochemical models are obviously affected by uncertainties due to poor knowledge of the biogeochemical cycles and approximate representation of relevant processes. This uncertainty compounds with that due to the range of scenarios to produce a range of GHG concentration pathways for the 21st century. The knowledge uncertainty in the biogeochemical models is of the "bad" type and should/could be reduced by improving the science behind these models.
Once the GHG concentration scenario pathways have been produced, they are fed into AOGCMs to produce 21st century climate projections (Step 3). The term "projection" derives from the fact that what climate models essentially do is not a climate prediction, but a simulation of the response of the climate system to a certain scenario of GHG concentration increase (tied to a scenario of socio-economic development). In other words, a climate projection is a sensitivity experiment of the response of the climate system to a given increase in GHG levels. It is worth briefly describing how this is actually done (Fig. 3). An AOGCM is composed of coupled components characterized by different equilibrium, or dynamical response, times, such as the atmosphere, oceans, cryosphere, chemosphere and biosphere. The atmosphere is a fast-response component and equilibrates after a few months to years. Conversely, the ocean is a slow component due to the high heat capacity of water, and may take centuries to millennia to actually reach full equilibrium. The water content of soil also has a similarly long response time. Therefore, a long integration time is required for an AOGCM to reach full equilibrium across its components. This equilibrium is needed in order to avoid possible drifts in the modeled climate system. Therefore, an AOGCM is first run with pre-industrial GHG conditions for a multi-century period necessary for all of its components to reach equilibrium. This run is usually referred to as the "control" simulation (see Fig. 3). Once equilibrium is achieved, a certain time is randomly picked as representing, for example, the beginning of the industrial revolution (e.g. nominally the year 1850). Starting from this time, the GHG concentration is increased based on historical reconstructions of GHG pathways.
This historical period generally reaches a year in the early 2000s (for the next generation of simulations this is set at 2005). The AOGCM is thus run for the historical period (1850-2005) using all known anthropogenic forcings (GHGs, atmospheric aerosols and in some cases land-use change) and natural forcings (known volcanic eruptions and solar variations). Once the end of the historical period is reached, the projection period starts (e.g. 2005-2100 and possibly beyond) with inclusion of the GHG and aerosol concentration scenarios. In some cases, but to date not systematically, changes in land use are also included based on the specific scenarios.
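The control/historical/scenario protocol described above can be illustrated with a toy zero-dimensional energy-balance model. All constants and the forcing trajectory below are invented for illustration; they are not those of any real AOGCM or IPCC scenario.

```python
# Toy illustration of the control -> historical -> scenario protocol, using a
# zero-dimensional energy-balance model with a single heat reservoir:
#     C dT/dt = F(t) - lambda * T
C = 8.0      # effective heat capacity (W yr m^-2 K^-1), hypothetical
lam = 1.2    # climate feedback parameter (W m^-2 K^-1), hypothetical

def forcing(year):
    if year < 1850:                        # control: constant pre-industrial forcing
        return 0.0
    elif year <= 2005:                     # historical: slow illustrative ramp
        return 1.8 * (year - 1850) / 155
    else:                                  # scenario: steeper illustrative ramp
        return 1.8 + 4.0 * (year - 2005) / 95

T, dt = 1.0, 1.0   # start deliberately off-equilibrium; spin-up removes the drift
history = {}
for year in range(1350, 2101):             # 500-yr spin-up plays the role of the control run
    T += dt * (forcing(year) - lam * T) / C   # explicit Euler step
    history[year] = T

print(f"T(1850) ~ {history[1850]:.2f} K (equilibrated control state)")
print(f"T(2005) ~ {history[2005]:.2f} K (end of historical period)")
print(f"T(2100) ~ {history[2100]:.2f} K (end of scenario period)")
```

The long spin-up is what guarantees that the warming after 1850 reflects the imposed forcing rather than a residual model drift.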
It is important to stress that the initial time of the historical period is not the real 1850, in the sense that the atmospheric and oceanic conditions are not those found in that particular year (which are not known sufficiently well). Only the GHG/aerosol concentrations correspond to those estimated for that year (i.e. pre-industrial levels) and are subsequently changed following historical reconstructions. The choice of the initial starting point (and thus initial state) of the historical run might affect the simulation due to the long memory of the oceans and land surface, and the internal variability and non-linear interactions of the climate system. This adds an element of uncertainty which is typically explored by performing ensembles of transient simulations (1850-2100) starting at different times in the control period. The range of outcomes reached within these ensembles provides a measure of the uncertainty related to the internal variability of the climate system as simulated by a given model.
Each climate model has different representations of dynamical and physical processes. This means that it will generally respond differently to the same GHG forcing. In other words, two different GCMs can provide different global warming values for the same increase in GHG concentrations. This response is usually measured by the concept of "climate sensitivity", which is defined as the global temperature response of a model to a doubling of GHG concentrations. A wide range of climate sensitivities is found in today's global models, about 1.5 to 4.5 K, and this source of uncertainty (hereafter referred to as "model configuration" uncertainty) represents one of the largest contributions to the overall uncertainty in climate projections.
It is an uncertainty due to our poor knowledge and model representation of the climate system, particularly cloud processes, convection and the hydrologic cycle, which should/could in principle be reduced by improving the models and the observing systems used to evaluate them. It is interesting to note that the range of climate sensitivity has not decreased in the last few generations of AOGCMs despite their enhancements in the representation of processes and their improved performance. Another source of uncertainty, which we refer to as "bias" uncertainty, is related to the presence of systematic model errors. Indeed, in principle it can be expected that model errors might affect the simulated changes. This is obviously a Knowledge type uncertainty that needs to be decreased.
The resolution of current AOGCMs used in scenario simulations (∼ 100 km) is still too coarse to produce the fine scale information needed for impact assessment studies. Therefore, this information can be downscaled to fine scales via a range of techniques based on the use of global uniform and variable resolution atmospheric models, RCMs, and a wide spectrum of statistical downscaling approaches [2]. All these downscaling techniques are driven by climate fields from the AOGCMs. Downscaling adds a further element of uncertainty related to the choice of downscaling approach and downscaling model. In addition, the downscaling approaches are also characterized by an internal variability, although this is comparatively small due to the constraints of the forcing fields from the global models.
We can now summarize the main sources of uncertainty in climate change projections as follows: GHG (and aerosol) emission and concentration scenario uncertainty; AOGCM model configuration, bias and internal variability uncertainty; and downscaling approach and configuration uncertainty. In the next section we will explore how such uncertainties can be quantitatively represented when producing a climate change projection. It should also be mentioned that future climate projections do not include the possible occurrence of major volcanic eruptions or future changes in solar activity, since these are extremely difficult, and in fact impossible, to predict based on current knowledge. In other words, natural forcings are assumed to be constant in the future. This adds another factor of uncertainty which is, however, extremely difficult to assess.

Representing uncertainty in climate change projections
In the previous section we have seen that climate change projections are affected by a cascade of several sources of uncertainty. How can this uncertainty be accounted for and represented when applied to impact assessment studies? The presence of "Intrinsic" uncertainty sources, such as the scenario and internal variability ones, essentially precludes a deterministic approach to the problem [6]. The unpredictable nature of future socio-economic and technological development and the non-linearities in the climate system (which determine its internal unforced variability) are such that it is essentially impossible to exactly predict what the climate of the 21st century will be [6]. This would be the case even if we had perfect climate models and observing systems; the imperfect knowledge embodied in present day models and observation systems further prevents a deterministic climate prediction.
This implies that the problem of climate change prediction has to be approached in a probabilistic way, by which we evaluate the range of possible outcomes and assign to each outcome a certain probability of occurring. From the technical point of view this can be achieved by producing Probability Density Functions (PDFs) of future climate (or climate change) variables. The width of the PDF (for example its standard deviation) is a measure of the overall uncertainty in the projection, and the PDF can be used in risk-based impact assessment studies in conjunction with the calculation of the costs (in the broad sense) of the impacts. The real change that the Earth's climate system will experience will then be one realization, hopefully falling within the PDF of the predicted outcomes.
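As a minimal sketch of this probabilistic framing, the following example builds a synthetic ensemble of end-of-century global warming outcomes (the distribution parameters are invented, not taken from any real model ensemble) and derives from it the spread and a tail-risk probability:

```python
import random
import statistics

# Hypothetical ensemble of end-of-century global warming outcomes (K).
# The distribution parameters are synthetic, chosen only to illustrate
# how a PDF summarizes projection uncertainty.
random.seed(42)
ensemble = [random.gauss(3.0, 0.8) for _ in range(20_000)]

mean = statistics.fmean(ensemble)
std = statistics.pstdev(ensemble)   # PDF width: the overall uncertainty measure
# Tail probability of a low-probability, high-impact outcome (here, > 4 K).
p_exceed = sum(t > 4.0 for t in ensemble) / len(ensemble)

print(f"mean warming      ~ {mean:.2f} K")
print(f"std dev (spread)  ~ {std:.2f} K")
print(f"P(warming > 4 K)  ~ {p_exceed:.2f}")
```

In a risk-based assessment, tail probabilities of this kind would be combined with the estimated costs of the corresponding impacts.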
An interesting question is whether it is desirable to provide a relatively narrow or wide PDF for application to impact studies. Again, this is an issue of Intrinsic vs. Knowledge uncertainty. Better research should lead to a reduction of Knowledge uncertainty, and thus a narrowing of the associated PDFs, e.g. due to improvements in models and observations along with a better understanding of processes. However, better research could also lead to an increase of Intrinsic uncertainty, and thus a broadening of PDFs, by a more complete exploration of all the possible outcomes, particularly in the tails of the distribution where low-probability, high-impact events are located. This illustrates well the apparent ambiguity of the uncertainty concept, by which unwanted uncertainty needs to be reduced, but intrinsic uncertainty needs to be fully explored, and thus possibly increased.
How can PDFs of climate change variables be generated? The most direct approach would be to produce a large enough number of simulations to sample the full uncertainty space. However, this space is multi-dimensional, since simulations would be required to cover multiple emission and/or concentration scenarios, AOGCMs, realizations with the same AOGCM, downscaling approaches, and models within each downscaling technique. This can easily lead to a multi-dimensional matrix of experiments including an unmanageably large number of climate change simulations [7]. Techniques therefore need to be developed to assess the uncertainty range in climate projections based on a sparsely populated simulation matrix.
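A back-of-envelope calculation shows how quickly this matrix grows; the counts below are plausible orders of magnitude chosen for illustration, not the design of any specific project:

```python
# Hypothetical sizes of each uncertainty dimension (illustrative only).
n_scenarios = 6        # emission/concentration scenarios
n_aogcms = 20          # global model configurations
n_realizations = 5     # initial-condition ensemble members per AOGCM
n_downscaling = 10     # RCMs / statistical methods per global run

# The dimensions multiply, so the full matrix is far beyond available resources.
total_runs = n_scenarios * n_aogcms * n_realizations * n_downscaling
print(f"century-long simulations needed for a full matrix: {total_runs}")
```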
One approach that has been used to address this issue is to include members in the matrix that sample the full estimated range of the uncertainty space, for example AOGCMs with a wide range of climate sensitivities. A first-order measure of the uncertainty range is then simply the inter-model spread of simulated outcomes. In this case each outcome in the spread is implicitly given the same probability of occurring, i.e. a flat PDF is assumed. When regionally downscaling AOGCM information, the AOGCMs with the largest and smallest climate sensitivity can then be used to drive nested RCMs, providing the bounds of the downscaled outcomes. The points in between these bounds can be filled through the interpolation of spatial patterns of climate change variables from available simulations, assuming these spatial patterns do not change drastically for the different cases. This assumption has been shown to work well for variables such as mean temperature and precipitation [8,9]; however, it may be less suitable for higher-order statistics such as variability and extremes. The approach described here was followed in the European project ENSEMBLES (http://ensembles-eu.metoffice.com/), in which relatively small ensembles of GCM and RCM simulations were utilized to produce PDFs of climate change over the European region [10].
Other studies have limited the exploration of the uncertainty space to individual dimensions through the completion of large ensembles of simulations for that dimension. For example, Murphy et al. [11] produced PDFs of global temperature change based on a relatively large ensemble of AOGCM runs (of order hundreds) in which the physics parameters of the model were changed within realistic bounds (the "perturbed physics ensemble" approach). Wigley and Raper [12] and Solomon et al. [13] generated PDFs of global warming through the use of simplified global climate models which, because of their simplicity, can be run to produce very large ensembles (Fig. 4).
Fig. 4. Example of global temperature change PDF produced via many thousands of simulations with a simple climate model (from [12]).
This approach has indeed allowed the exploration of multiple dimensions, for example emission scenario and global climate sensitivity, by completing many thousands of simulations. Similarly, the Climateprediction.net project has led to the completion of tens of thousands of coarse-resolution global model projections on computers worldwide, which has allowed the generation of climate change PDFs down to the sub-continental scale [14,15]. Finally, various statistical techniques, for example using Bayesian approaches, have been proposed to produce climate change PDFs based on relatively small ensembles of models (e.g. [10,16-20]).
One issue closely related to uncertainty is that of model weighting. Traditionally, when using a model ensemble for producing climate change projections, each model has been treated equally. It is possible, however, that some models might be more "credible", or "reliable", than others, so that their information should have a greater weight when producing projections. Model weighting generally leads to a reduction of unwanted uncertainty because the information used is associated with a reduced number of more credible "effective" models. This notion has led to a series of recent studies on how to calculate and assign weights to climate models.
The first attempt at climate model weighting and its application to the production of climate change PDFs was presented by Giorgi and Mearns [16,21], who developed the "Reliability Ensemble Averaging (REA)" method. In their method, model weights were based on two criteria, the model performance in reproducing present day climate ("model performance" criterion) and the level of model disagreement from the rest of the ensemble ("model convergence" criterion). The latter criterion was abandoned in an augmented version of the method because it artificially reduced the estimates of uncertainty [20]. In the original REA method the performance criterion was essentially based on the model biases in reproducing present day climatologies (e.g. for temperature and precipitation).

EPJ Web of Conferences
Xu et al. [20] later expanded the method by basing the weights on performance metrics including multiple variables and multiple statistics. The performance and convergence metrics of Giorgi and Mearns [21] were also used in the Bayesian method of Tebaldi et al. [17], while more recent weighting systems are mostly based on compounded performance metrics in reproducing different characteristics of present day climate [11,22].
Although model weighting is increasingly used as a tool to improve the assessment and quantification of uncertainties, it also has some important limitations that need to be stressed. First is the unavoidably subjective nature of the weighting approach. Because of the extremely large number of degrees of freedom in the climate system, it is essentially impossible to devise a comprehensive, universal performance metric. Therefore, a choice of performance metrics is unavoidable, as is the way the metrics are combined to produce a weighting function. Second, performance-based weighting assumes that better fidelity in representing present day climate implies greater credibility in the projection of future climates. This is certainly a necessary condition, but not obviously a sufficient one, because model parameters are often tuned to reproduce present day climate but may respond in an unrealistic way to climate conditions different from the present. In addition, this condition assumes that the climate change signal depends on the model systematic errors, i.e. on the bias uncertainty. If this is not the case, model weighting according to performance metrics will not lead to a significant improvement of the projections. Because of all these drawbacks the issue of model weighting is highly controversial, and in fact a clear advantage of using model weights has not yet been unambiguously demonstrated.
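A toy example of performance-based weighting, in the spirit of (but much simpler than) the published REA formulas: models with smaller present-day biases receive larger weights, pulling the ensemble-mean change toward the better-performing models. All biases and projected changes are invented for illustration:

```python
# Each entry: (absolute bias vs. a present-day climatology, K;
#              projected end-of-century change, K). All numbers invented.
models = {
    "GCM-A": (0.5, 2.8),
    "GCM-B": (1.5, 3.6),
    "GCM-C": (3.0, 4.4),
}

eps = 0.1  # floor so a near-perfect model does not get infinite weight
weights = {name: 1.0 / max(bias, eps) for name, (bias, _) in models.items()}
total_weight = sum(weights.values())

weighted_change = sum(
    weights[name] * change for name, (_, change) in models.items()
) / total_weight
unweighted_change = sum(change for _, change in models.values()) / len(models)

print(f"unweighted mean change: {unweighted_change:.2f} K")
# The weighted mean is pulled toward the low-bias model GCM-A.
print(f"weighted mean change:   {weighted_change:.2f} K")
```

Note how every choice here (the bias metric, the inverse-bias weighting, the floor `eps`) is exactly the kind of subjective decision criticized above.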
Another relevant technique within the context of climate change projections is so-called pattern scaling [8]. Pattern scaling is based on the notion that regional changes in some climatic variables are directly, and in fact sometimes linearly, related to the global temperature change. This notion is supported by previous studies (e.g. [8,9]) that indeed found such a linear relationship for quantities such as mean temperature and, to a lesser extent, precipitation. If such a relationship exists, it can be applied to PDFs of global temperature change (e.g. obtained from large ensembles with simple climate models) to retrieve PDFs of regional changes (e.g. [9]). Pattern scaling has indeed proven to be a surprisingly powerful and general tool for changes in mean variables; however, its applicability to higher-order climate statistics (variability and extremes) is less robust and appears more questionable in view of the greater effects of non-linearities on such statistics. Despite these limitations, pattern scaling might provide a suitable first-order and inexpensive approach to produce PDFs of regional climate change based on global warming PDFs.
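The scaling step itself is simple enough to sketch: a fixed regional response pattern (change per degree of global-mean warming) multiplies samples drawn from a global warming PDF to yield regional-change PDFs. The pattern values and the global PDF below are invented for illustration:

```python
import random

# Hypothetical scaling pattern: local change per 1 K of global-mean warming.
# Values invented; the negative precipitation scaling mimics a drying signal.
temp_scaling = 1.4      # K of regional warming per K of global warming
precip_scaling = -5.0   # % precipitation change per K of global warming

# Sample from a (synthetic) global warming PDF...
random.seed(1)
global_warming = [random.gauss(3.0, 0.8) for _ in range(10_000)]

# ...and scale each draw to obtain regional-change PDFs.
regional_temp = [temp_scaling * dT for dT in global_warming]
regional_precip = [precip_scaling * dT for dT in global_warming]

mean_dT = sum(regional_temp) / len(regional_temp)
mean_dP = sum(regional_precip) / len(regional_precip)
print(f"regional warming PDF mean ~ {mean_dT:.1f} K")
print(f"regional precip PDF mean  ~ {mean_dP:.1f} %")
```

The whole regional PDF is obtained at the cost of a multiplication, which is why pattern scaling is attractive when large dynamical ensembles are unaffordable.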

Illustrative examples of uncertainties in climate projections, from the global to the regional scale
We have seen in the previous sections that the primary sources of uncertainty in climate change projections are associated with emission scenarios (scenario uncertainty), model configuration (configuration, or inter-model, uncertainty), systematic biases (bias uncertainty), internal variability of the climate system and, when used, downscaling (dynamical or statistical). In this section we will try to assess the relative importance of these sources of uncertainty in different contexts.
Starting with temperature, Fig. 5 first shows the 21st century global warming projections by the ensemble of global models used in the IPCC Fourth Assessment Report [13]. Two sets of projections are shown for different emission scenarios. The colored shaded areas show envelopes of the transient projections obtained from AOGCMs (the so-called CMIP3 ensemble [23]) for three emission scenarios (low-end B1, intermediate A1B and high-end A2) and a scenario in which GHG concentrations are held constant at their 2000 value (the "commitment experiment"). The grey bars indicate the envelope of global warming projections by 2100 obtained from large ensembles of simulations with intermediate complexity climate models for six GHG emission scenarios. The width of the colored curves essentially measures the spread of the AOGCM simulations and is thus a measure of the model configuration uncertainty. It can be seen that, for the set of the CMIP3 AOGCMs, this is of the order of 1 °C by the end of the century. By comparison, the inter-scenario spread for the three scenarios analyzed is larger, of the order of 2 °C. Scenario uncertainty thus appears dominant for the end-of-century global warming in the CMIP3 ensemble. This is not the case for the first portion of the century, up to about 2030, when all the scenarios show similar trends well within the inter-model spread.
When looking at the larger simplified-model ensemble (grey bars), which includes a better exploration of the effects of parameter variations in the models, we see that the inter-model uncertainty spread actually increases, ranging from ∼ 2 °C for the B1 scenario to ∼ 4 °C for the most extreme scenario (A1FI). Combining all the information in Figure 5, we can conclude that the range of uncertainty in global warming by the end of the 21st century is about 5 °C, i.e. projections vary from ∼ 1 to ∼ 6 °C of global warming (these being the minimum and maximum values reached by the grey bars), and that the inter-model and scenario uncertainties have roughly comparable contributions to this uncertainty range (the former showing a somewhat larger contribution for the larger ensemble of simplified model simulations). Figure 5 also shows that the contributions of the different uncertainty sources vary with time in the 21st century. Early in the century the concentration scenario pathways and global warming responses do not differ much from each other, so that the inter-model uncertainty dominates, while later in the century the scenarios increasingly diverge, providing a more substantial, if not dominant, contribution to uncertainty.
As mentioned, a third source of uncertainty in global warming projections is associated with the internal variability of the climate system. The contribution of this uncertainty source can be assessed by carrying out ensembles of simulations with the same global model and different initial conditions, and studies have shown that, in general, this source of uncertainty is relevant in the early portion of the 21st century and very rapidly becomes negligible for later decades [24].
As we move from the global to the regional and local scale the picture changes substantially. The spatial and temporal variability of variables such as temperature and especially precipitation tends to increase at finer spatial and temporal scales [25]. This implies that the contribution of internal variability is expected to increase at smaller scales and as we consider shorter temporal scales (say, decades instead of multidecadal periods). The relative importance of the uncertainty due to scenario, model configuration and internal variability, both at global and regional scales, has been comprehensively assessed by Hawkins and Sutton [24] for decadal mean temperature using an analysis of CMIP3 global projections. This was done by analyzing a matrix of simulations with different scenarios, model configurations and initial conditions. Figure 6, adapted from their work, summarizes their results for surface air temperature, both globally and over the British Isles. At the global scale, model configuration and internal variability dominate for near-term projections, while scenario uncertainty is negligible. The scenario uncertainty quickly becomes important, if not dominant, for long-term, end-of-century projections, when the contribution of internal variability is negligible. The model configuration uncertainty remains relevant at all projection time scales. Going to the regional scale, the contributions of model configuration and especially internal variability generally increase, in particular for early-century projections, and the relative contribution of the scenario uncertainty decreases.
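The kind of partitioning behind Fig. 6 can be sketched as a simple analysis of variance over a simulation matrix spanning scenarios, models and initial conditions (this is a simplification, not the fit-based method actually used in [24]). The synthetic numbers only mimic the qualitative end-of-century result, with scenario uncertainty dominant and internal variability small:

```python
import statistics

def partition(runs):
    """runs[s][m] = list of realizations (K); return (scenario, model, internal) variances."""
    # Internal variability: spread across realizations, averaged over scenario/model.
    internal = statistics.fmean(
        statistics.pvariance(r) for ms in runs for r in ms
    )
    # Model configuration: spread of model means within each scenario, averaged.
    model = statistics.fmean(
        statistics.pvariance([statistics.fmean(r) for r in ms]) for ms in runs
    )
    # Scenario: spread of the scenario-mean warmings.
    scenario = statistics.pvariance(
        [statistics.fmean([statistics.fmean(r) for r in ms]) for ms in runs]
    )
    return scenario, model, internal

# Synthetic "end-of-century" matrix: 2 scenarios x 2 models x 3 realizations (K).
late_century = [
    [[2.9, 3.0, 3.1], [3.4, 3.5, 3.6]],  # low scenario: models A, B
    [[4.9, 5.0, 5.1], [5.4, 5.5, 5.6]],  # high scenario: models A, B
]
s_var, m_var, i_var = partition(late_century)
print(f"scenario {s_var:.3f}, model {m_var:.4f}, internal {i_var:.4f}")
```

Feeding the same function a near-term matrix, where the scenario means barely differ but realizations scatter widely, would reverse the ordering of the three components.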
Precipitation, another variable crucial for impact assessments, exhibits a much more complex spatial and temporal variability than temperature, and its response to global warming depends critically on the response of regional circulations to the GHG forcing. In addition, regional and local forcings can heavily modulate the precipitation change signal [3]. For this reason precipitation projections are characterized by much higher uncertainty than temperature projections, especially at the regional scale. While at the global scale precipitation is expected to increase in response to the increased evaporation and atmospheric water content of a warmer world, at the regional scale the sign of the precipitation change signal depends not only on the atmospheric moisture content but also on changes in circulation features (e.g. storm tracks and monsoons) and local effects (e.g. topography and land use). For example, precipitation is generally projected to increase over high-latitude regions and decrease over many sub-tropical regions in response to a poleward shift of mid-latitude storm tracks under increased GHG forcing. Figure 7 shows an example of global precipitation changes simulated by a set of different global models for the same emission scenario. All models project an increase in global precipitation, but this increase ranges from about 1% to 9% by the end of the 21st century. This can be taken as an approximate measure of uncertainty in global precipitation projections. When moving to the regional scale this uncertainty increases. Two examples are provided in Fig. 8, which shows seasonal mean temperature vs. precipitation changes over regions of sub-continental size for the latest set of CMIP3 models under the A1B scenario. Over the Mediterranean region in summer (Fig. 8(a)) all models agree in projecting a decrease in precipitation; however, a large spread in this decrease is found, from a few percent to -40%.
Therefore, although the qualitative projection of a precipitation reduction may be deemed relatively robust in the CMIP3 ensemble, its quantitative estimate appears highly uncertain. By comparison, Fig. 8(b) presents the inter-model spread of monsoon season temperature vs. precipitation change projections over the West Africa region. In this case the models are almost equally divided between projecting increases and decreases of precipitation, so that the sign of the precipitation change is highly uncertain. On the other hand, the actual spread is not large, so that Fig. 8(b) could also be interpreted as indicating, with relatively low uncertainty, no substantial GHG-forced change in monsoon precipitation over West Africa. Overall, it is clear from Fig. 8 that the spread of precipitation change estimates (and thus the inter-model uncertainty) can be much larger at the regional than at the global scale, with cases in which even the sign of the change is highly uncertain.
We should also emphasize that projections such as those in Fig. 8 need to be interpreted within the context of the relatively large multidecadal variability of regional precipitation [26]. In other words, the spread we observe in the different model results of Fig. 8 may be due not only to the response of the different models to the GHG forcing, but also to the internal variability of each model, by which a relatively dry or wet period might be simulated at the end of the 21st century essentially by chance and not as a result of the GHG forcing. By the same token, the evolution of the real climate at the end of the 21st century might lead to drier or wetter conditions compared to the present as a result of large regional variability and not as a response to the forcing. Disentangling the natural variability from the forced signal is more difficult for precipitation than for temperature, as it requires a large sample size to filter out the variability. In general, we can state that the uncertainty in the regional aspects of the hydrologic cycle is relatively high, and thus it is one of the main issues to be addressed when applying climate change information to impact assessment studies.
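The sampling problem described above can be sketched schematically: a weak forced trend embedded in large unforced variability may show the wrong sign in a single realization, while an ensemble average filters the noise out. The trend magnitude, noise level and ensemble size below are arbitrary illustrative choices, not estimates for any real region.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic regional precipitation anomaly series: a weak forced trend
# buried in large interannual variability (all values illustrative).
years = np.arange(2000, 2100)
forced = 0.002 * (years - years[0])          # forced signal: +0.2 per century
n_members = 30                               # hypothetical ensemble size
noise = rng.normal(0, 0.15, size=(n_members, years.size))
ensemble = forced + noise

# A single realization may show a spurious trend of either sign;
# the ensemble mean recovers the forced signal much more reliably.
single_trend = np.polyfit(years, ensemble[0], 1)[0]
mean_trend = np.polyfit(years, ensemble.mean(axis=0), 1)[0]
print(single_trend, mean_trend)
```

This is why, as noted in the text, a large sample size (many realizations, or long records) is needed before a regional precipitation change can be attributed to the forcing rather than to internal variability.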
Concerning the bias uncertainty, in a recent paper Giorgi and Coppola [27] studied the dependence of the model-projected change in a given variable (temperature or precipitation) on the corresponding model bias for the CMIP3 ensemble over 26 regions of sub-continental size. They calculated the inter-model correlation between changes and biases, the implication being that in cases where a significant correlation is found the model bias affects the change. For temperature no dependence of the change on the bias was found, implying that regional temperature changes are essentially regulated by the global model climate sensitivity. For precipitation a number of cases, though not the majority, showed a significant bias-change correlation. Giorgi and Coppola [27] thus concluded that, based on this admittedly limited and simple analysis, the model bias does not strongly affect the projected change down to the sub-continental scale, which implies that for the statistics and variables analyzed the bias uncertainty is likely of secondary importance compared to the other sources. More work is, however, needed to support this conclusion more generally.
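A test of this kind can be sketched as follows. The bias and change values are randomly generated placeholders, not results from [27]; the sketch only shows the computation of an inter-model bias-change correlation and a rough significance threshold for it.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical regional values for an ensemble of 20 models: present-day
# bias (model minus observations) and projected change -- synthetic numbers.
bias = rng.normal(0.0, 1.0, size=20)     # e.g. temperature bias [K]
change = rng.normal(3.0, 0.5, size=20)   # projected change [K]

# Inter-model correlation between bias and change; a correlation far from
# zero would suggest the bias contaminates the projected change.
r = np.corrcoef(bias, change)[0, 1]

# Rough two-sided 5% significance threshold for n models, using the
# t-distribution relation r_crit = t / sqrt(t**2 + n - 2).
n = bias.size
t_crit = 2.101                           # t(0.975, df = 18)
r_crit = t_crit / np.sqrt(t_crit**2 + n - 2)
print(f"r = {r:.2f}, |r| significant above {r_crit:.2f}")
```

Repeating this per region and season, and counting how often |r| exceeds the threshold, reproduces the kind of screening summarized above.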
Following now the chain of the uncertainty cascade of Fig. 1, AOGCM fields can be downscaled using both dynamical (e.g. RCM) and empirical/statistical methods to obtain finer scale regional to local climate information. This step adds to the uncertainty cascade, but what is its relative contribution? Since in the downscaling process the forcing fields from the AOGCM provide a strong constraint, e.g. when providing lateral boundary conditions for RCM simulations, it might be argued that this contribution is comparatively small. Under this hypothesis, for example, using two RCMs to downscale the same GCM would add only a small contribution to the overall uncertainty (e.g. compared to that of using different GCMs). To test this hypothesis, a suitable matrix of GCM-RCM experiments is needed.
Such an uncertainty estimate was conducted as part of the PRUDENCE project [28], in which simulations were carried out with 8 RCMs, 4 GCMs, two emission scenarios and different AOGCM realizations within the same scenario. The future period simulated was the late 21st century (2071-2100). Deque et al. [29] separated the different uncertainty sources, and Fig. 9 summarizes their results by showing the fractional contribution of each source to the overall uncertainty of the PRUDENCE projections for different variables (winter and summer temperature and precipitation) over the entire European region. In this case each uncertainty source (GCM, RCM, realization, or scenario) is essentially measured by the corresponding spread of the projected changes. In evaluating the results of Fig. 9 it should be stressed that the PRUDENCE experiments covered only about half of the full IPCC emission scenario range and likely under-sampled the AOGCM uncertainty due to the use of a small number of GCMs.
The basic message of Fig. 9 is that, while for temperature the scenario and AOGCM uncertainty sources clearly dominate, for precipitation, especially in summer, the RCM contribution is of comparable magnitude to the others. This is because summer precipitation is mostly of convective nature and thus strongly driven by local processes, such as boundary layer dynamics, buoyancy generation and land-atmosphere exchanges. Each RCM represents such processes differently, and this results in a relatively large spread among the models even when they use the same lateral boundary forcing fields. Previous work [30] has also shown that different downscaling techniques, e.g. RCMs and statistical downscaling methods, may lead to significantly different climate change estimates due to assumptions implicit in the statistical models. Another message of Fig. 9 is that the contribution of internal variability to late 21st century mean climate change is minor, which confirms previous findings. While this conclusion is consistently found for the mean change, it is worth mentioning that Giorgi and Francisco [31] found that for higher order statistics, and specifically interannual variability, the contribution of internal variability becomes more important.

Summary and conclusions
In this paper we discussed the contribution of different sources of uncertainty (scenario, model configuration, model bias, internal model variability and downscaling) to the overall uncertainty range in climate projections from the global to the regional scale. The conclusions from this analysis can be summarized as follows:
1) For late 21st century mean climate change projections the greatest sources of uncertainty are associated with emission/concentration scenarios and inter-model (AOGCM) configuration differences.
2) For early 21st century projections, the scenario uncertainty becomes secondary and the contribution of internal model variability becomes of primary importance.
3) The contribution of internal variability increases when going from the global to the regional scale and for higher order climate statistics.
4) Systematic model biases do not appear to strongly influence the projected changes in the majority of the regional temperature and precipitation cases analyzed.
5) In general, uncertainty is greater at the regional than at the global scale.
6) The contributions of the different uncertainty sources vary with temporal and spatial scale.
We have placed our discussion within the context of separating two kinds of uncertainty, what we have referred to as Knowledge and Intrinsic uncertainty. Knowledge (or "bad") uncertainty is due to our poor knowledge and model representation of the problem, and should be reduced as much as possible with improved science. Intrinsic (or "good") uncertainty is inherent in the problem and should thus be characterized as fully as possible (and therefore possibly increased) with improved science, in particular concerning the occurrence of low probability, high impact events. In our specific case, model configuration, bias and downscaling uncertainty mostly fall under the Knowledge uncertainty category, while scenario and internal variability uncertainty mostly fall under the Intrinsic uncertainty category.
Because of the Knowledge and Intrinsic uncertainty sources, it is clear that climate prediction cannot be addressed in a deterministic sense, but requires a probabilistic framework in order to quantitatively measure uncertainty. This measure could be the simple spread of the projected outcomes, which implies a flat climate change PDF, or some more elaborate measure, such as the standard deviation of the climate change PDF. This information is needed in impact assessment work to evaluate the consequences of the different possible climate change outcomes, possibly within a risk analysis framework.
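The difference between these measures can be made concrete with a small sketch. The ensemble values and the -20% drying threshold below are invented for illustration and do not correspond to any figure in this paper.

```python
import numpy as np

# Hypothetical end-of-century regional precipitation changes [%] from an
# ensemble of projections (illustrative values only).
changes = np.array([-38.0, -25.0, -22.0, -18.0, -15.0, -12.0, -8.0, -5.0])

# Two simple uncertainty measures discussed in the text:
spread = changes.max() - changes.min()   # full range -> implies a flat PDF
sigma = changes.std(ddof=1)              # std dev of the sampled change PDF

# A probabilistic statement usable in risk analysis, e.g. the fraction of
# ensemble members exceeding a (hypothetical) -20% drying threshold:
p_exceed = np.mean(changes <= -20.0)
print(spread, sigma, p_exceed)
```

The exceedance probability is the kind of quantity an impact assessment can feed directly into a risk framework, whereas the raw spread only bounds the possible outcomes.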
Clearly, the issue of uncertainty is at the heart of the climate change prediction problem and, given its complexity, both conceptual and when applied to specific impact issues, it will remain a central issue within the climate change debate [32]. A full characterization of uncertainty will require large ensembles of model projections, which in turn will require large international cooperative programs such as CMIP3 [23], ENSEMBLES, or the upcoming CMIP5 (http://cmip-pcmdi.llnl.gov/cmip5/) and CORDEX [33], in which climate change projections are carried out by a large number of models and laboratories worldwide in a coordinated fashion. I acknowledge the modeling groups, the Program for Climate Model Diagnosis and Intercomparison (PCMDI) and the WCRP's Working Group on Coupled Modelling (WGCM) for their roles in making available the WCRP CMIP3 multi-model dataset. Support of this dataset is provided by the Office of Science, U.S. Department of Energy. I also acknowledge the modeling groups participating in the PRUDENCE and ENSEMBLES projects for making available their data and results.