Climate: Past, Present & Future


God does not play DICE – but Bill Nordhaus does! What can models tell us about the economics of climate change?

Climate change has been described as “the biggest market failure in human history” [1]. Although fuel is costly, emitting the by-product CO2 is free; yet it causes damage to society. In other words, those who benefit by using the atmosphere as a waste dump do not pay the full costs, i.e. the adverse effects climate change has on societies on a global scale. Can this market failure be cured? Should humankind sacrifice some of its present welfare to prevent future climate damages? William Nordhaus was jointly awarded the Nobel Prize in economics for providing a framework to answer these questions.

DICE – the Dynamic Integrated model of Climate and the Economy [2] – combines a simple economic model with a simple climate model. The aim is not to fully cover all details of economic and climate processes, but rather to provide a model that is sufficiently simple to be used by non-specialists, including policy makers. Figure 1 shows a simplified structure of the DICE model.

Figure 1: Schematic illustration of the DICE model. The dark blue arrows correspond to the purely economic component of the model. The yellow and green arrows indicate how the economy impacts climate and vice versa. The light blue arrows illustrate the effect of climate policy.

The economy of the DICE model

The heart of DICE is an economic growth model (dark blue arrows in fig. 1). Economic production occurs when labour and capital are available. Labour is proportional to the world population, which is homogeneous and grows according to externally prescribed data. Part of the economic production is invested to create capital for the next time step, while the remaining part is consumed. It is assumed that the “happiness” (called utility in the jargon) of the population depends exclusively on consumption, in a sublinear fashion: the more you consume, the happier you are. However, if you are already rich, then one extra Euro will not increase your happiness as much as when you are poor.
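To make the “sublinear happiness” idea concrete, here is a minimal sketch of an isoelastic (CRRA) utility function of the kind used in DICE; the elasticity value is illustrative rather than Nordhaus’s exact calibration.

```python
import numpy as np

def utility(consumption_per_capita, elasticity=1.45):
    """Isoelastic (CRRA) utility: increasing but sublinear in consumption.
    The elasticity of 1.45 is only an illustrative value here."""
    if elasticity == 1.0:
        return np.log(consumption_per_capita)
    return (consumption_per_capita ** (1.0 - elasticity) - 1.0) / (1.0 - elasticity)

# Doubling consumption helps the poor more than the rich:
print(utility(2.0) - utility(1.0))    # utility gain when consumption goes from 1 to 2
print(utility(20.0) - utility(10.0))  # smaller gain when it goes from 10 to 20
```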

In this purely economic model, the only decision the world population has to take is the saving rate – the fraction of economic production to invest for the next period. If we invest too much, we reduce our current happiness; if we invest too little, we have too little to consume next period. Therefore, the aim is to find an optimal pathway to be reasonably happy now and in the future. However, there is a twist: observations suggest that we humans value the present more than the future. For example, if we are offered 1 Euro either now or next year, we would prefer to be paid now, even in the absence of inflation or increasing income. However, if offered 1 Euro now or 1.03 Euro next year, we might begin to prefer the delayed, but larger payment. The extra amount needed to make the later payment acceptable is called the “Rate of Pure Time Preference”; in our example, it is 3% [3, p.28]. A high rate of pure time preference basically means that we care much less about future welfare than about present welfare. If there is economic growth (which is the case in DICE), there is an additional reason to prefer being paid now rather than later: in the future you will be richer, so one additional Euro will mean less to you than now, while you are still relatively poor. This effect means that the total “discount rate”, defined as the extra payment needed to make a delayed payment attractive, is even higher than the rate of pure time preference [3, chapter 1].
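This combination of pure time preference and growth is often summarised by the Ramsey rule, which the discounting literature [3] builds on. The following sketch uses illustrative numbers, not Nordhaus’s exact calibration.

```python
def ramsey_discount_rate(pure_time_preference, elasticity, growth_rate):
    """Ramsey rule: discount rate = rho + eta * g, where
    rho = rate of pure time preference,
    eta = how fast marginal utility falls as we get richer,
    g   = growth rate of per-capita consumption."""
    return pure_time_preference + elasticity * growth_rate

# Illustrative values: rho = 1.5% per year, eta = 1.45, g = 2% per year
print(ramsey_discount_rate(0.015, 1.45, 0.02))  # about 0.044, i.e. ~4.4% per year
```

Even with a modest rate of pure time preference, the growth term pushes the overall discount rate well above it.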

 

The impact of climate change

To bring climate change into play, Nordhaus assumed that apart from labour and capital, economic production also requires energy. However, energy production causes CO2 emissions. Part of the CO2 ends up in the biosphere or in the ocean, but another part remains in the atmosphere, leading to global warming.

Practically everyone agrees that substantial warming will have damaging effects on the economy. Although there may not be “good” or “bad” temperatures a priori, ecosystems and human societies are adapted to the current climate conditions, and any (rapid) change away from what we are accustomed to will cause severe stress. For example, there may not be an “ideal” sea level, but a strong sea level rise – or fall – will cause severe strain on coastal communities that are adapted to the current level [4].

These damages are extremely hard to quantify. First, we obviously have no reliable empirical data – we simply have not yet experienced the economic damages associated with rapid warming by several degrees. Second, there could be “low chance, high impact” events [5], i.e. events that, to our current knowledge, are deemed unlikely even under climate change, but would have dramatic consequences if they occurred – for example, a collapse of large parts of the Antarctic ice sheet. Third, there are damages, like the loss of a beautiful glacial landscape or the human suffering inflicted by famine, which cannot be quantified in terms of money.

When formulating his Nobel prize-winning DICE model, William Nordhaus tried to solve the first problem by performing an extensive review of the (scarce) existing studies on climate-induced damages and greatly extrapolating the results. For example, if data was available on reduced wheat production in the Eastern US during a heat wave, Nordhaus might assume that the damage for all food crops in Africa is, say, twice as big (as Africa is more dependent on agriculture than the US). This may still be quite ad hoc, but one might argue that even rough data is better than no data at all. The second and third of the above points were largely circumvented with the “willingness-to-pay” approach [2]: people were asked how much they would pay to prevent the extinction of polar bears or the collapse of the Antarctic ice sheet, for example, and the price they named was used as a substitute for the damages associated with these events.

Finally, Nordhaus came up with an estimate for climate damage:

D = k₁T + k₂T²

where D is the damage as a fraction of GDP, T is the global mean temperature change, and k₁, k₂ are constants (k₁ = −0.0045 K⁻¹ and k₂ = +0.0035 K⁻²) [2, p. 207]. Note that k₁ < 0 implies that for small T, global warming is actually beneficial. Warming of 2.5 and 5 degrees yields damages of 1.1% and 6.5% of GDP, respectively. Later versions of DICE have k₁ = 0.
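As a quick sanity check, this quadratic can be evaluated directly; the minimal sketch below reproduces the damages quoted above.

```python
def damage_fraction(delta_T, k1=-0.0045, k2=0.0035):
    """Climate damage as a fraction of GDP for a global mean warming delta_T (in K),
    using the coefficients quoted above."""
    return k1 * delta_T + k2 * delta_T ** 2

for T in (2.5, 5.0):
    print(f"{T} K warming -> {damage_fraction(T):.1%} of GDP")
# 2.5 K -> about 1.1% of GDP, 5.0 K -> about 6.5% of GDP
```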

 

To reduce global warming, humanity can reduce its carbon emissions. In other words, part of the global economic production is sacrificed to pay for greener energy. This leaves less money to spend on consumption and/or investment in capital, but it also diminishes future climate damages. Therefore, in order to maximise “happiness”, two control variables must now be chosen at each time step: the saving rate and the emission reduction fraction. We want to reduce carbon emissions enough to avoid very dangerous climate change, but also avoid unnecessary costs.
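Schematically, one time step of such a model can be written as follows. This is a heavily stylised sketch of DICE-like bookkeeping, not the exact model equations; the functional forms and parameter values are illustrative assumptions.

```python
def one_period(capital, labour, productivity, saving_rate, abatement,
               warming, carbon_intensity,
               output_elasticity=0.3, depreciation=0.1, abatement_cost_coeff=0.05):
    """One stylised time step of a DICE-like model (illustrative only).
    `saving_rate` and `abatement` (both between 0 and 1) are the two control variables."""
    gross_output = productivity * capital**output_elasticity * labour**(1 - output_elasticity)

    damages = -0.0045 * warming + 0.0035 * warming**2        # damage fraction, as above
    abatement_cost = abatement_cost_coeff * abatement**2.6    # convex: deep cuts cost more per tonne
    net_output = gross_output * (1 - damages - abatement_cost)

    consumption = (1 - saving_rate) * net_output              # what makes people "happy" now
    next_capital = (1 - depreciation) * capital + saving_rate * net_output
    emissions = carbon_intensity * (1 - abatement) * gross_output
    return consumption, next_capital, emissions

# Example call with arbitrary numbers:
c, k_next, e = one_period(capital=100, labour=10, productivity=5,
                          saving_rate=0.25, abatement=0.2,
                          warming=1.0, carbon_intensity=0.4)
print(round(c, 1), round(k_next, 1), round(e, 1))
```

The optimisation then consists of choosing the saving rate and the abatement fraction in every period so that the discounted sum of the utility of consumption is as large as possible.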

 

 

Figure 2: Results of the DICE model, showing the optimal policy (i.e. maximising “happiness”) in the 2013 version of DICE. The blue lines indicate the optimal policy, while yellow lines indicate no climate policy (i.e. zero emission reduction). The first plot shows the emission reduction fraction or “abatement”, i.e. the fraction of carbon emissions that are prevented; 1 means that no CO2 is emitted. The second plot shows the atmospheric CO2 concentrations in ppmv. For the optimal policy, CO2 concentrations peak at 770 ppmv, whereas in absence of a policy, they rise beyond 2000 ppmv. The pre-industrial value is 280 ppmv. The third plot shows the global mean temperature change. For the optimal policy, it peaks at about 3.2 K, i.e. above the limit of 2 K or even 1.5 K agreed in the Paris agreement.

 

Results and Criticism

The results in fig. 2 show that under the “optimal” policy, i.e. the policy which maximises “happiness”, the Paris agreement will not be met. This result suggests that the costs required for keeping global warming below 1.5 or 2ºC are too high compared to the benefit, namely a strong reduction in climate damages. However, some researchers criticise that DICE severely underestimates the risks of climate change. For example, the damage function might be too low, and it does not explicitly take into account the risk of “low chance, high impact” events. Including such events, which occur with low probability but cause high damages if they do occur, leads to more stringent climate action [6].

The rate of pure time preference has given rise to even fiercer discussions [7,8,9]. As explained above, a society’s discount rate can be estimated from market interest rates [3]. Knowing the economic growth, we can infer the rate of pure time preference used in market decisions. Many economists argue that the rate of pure time preference in models like DICE should be chosen consistent with these observations [8]. Nordhaus followed this approach. However, one can argue that even if individuals care less for the future than for the present, this does not mean that such an approach is ethically defensible in the context of climate change. Individuals are mortal and may choose to consume their own savings before they die. But climate change is a global and intergenerational problem, and it has been argued [7,9] that we should care for future generations as much as for ourselves. On this view, the rate of pure time preference must be (nearly) zero. Note that this still allows for some discounting, on the grounds that if future generations are richer, they might be better able to deal with climate change.
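To see why this single parameter matters so much, compare the present value of a damage occurring a century from now under different rates of pure time preference. The numbers below are illustrative, and the growth-related part of the discount rate is ignored for simplicity.

```python
def present_value(future_damage, years, pure_time_preference):
    """Value today of a damage occurring `years` from now, discounted
    using only the rate of pure time preference."""
    return future_damage / (1 + pure_time_preference) ** years

damage = 1000  # an arbitrary damage 100 years from now, say in billions of Euros
for rho in (0.001, 0.015, 0.03):
    print(f"rho = {rho:.1%}: present value = {present_value(damage, 100, rho):.0f}")
# rho = 0.1% -> ~905: the future damage still counts almost fully today
# rho = 1.5% -> ~226
# rho = 3.0% -> ~52:  the same damage almost disappears from the optimisation
```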

Another reason for the relatively weak carbon emission reduction in DICE’s optimal policy may be that the model is too pessimistic concerning the future costs of emission reduction. For example, DICE does not include the learning-by-doing effect: the more we reduce emissions, the more efficient technologies we discover, and the cheaper it gets. In addition, the costs of green energy are partly one-time investments, e.g. restructuring the energy distribution grids, which are currently adapted to a few central energy providers, towards a more decentralised structure with smaller providers (e.g. households with solar panels). Once these (large) efforts have been made, the costs of green energy will decrease. But if DICE overestimates the costs of carbon emission reduction, it will be biased towards recommending low reductions.
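Learning-by-doing is usually described with a learning curve: each doubling of cumulative deployment or abatement cuts the unit cost by a fixed fraction. This effect is not part of DICE; the sketch below, with made-up numbers, only illustrates the idea.

```python
import math

def abatement_cost_per_tonne(cumulative_abatement, initial_cost=100.0,
                             learning_rate=0.2, initial_experience=1.0):
    """Illustrative learning curve: every doubling of cumulative abatement
    reduces the unit cost by `learning_rate` (here 20%). Not part of DICE."""
    doublings = math.log2(cumulative_abatement / initial_experience)
    return initial_cost * (1 - learning_rate) ** doublings

for experience in (1, 2, 4, 8, 16):
    print(experience, round(abatement_cost_per_tonne(experience), 1))
# 1 -> 100.0, 2 -> 80.0, 4 -> 64.0, 8 -> 51.2, 16 -> 41.0
```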

Due to these and many more issues, some researchers argue that models like DICE are “close to useless”, and even harmful, as they pretend to give precise instructions to policy makers while in fact they struggle with huge uncertainties [10]. In my opinion, models like DICE should not be used for precise policy recommendations like fixing the carbon tax, but they are still useful for a somewhat qualitative scenario exploration. For example, it can be fruitful to add “low chance, high impact” events or the learning-by-doing effect and investigate the qualitative effect on the optimal abatement.

Many more economy-climate models have been developed in the last decades, some of which are much more sophisticated than DICE. Moreover, there are many models focussing only on specific aspects of the problem, for example the details of the energy sector. This is still a very active field of research. So, however limited DICE may be, it has laid the foundations for a highly relevant scientific and societal discussion. And even if one should take its precise output with a pinch of salt, it is a valuable tool to help policy makers qualitatively grasp the essence of climate economics.

This post has been edited by the editorial board.

REFERENCES

[1] Nicholas Stern: “The Economics of Climate Change” (Richard T. Ely Lecture) http://darp.lse.ac.uk/papersdb/Stern_(AER08).pdf

[2] A thorough description of the model is given by William Nordhaus and Joseph Boyer, “Warming the World: Economic Models of Global Warming” (https://eml.berkeley.edu//~saez/course131/Warm-World00.pdf). There are newer model versions available, but the underlying concepts remain the same.

[3] A thorough introduction to discounting is given in this book: Christian Gollier, “Pricing the Future: The economics of discounting and sustainable development” (http://idei.fr/sites/default/files/medias/doc/by/gollier/pricing_future.pdf), especially chapter 1.

[4] see e.g. Wong, P.P., I.J. Losada, J.-P. Gattuso, J. Hinkel, A. Khattabi, K.L. McInnes, Y. Saito, and A. Sallenger, 2014: Coastal systems and low-lying areas. In: Climate Change 2014: Impacts, Adaptation, and Vulnerability. Part A:
Global and Sectoral Aspects. Contribution of Working Group II to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change https://www.ipcc.ch/pdf/assessment-report/ar5/wg2/WGIIAR5-Chap5_FINAL.pdf

[5] e.g. Lenton et al. “Tipping elements in the Earth’s climate system”, http://www.pnas.org/content/pnas/105/6/1786.full.pdf

[6] Cai et al., “Risk of multiple interacting tipping points should encourage rapid CO2 emission reduction”, https://www.nature.com/articles/nclimate2964.pdf?origin=ppub
[7] The Stern Review on the Economics of Climate Change (http://webarchive.nationalarchives.gov.uk/20100407172811/http://www.hm-treasury.gov.uk/stern_review_report.htm)

[8] Peter Lilley: “What’s wrong with Stern?” (http://www.thegwpf.org/content/uploads/2012/10/Lilley-Stern_Rebuttal3.pdf)

[9] Frank Ackermann: “Debating Climate Economics: The Stern Review vs. Its Critics” (http://www.ase.tufts.edu/gdae/Pubs/rp/SternDebateReport.pdf)

[10] Robert Pindyck, “The Use and Misuse of Models for Climate Policy”, https://academic.oup.com/reep/article/11/1/100/3066301

 

What can artificial intelligence do for climate science?


What is machine learning?

Artificial Intelligence, and its subfield of machine learning, is a much-discussed topic, as it plays an increasing role in our daily life. Examples are translation programs, speech recognition software in mobile phones and automatic completion of search queries. However, what value do these new techniques have for climate science? And how complicated is it to use them?

The idea behind machine learning is simple: a computer is not explicitly programmed to perform a particular task, but rather learns to perform a task based on some input data. There are various ways to do this, and machine learning is usually separated into three different domains: supervised learning, unsupervised learning and reinforcement learning. Reinforcement learning is of less interest to climate science, and will therefore not be touched upon here.

In supervised learning, the computer is provided both with the data and with some information about the data, i.e. the data is labeled. This means that each chunk of data (usually called one sample) has a label. This label can be a string (e.g. a piece of text), a number or, in principle, any other kind of data. The data samples could be, for example, images of animals, and the labels the names of the species. The machine learning program then learns to connect images with labels and, when successfully trained, can correctly label new images of animals that it has not seen yet. This basic idea is sketched in Figure 1.


Figure 1: A schematic of supervised machine learning in climate science.

In a climate context, the “image” might be a rough global representation of, for example, surface pressure, and the label some local phenomenon like strong rainfall in a small region. This is sketched in Figure 2. Some contemporary machine learning methods can decide which features of a picture are related to its label with very little or no prior information. This is quite comparable to certain types of human learning. Imagine being taught how to distinguish between different tree species by being shown several images of trees, each labeled with a tree name. After seeing enough pictures, you will be able to identify the tree species shown in images you had not seen before. How you managed to learn this may not be clear to you, but your brain manages to translate the visual input reaching your retina into information you can use to interpret and categorize subsequent visual inputs. This is exactly the idea of supervised machine learning: one presents the computer with some data and a description of the data, and then lets the computer figure out how to connect the two.

 

Figure 2: Example of using machine learning for predicting local rainfall.

 

In unsupervised learning, on the other hand, the machine learning program is presented with some data, without any additional information on the data itself (such as labels). The idea is that the program searches autonomously for structure or connections in the data. This might be, for example, certain weather phenomena that usually occur together (e.g. very humid conditions at one place A, and strong rainfall at place B). Another example is “typical” patterns of the surface temperature of the ocean. These temperature patterns look slightly different every day, but with machine learning we can find a small number of “typical” configurations – which can then help in understanding the climate.
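As a concrete sketch, a clustering algorithm such as k-means (here from scikit-learn, with random synthetic data standing in for daily sea-surface temperature maps) can group the days into a handful of “typical” patterns.

```python
import numpy as np
from sklearn.cluster import KMeans

# Synthetic stand-in for daily sea-surface temperature maps:
# 1000 "days", each flattened into a vector of 500 grid points.
rng = np.random.default_rng(0)
daily_fields = rng.normal(size=(1000, 500))

# Group the days into 4 "typical" configurations.
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(daily_fields)

typical_patterns = kmeans.cluster_centers_  # shape (4, 500): the four typical maps
which_pattern = kmeans.labels_              # which typical map each day resembles most
print(typical_patterns.shape, which_pattern[:10])
```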

How difficult is it to implement machine learning techniques?

Machine learning techniques often sound complicated and forbidding. However, due to the widespread use of many machine learning techniques both in research and in commercial applications, there are many publicly available, user-ready implementations. A good example is the popular Python library scikit-learn [1]. With this library, classification or regression models based on a wide range of techniques can be constructed with a few lines of code. It is not necessary to know exactly how the algorithm works. If one has a basic understanding of how to apply and evaluate machine learning models, the methods themselves can to a large extent be treated as black boxes. One simply uses them as tools to address a specific problem, and checks whether they work.
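For illustration, here is a minimal supervised example with scikit-learn. The data is randomly generated; in a climate application, the rows might instead be flattened pressure maps and the labels “heavy rain” / “no heavy rain” at some location.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic data: 2000 samples with 50 features each, plus binary labels.
rng = np.random.default_rng(42)
X = rng.normal(size=(2000, 50))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # labels depend on the first two features

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)                            # "training": connect data and labels
print(accuracy_score(y_test, model.predict(X_test)))   # skill on samples never seen before
```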

What can machine learning do for climate science?

By now you are hopefully convinced that machine learning methods are 1) widely used and 2) quite easy to apply in practice. This however still leaves the most important question open: can we actually use them in climate science? And even more importantly: can they help us in actually understanding the climate system? In most climate science applications, machine learning tools can be seen as engineering tools. Take for example statistical downscaling of precipitation. Machine learning algorithms are trained on rainfall data from reanalyses and in-situ observations, and thus learn how to connect large-scale fields and local precipitation. This “knowledge” can then be applied to low-resolution climate simulations, providing an estimate of the local precipitation values that was not available in the original data. A similar engineering approach is “short-cutting” expensive computations in climate models, for example in the radiation schemes. If trained on a set of calculations performed with a complex radiation scheme, a machine learning algorithm can then provide approximate solutions for new climatic conditions and thus avoid the need to re-run the scheme at every time step in the real model simulation, making it computationally very efficient.
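The emulation idea can be sketched as follows, with a cheap toy function standing in for an expensive radiation scheme: train a regressor on precomputed input/output pairs, then use it in place of the scheme for new conditions.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def expensive_scheme(inputs):
    """Toy stand-in for a costly physical parameterisation."""
    return np.sin(inputs[:, 0]) + 0.1 * inputs[:, 1] ** 2

rng = np.random.default_rng(1)
training_inputs = rng.uniform(-3, 3, size=(5000, 2))
training_outputs = expensive_scheme(training_inputs)  # run the "expensive" scheme once, offline

emulator = RandomForestRegressor(n_estimators=50, random_state=0)
emulator.fit(training_inputs, training_outputs)

new_conditions = rng.uniform(-3, 3, size=(5, 2))
print(emulator.predict(new_conditions))  # fast approximation
print(expensive_scheme(new_conditions))  # reference answer, for comparison
```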

However, next to this engineering approach, there are also ways to use machine learning methods in order to actually gain new understanding. For example, by systematically changing the extent of the input data, one can try to find out which part of the data is relevant for a specific task – for example, “which parts of the atmosphere provide the information necessary to predict precipitation/wind speeds above a specific city and at a specific height?”

As a final point, artificial intelligence and machine learning techniques are widely used in research and industry, and will evolve in the future independently of their use in climate research. Therefore, they provide an opportunity to get new techniques “for free”: an opportunity which can and should be used.

[1] scikit-learn.org

 

This article has been edited by Gabriele Messori and Célia Sapart.

Pollen, more than forests’ story-tellers

Name of proxy

Sporomorphs (pollen grains and fern spores)

Type of record

Biostratigraphy and Geochronology markers, Vegetation dynamics

Paleoenvironment

Terrestrial environment

Period of time investigated

Present to 360 million years

How does it work?

Sporomorphs (pollen grains and fern spores) are cells produced by plants for reproduction. They are microscopic (less than a fifth of a millimeter) and contain a molecule called sporopollenin in their cell wall, which is very resistant to degradation. Sporopollenin allows sporomorphs to be preserved in sedimentary archives such as lake sediments or peat deposits.

These reproductive structures appeared during the Paleozoic (570 million years ago), but the first spores looked rather similar to one another and could not be distinguished among species. The later speciation of plants promoted the diversification of the reproductive cells between species and brought the opportunity to relate the fossil sporomorphs found in sedimentary archives to the parental plants that produced them.

Figure 1. Plant communities are different depending on a wide range of environmental conditions. Above: Andean grasslands (páramo) in Ecuador. Below: Swamp forest in Orinoco Delta (Venezuela).

Plants are immobile organisms, and each species has its own tolerance range to the existing environmental conditions. The occurrence of certain plant communities in a specific environment depends on their different tolerance ranges. For instance, we do not observe today the same plants growing in the tropical rainforests of South America as in the polar tundra (Figure 1). Paleopalynology is the discipline that helps characterize which plant species have occurred at a specific location during a particular time period. This provides information on the environmental conditions of the studied region. To identify the different species, palynologists analyze under the optical microscope the specific features of the sporomorphs’ cell walls. They look at, e.g., the presence of spines or air sacs, or the number of apertures that the pollen grain has (Figure 2). These features are specific to each plant, which allows relating a pollen grain found in a sedimentary archive to the plant that produced it at the study location at a particular period of time.

Figure 2. Pollen grains have very different morphologies that allow identification of the plants that produced them. A: Byttneria asterotricha (Sterculiaceae); B: Triplaris americana (Polygonaceae); and C: Calyptranthes nervata (Myrtaceae). Bar scales in the pictures represent 25 micrometers.

What are the key findings made using this proxy?

Paleopalynology has a wide range of applications in geoscience. For instance, the presence of specific sporomorphs has been used as a chronological marker to pinpoint several geological periods, especially in deep-time biostratigraphy (millions of years ago) (Salard-Cheboldaeff 1990).

In palaeoecology (the ecology of past ecosystems), the analysis of fossil sporomorphs helps specify the dynamics of vegetation communities through time. This type of work was started a century ago by Lennart von Post (1916) and provided the opportunity to study the natural trends of plant populations and communities over temporal frames appropriate for long-lived species (i.e. tree species such as pines or oaks can live for several centuries). Moreover, it provides unique empirical evidence of the actual responses of vegetation to disturbances that occurred in the past, e.g. natural hazards, human land use and other anthropogenic impacts, or climatic shifts.

For instance, regarding past climates, paleopalynology allowed us to:

i) understand the independent behavior of species during glacial cycles (i.e., individual species responded to changes, while the plant community as a unit did not), forming new plant communities each time (Davis 1981; Williams and Jackson 2007);

ii) map the re-colonization events and the assemblages formed from the last deglaciation until the vegetation communities we observe today (Giesecke et al. 2017).

In addition, in some characteristic environments, such as mountain regions, the occurrence and disappearance of specific species allow the estimation of temperature changes with respect to present-day conditions. Another example has been developed in the last decade: the study of organic compounds contained in the sporopollenin of the sporomorphs’ walls, which has been identified as an accurate proxy recording UV-B signals (Fraser et al. 2014). As UV-B radiation is related to solar irradiation trends through time, reconstructing the variations of these organic compounds in the sporomorphs’ walls allows reconstituting past solar irradiation trends in continuous archives such as lake and peat deposits.

All this shows that, despite being tiny structures, pollen grains are the story-tellers of how the planet has been changing through history and can provide a wide range of information essential for the geosciences.

References

Davis, M.B. (1981). Quaternary history and the stability of forest communities. In: West, D.C., Shugart, H.H., Botkin D.B. (Eds.) Forest succession. New York, NY: Springer-Verlag.

Fraser, W.T., Lomax, B.H., Jardine, P.E., Gosling, W.D., Sephton, M.A. (2014). Pollen and spores as a passive monitor of ultraviolet radiation. Frontiers in Ecology & Evolution 2: 12.

Giesecke, T., Brewer, S., Finsinger, W., Leydet, M., Bradshaw, R.H.W. (2017). Patterns and dynamics of European vegetation change over the last 15,000 years. Journal of Biogeography 44: 1441-1456.

Salard-Cheboldaeff, M. (1990). Intertropical African palynostratigraphy from Cretaceous to late Quaternary times. Journal of African Earth Sciences 11: 1-24.

Von Post, L. (1916). Om skogsträdpollen i sydsvenska torfmosslagerföljder. Geol. Fören. Stockh. Förhandlingar 38: 384-390.

Williams, J.W., Jackson, S.T. (2007). Novel climates, no-analog communities, and ecological surprises. Frontiers in Ecology and the Environment 5: 475-482.

                                                                                                                           Edited by Célia Sapart and Carole Nehme

How to reconstruct past climates from water stable isotopes in polar ice cores?


Ice cores are a favored archive for studying past climates, because they provide a number of indications about the history of the climate and of the atmospheric composition. Among these, water stable isotopes are considered a very reliable temperature proxy. Yet, their interpretation is sometimes more complicated than a simple one-to-one correspondence with local temperature and requires intercomparison with other proxy records, as various processes affect the signal found in the ice cores.

How does it work?

Not all water molecules are equal: some are heavier because one of their atoms is substituted with a heavier counterpart (the standard oxygen atom, ¹⁶O, can be substituted by ¹⁷O or ¹⁸O, whereas the hydrogen (H) can be substituted by deuterium (D = ²H)). These molecules (e.g. H₂¹⁶O, H₂¹⁸O and HD¹⁶O) are called isotopologues (or isotopes for short, although this is technically inaccurate), and each have different physical properties. As a result, the molecules react differently to external factors, leading to fractionation (processes that affect the relative abundance of the isotopes). The isotopic composition (commonly referred to as δ¹⁸O, δ¹⁷O and δD) of snow is governed by fractionation from the evaporation site, where the moisture first enters the atmosphere, to the precipitation site, where it is deposited on the ground (Dansgaard 1953) (see video below). First, over the ocean, the heavier isotopes are less likely to take part in the formation of moisture, leading to lower concentrations of the heavy isotopes in the clouds compared to the mean oceanic water isotopic composition (a lower concentration means that the δ¹⁸O is more negative). Then, as the air masses move toward the poles, the temperature decreases, leading to precipitation (either liquid or solid). The heavier isotopes are preferentially found in the condensed phase rather than in the remaining vapour, which depletes the cloud even more of its heavy isotopes. Finally, in remote areas of the polar regions, the isotopic content of the final precipitation results from successive precipitation events. Since, as we saw, the precipitation contents in each isotope are different, each successive precipitation event will have a different isotopic composition, with the final one containing the fewest heavy isotopes – a process called distillation. At each step of the moisture’s path from ocean to cloud to precipitation, the isotopic fractionation is strongly influenced by temperature. This leads to a temperature signal in the isotopic composition of both vapour and precipitation.

This video shows how the isotopic fractionation at each step of the water cycle (from the evaporation over the ocean to the location where the precipitation occurs) is integrated, giving the temperature and humidity sensitivity of the isotopic content of the precipitation (modified from Casado (2016)).
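The distillation effect can be illustrated with a simple Rayleigh model. This is a strong idealisation (a single air mass, a constant fractionation factor, no re-evaporation), and the numbers below are only indicative.

```python
def rayleigh_delta18O(fraction_remaining, delta0=-10.0, alpha=1.0099):
    """delta-18O (in permil) of the vapour left in an air mass after a fraction
    (1 - fraction_remaining) of its initial moisture has rained or snowed out.
    Idealised Rayleigh distillation with a constant fractionation factor alpha;
    alpha = 1.0099 is only indicative of liquid-vapour fractionation."""
    return (1000.0 + delta0) * fraction_remaining ** (alpha - 1.0) - 1000.0

for f in (1.0, 0.5, 0.2, 0.05):
    print(f"{f:.2f} of the moisture left -> vapour delta-18O = {rayleigh_delta18O(f):.1f} permil")
# The less moisture remains (i.e. the colder and more poleward the air mass),
# the more negative the delta-18O of the vapour, and hence of the snow, becomes.
```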

Over glaciers and ice sheets, snow accumulates and can remain preserved for hundreds of thousands of years. Thus, analysing the isotopic composition of these successive layers enables us to retrieve past temperatures. Considering the very low amount of water necessary to obtain a measurement, analysing an ice core can provide continuous and high-resolution time series of past climatic variations.

A classic way to retrieve temperature from isotopic composition data is to use the spatial relationship between the δ¹⁸O of surface snow and surface temperature (e.g. Lorius and Merlivat (1975) for Antarctica). That is, one measures simultaneously the present-day temperature and the δ¹⁸O of surface snow across the study area, and makes use of their linear spatial relationship to infer past temperatures from the δ¹⁸O of the ice core. However, one should keep in mind two main limitations when using such a method. First, it assumes that the spatial relationship between δ¹⁸O and temperature is a good surrogate for the temporal δ¹⁸O versus temperature relationship. However, this link is known to change with time (and hence with depth of the snow deposit). Second, processes occurring after the snow has fallen, such as sublimation or blowing wind, can affect the way the snow is layered in the ice core, as well as its isotopic composition.
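In practice, this amounts to fitting a straight line to present-day (temperature, δ¹⁸O) pairs and inverting it for the ice-core values. The sketch below uses made-up surface data, so the resulting slope is only illustrative; as discussed above, applying a single spatial slope back in time is itself a strong assumption.

```python
import numpy as np

# Made-up present-day data: mean annual temperature (deg C) and delta-18O of
# surface snow (permil) at several sites along a hypothetical traverse.
site_temperature = np.array([-55.0, -50.0, -45.0, -40.0, -35.0, -30.0])
site_delta18O    = np.array([-57.0, -53.5, -49.0, -45.5, -41.0, -37.5])

# Spatial calibration: linear fit delta18O = a * T + b
a, b = np.polyfit(site_temperature, site_delta18O, deg=1)
print(f"spatial slope: {a:.2f} permil per degree C")

# Invert the relationship to turn an ice-core delta-18O value into a temperature.
ice_core_delta18O = -52.0
reconstructed_temperature = (ice_core_delta18O - b) / a
print(f"reconstructed temperature: {reconstructed_temperature:.1f} deg C")
```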

Resolution and noise

The local accumulation (expressed in cm of snow per year) is a determining factor for both the time span of an ice core record and the maximum resolution that can be achieved. As the present-day ice thickness is capped between 3 and 4 km, it is necessary to choose a site with low accumulation to obtain an ice record spanning several glacial/interglacial (i.e. colder and warmer) cycles (Fischer et al. 2013). For instance, the NGRIP ice core in Greenland was retrieved in one of the thickest parts of the Greenland ice sheet (see Figure 1b). Similarly, the Dome C ice core (the ice core extending the furthest into the past to date, 800,000 years before present) was retrieved in an area of Antarctica where the ice sheet is very thick (more than 3000 meters) and the accumulation very low (roughly 2.5 cm per year).

Figure 1: Greenland and Antarctic ice core sites: (a) Isotopic signal from the NGRIP, (b) and (c) maps of ice thickness in Greenland and Antarctica, respectively, and (d), isotopic signal from the Dome C. The isotopic signal for both sites is presented against the depth (left) and with the associated age model (right), warm periods are shown in grey to indicate the correspondence between age and depth in the ice cores (Casado et al, 2017).

For sites with low accumulation, the snow stays exposed at the surface for a long time. Hence, the initial precipitation signal is modified by local processes occurring after the snow has fallen (Ekaykin et al. 2002). This prevents proper recording of the signal at timescales below several years for sites with accumulation lower than 8 cm per year (Münch et al. 2016). Deeper in the ice, diffusion processes smooth the isotopic composition time series, erasing part of the climatic signal (Johnsen 1977). This limits the interpretation of ice core records at time scales smaller than a few decades. Finally, retrieving a temperature signal at high resolution for longer time scales remains a challenge because of the varying relationship between δ¹⁸O and temperature deeper in the ice. The first limitation is the accumulation rate itself, which is typically lower during glacial periods. The temporal resolution also decreases with depth as the ice thins due to the increasing pressure exerted by the overlying ice layers. As illustrated in Fig. 1, the number of years per meter generally increases with the depth of the record, from roughly 20 years per meter at the top of the core at Dome C up to 1,400 years per meter for glacial periods at the bottom. Overall, the variability found in single ice core records combines both the climate variability and several signatures from other local processes affecting the snow.

Isotope-temperature calibration

The temperature signal retrieved from δ¹⁸O can be tested against independent temperature time series, such as borehole temperature measurements at the ice core site, to aid in reconstructing the correct δ¹⁸O versus temperature relationship (the so-called “calibration” process). The temperature of the borehole from which the ice core was extracted is measured at different depths. Small variations in these temperatures provide a reliable but low-resolution measurement of past temperature changes, as the ice is a good thermal insulator. Measurements performed in Greenland have suggested that the use of the spatial δ¹⁸O versus temperature relationship described above underestimates by a factor of two the magnitude of the temperature change between the Last Glacial Maximum (LGM, the last period when ice sheets were at their maximum extent) and the present day (Cuffey et al. 1994). Jouzel et al. (2003) confirmed this using computer simulations, and further showed that the δ¹⁸O versus temperature relationship does not remain constant over time. This large variability can be due to differences in the large-scale atmospheric circulation, the vertical structure of the atmosphere, the seasonality of precipitation, or modifications of the location of the moisture source regions or of their climatic conditions.

Figure 2: Relationships between isotopes and temperature for different locations and timescales (indicated along the horizontal axis). A higher value would lead to a higher estimated temperature difference for the same δ¹⁸O difference (Casado et al., 2017).

Calibration of the isotopic paleothermometer is therefore essential, and is realised through different methods. Station data can be used at the seasonal and interannual scales, isotopes of other gases at decadal to centennial scales (Guillevic et al. 2013) and borehole temperature measurements at millennial scales (Orsi et al. 2017). Finally, climate models which represent isotopic processes (called “isotope-enabled”) can be used to infer the isotope-temperature relationship with a direct control on the time scale and on the period. For instance, Sime et al. (2009) highlighted that for warm interglacial conditions, the isotope-temperature relationship can become non-linear whereas it is not the case for cooler (glacial) conditions.

The δ¹⁸O versus temperature relationships found in the literature (Fig. 2) span values ranging from 0.2 ‰/°C to 1.5 ‰/°C. From this compilation, it is clear that a more complex framework than a simple linear regression to temperature is necessary to interpret the isotopic signal.

Conclusions

While water isotopes from ice core records are insightful tools to reconstruct past climates, there are fundamental limits to their power of reconstruction.

The above therefore calls for a careful use of isotopic records when these time series are used for general inferences about the climate system (e.g. Huybers and Curry (2006)). A possible way forward is to use isotope-enabled global climate models (Sime et al. 2009). A complementary approach is to undertake process field studies (Casado et al. 2016), which can help to evaluate how the isotopic signal is modified after deposition, and how the relationship between isotopes and temperature is altered at the seasonal and interannual timescales.

The Beyond EPICA – Oldest Ice project plans to retrieve an ice core in Antarctica from which over 1.5 million years of climatic record could be recovered. This will enable us to go further back in time than the 800,000-year-old ice core obtained at Dome C and would thus be a breakthrough for studying the changes in orbital forcing during the mid-Pleistocene transition (900 to 1,200 thousand years ago), during which the glacial-interglacial cycles shifted from lasting 41,000 years on average to 100,000 years.