Climate: Past, Present & Future

Guest

This guest post was contributed by a scientist, student or a professional in the Earth, planetary or space sciences. The EGU blogs welcome guest contributions, so if you've got a great idea for a post or fancy trying your hand at science communication, please contact the blog editor or the EGU Communications Officer to pitch your idea.

God does not play DICE – but Bill Nordhaus does! What can models tell us about the economics of climate change?

Climate change has been described as “the biggest market failure in human history” [1]. Although fuel is costly, emitting the by-product CO2 is free, yet it causes damage to society. In other words, those who benefit from using the atmosphere as a waste dump do not pay the full costs, i.e. the adverse effects climate change has on societies worldwide. Can this market failure be cured? Should humankind sacrifice some of its present welfare to prevent future climate damages? William Nordhaus was jointly awarded the Nobel Prize in economics for providing a framework to answer these questions.

DICE – the Dynamic Integrated model of Climate and the Economy [2] – combines a simple economic model with a simple climate model. The aim is not to fully cover all details of economic and climate processes, but rather to provide a model that is sufficiently simple to be used by non-specialists, including policy makers. Figure 1 shows a simplified structure of the DICE model.

Figure 1: Schematic illustration of the DICE model. The dark blue arrows correspond to the purely economic component of the model. The yellow and green arrows indicate how the economy impacts climate and vice versa. The light blue arrows illustrate the effect of climate policy.

The economy of the DICE model

The heart of DICE is an economic growth model (dark blue arrows in Fig. 1). Economic production occurs when labour and capital are available. Labour is proportional to the world population, which is treated as homogeneous and grows according to externally prescribed data. Part of the economic production is invested to create capital for the next time step, while the remaining part is consumed. It is assumed that the “happiness” (called utility in the jargon) of the population depends exclusively on consumption, in a sublinear fashion: the more you consume, the happier you are. However, if you are already rich, then one extra Euro will not increase your happiness as much as when you are poor.

In this purely economic model, the only decision the world population has to take is the saving rate – the fraction of economic production to invest for the next period. If we invest too much, we reduce our current happiness; if we invest too little, there will be less to consume in future periods. The aim is therefore to find an optimal pathway that makes us reasonably happy now and in the future. However, there is a twist: observations suggest that we humans value the present more than the future. For example, if we are offered 1 Euro either now or next year, we would prefer to be paid now, even in the absence of inflation or increasing income. However, if offered 1 Euro now or 1.03 Euro next year, we might begin to prefer the delayed, but larger, payment. The extra amount needed to make the later payment acceptable is called the “rate of pure time preference”; in our example, it is 3% [3, p. 28]. A high rate of pure time preference basically means that we care much less about future welfare than about present welfare. If there is economic growth (which is the case in DICE), there is an additional reason to prefer being paid now rather than later: in the future you will be richer, so one additional Euro will mean less to you than it does now, while you are still relatively poor. This effect means that the total “discount rate”, defined as the extra payment needed to make a delayed payment attractive, is even higher than the rate of pure time preference [3, chapter 1].
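To make the effect of discounting concrete, here is a minimal numerical sketch in Python. The parameter values are assumed purely for illustration (they are not the ones used in DICE); it combines the rate of pure time preference with the growth-related term described above into a Ramsey-type discount rate of the kind discussed in [3].

```python
# Minimal illustration of discounting (all values assumed for illustration only).
# Ramsey-type discount rate: r = rho + eta * g, where rho is the rate of pure time
# preference, g the growth rate of per-capita consumption, and eta measures how
# quickly an extra Euro loses value as people get richer.
rho = 0.015   # 1.5 % per year pure time preference (assumed)
eta = 1.5     # elasticity of marginal utility (assumed)
g = 0.02      # 2 % per year consumption growth (assumed)
r = rho + eta * g   # total discount rate: 4.5 % per year

for years in (10, 50, 100):
    present_value = 1.0 / (1.0 + r) ** years
    print(f"1 Euro of climate damage in {years:3d} years is worth {present_value:.2f} Euro today")
```

With these (assumed) numbers, damages occurring a century from now are worth only about one cent per Euro today – which is why the choice of the rate of pure time preference matters so much for a model’s recommendations.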

 

The impact of climate change

To bring climate change into play, Nordhaus assumed that apart from labour and capital, economic production also requires energy. However, energy production causes CO2 emissions. Part of the CO2 ends up in the biosphere or in the ocean, but another part remains in the atmosphere, leading to global warming.

Practically everyone agrees that substantial warming will have damaging effects on the economy. Although there may not be “good” or “bad” temperatures a priori, ecosystems and human societies are adapted to the current climate conditions, and any (rapid) change away from what we are accustomed to will cause severe stress. For example, there may not be an “ideal” sea level, but strong sea level rise – or fall – will cause severe strain on coastal communities that are adapted to the current level [4].

These damages are extremely hard to quantify. First, we obviously have no reliable empirical data – we simply have not yet experienced the economic damages associated with rapid warming by several degrees. Second, there could be “low chance, high impact” events [5], i.e. events that, to our current knowledge, are unlikely even under climate change, but that would have dramatic consequences if they occurred – for example, a collapse of large parts of the Antarctic ice sheet. Third, there are damages, like the loss of a beautiful glacial landscape or the human suffering inflicted by famine, which cannot be quantified in terms of money.

When formulating his Nobel prize-winning DICE model, William Nordhaus tried to solve the first problem by performing an extensive review of the (scarce) existing studies on climate-induced damages and greatly extrapolating the results. For example, if data were available on reduced wheat production in the eastern US during a heat wave, Nordhaus might assume that the damage to all food crops in Africa is, say, twice as large (as Africa is more dependent on agriculture than the US). This may still be quite ad hoc, but one might argue that even rough data are better than no data at all. The second and third of the above points were largely circumvented with the “willingness-to-pay” approach [2]: people were asked how much they would pay to prevent the extinction of polar bears or the collapse of the Antarctic ice sheet, for example, and the price they named was used as a substitute for the damages associated with these events.

Finally, Nordhaus came up with an estimate for climate damage:

D = k₁T + k₂T²

where D is the damage as a fraction of GDP, T is the global mean temperature change, and the constants are k₁ = -0.0045 K⁻¹ and k₂ = +0.0035 K⁻² [2, p. 207]. Note that k₁ < 0 implies that for small T, global warming is actually beneficial. Warming of 2.5 K and 5 K yields damages of 1.1% and 6.5% of GDP, respectively. Later versions of DICE have k₁ = 0.
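As a quick sanity check, the damage function can be evaluated directly (a two-line sketch in Python using the coefficient values quoted above):

```python
# Evaluate the damage function D = k1*T + k2*T^2 with the coefficients quoted above.
k1, k2 = -0.0045, 0.0035   # in 1/K and 1/K^2, respectively
for T in (2.5, 5.0):
    D = k1 * T + k2 * T ** 2
    print(f"Warming of {T} K -> damages of {100 * D:.1f} % of GDP")   # ~1.1 % and ~6.5 %
```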

 

To reduce global warming, humanity can reduce its carbon emissions. In other words, part of the global economic production is sacrificed to pay for greener energy. This leaves less money to spend on consumption and/or investment in capital, but it also diminishes future climate damages. Therefore, to maximise “happiness”, two control variables must now be chosen at each time step: the saving rate and the emission reduction (abatement) fraction. We want to reduce carbon emissions enough to avoid very dangerous climate change, but also to avoid unnecessary costs.
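To make the structure of this decision problem concrete, here is a deliberately crude sketch in Python. All functional forms and parameter values are invented for illustration and are not Nordhaus’s calibration; the point is only to show the two control paths – the saving rate and the abatement fraction – being chosen to maximise the discounted sum of utilities.

```python
# Toy DICE-like optimisation (illustrative only; all numbers are made up).
import numpy as np
from scipy.optimize import minimize

N = 20                                   # number of 5-year periods
dt = 5.0                                 # years per period
rho = 0.015                              # rate of pure time preference (assumed)
discount = 1.0 / (1.0 + rho) ** (dt * np.arange(N))

def negative_welfare(controls):
    s, mu = controls[:N], controls[N:]   # saving rate and abatement fraction per period
    K, M, temp = 100.0, 850.0, 1.0       # toy capital stock, atmospheric CO2 (GtC), warming (K)
    total = 0.0
    for t in range(N):
        gross_output = 5.0 * K ** 0.3                    # toy production function
        damages = 0.0035 * temp ** 2                     # toy damage fraction of output
        abatement_cost = 0.05 * mu[t] ** 2.6             # toy cost fraction of output
        Y = gross_output * (1.0 - damages - abatement_cost)
        C = (1.0 - s[t]) * Y                             # consumption
        total += discount[t] * np.log(C)                 # log utility: sublinear in consumption
        K = 0.7 * K + s[t] * Y * dt                      # capital accumulation with depreciation
        M += (1.0 - mu[t]) * 10.0 * dt                   # unabated emissions add to atmospheric CO2
        temp = 3.0 * np.log(M / 590.0) / np.log(2.0)     # toy climate response (3 K per CO2 doubling)
    return -total                                        # minimise the negative => maximise welfare

x0 = np.full(2 * N, 0.3)
bounds = [(0.05, 0.95)] * N + [(0.0, 1.0)] * N           # bounds on saving rate and abatement
result = minimize(negative_welfare, x0, bounds=bounds)
saving_path, abatement_path = result.x[:N], result.x[N:]
print("abatement fraction per period:", np.round(abatement_path, 2))
```

In the real DICE model the economic and climate components are of course carefully calibrated, but the logical structure is the same: two control paths chosen to maximise the discounted sum of utilities.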

 

 

Figure 2: Results of the 2013 version of the DICE model. The blue lines indicate the optimal policy (i.e. the one maximising “happiness”), while the yellow lines indicate no climate policy (i.e. zero emission reduction). The first plot shows the emission reduction fraction or “abatement”, i.e. the fraction of carbon emissions that is prevented; a value of 1 means that no CO2 is emitted. The second plot shows the atmospheric CO2 concentration in ppmv: for the optimal policy, CO2 concentrations peak at 770 ppmv, whereas in the absence of a policy they rise beyond 2000 ppmv (the pre-industrial value is 280 ppmv). The third plot shows the global mean temperature change: for the optimal policy, it peaks at about 3.2 K, i.e. above the 2 K, let alone 1.5 K, limit agreed in the Paris Agreement.

 

Results and Criticism

The results in Fig. 2 show that under the “optimal” policy, i.e. the policy which maximises “happiness”, the Paris Agreement targets will not be met. This result suggests that the costs required to keep global warming below 1.5 or 2°C are too high compared to the benefit, namely a strong reduction in climate damages. However, some researchers argue that DICE severely underestimates the risks of climate change. For example, the damage function might be too low, and it does not explicitly take into account the risk of “low chance, high impact” events. Including such events leads to more stringent optimal climate action [6].

The rate of pure time preference has given rise to even fiercer discussions [7,8,9]. As explained above, a society’s discount rate can be estimated from market interest rates [3]. Knowing the economic growth rate, we can infer the rate of pure time preference implicit in market decisions. Many economists argue that the rate of pure time preference in models like DICE should be chosen consistently with these observations [8]. Nordhaus followed this approach. However, one can argue that even if individuals care less about the future than the present, such an approach is not necessarily ethically defensible in the context of climate change. Individuals are mortal and may choose to consume their own savings before they die. But climate change is a global and intergenerational problem, and it has been argued [7,9] that we should care for future generations as much as for ourselves; therefore the rate of pure time preference should be (nearly) zero. Note that this still allows for some discounting, on the grounds that if future generations are richer, they might be better able to deal with climate change.

Another reason for the relatively weak carbon emission reduction in DICE’s optimal policy may be that the model is too pessimistic about the future costs of emission reduction. For example, DICE does not include the learning-by-doing effect: the more we reduce emissions, the more efficient the technologies we discover, and the cheaper reduction becomes. In addition, the costs of green energy are partly one-time investments, e.g. restructuring the energy distribution grids, which are currently adapted to a few large, central energy providers, towards a more decentralised structure with smaller providers (e.g. households with solar panels). Once these (large) efforts have been made, the costs of green energy will decrease. But if DICE overestimates the costs of carbon emission reduction, it will be biased towards recommending low reductions.

Due to these and many more issues, some researchers argue that models like DICE are “close to useless”, and even harmful, as they pretend to give precise instructions to policy makers while in fact they struggle with huge uncertainties [10]. In my opinion, models like DICE should not be used for precise policy recommendations such as fixing the carbon tax, but they are still useful for somewhat qualitative scenario exploration. For example, it can be fruitful to add “low chance, high impact” events or the learning-by-doing effect and investigate the qualitative effect on the optimal abatement.

Many more economy–climate models have been developed in the last decades, some of which are much more sophisticated than DICE. Moreover, there are many models focussing only on specific aspects of the problem, for example the details of the energy sector. This is still a very active field of research. So, however limited DICE may be, it has laid the foundations for a highly relevant scientific and societal discussion. And even if one should take its precise output with a pinch of salt, it is a valuable tool to help policy makers qualitatively grasp the essence of climate economics.

This post has been edited by the editorial board.

REFERENCES

[1] Nicholas Stern: “The Economics of Climate Change” (Richard T. Ely Lecture), http://darp.lse.ac.uk/papersdb/Stern_(AER08).pdf

[2] A thorough description of the model is given by William Nordhaus and Joseph Boyer, “Warming the World: Economic Models of Global Warming” (https://eml.berkeley.edu//~saez/course131/Warm-World00.pdf). There are newer model versions available, but the underlying concepts remain the same.

[3] A thorough introduction to discounting is given in this book: Christian Gollier, “Pricing the Future: The economics of discounting and sustainable development” (http://idei.fr/sites/default/files/medias/doc/by/gollier/pricing_future.pdf), especially chapter 1.

[4] see e.g. Wong, P.P., I.J. Losada, J.-P. Gattuso, J. Hinkel, A. Khattabi, K.L. McInnes, Y. Saito, and A. Sallenger, 2014: Coastal systems and low-lying areas. In: Climate Change 2014: Impacts, Adaptation, and Vulnerability. Part A:
Global and Sectoral Aspects. Contribution of Working Group II to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change https://www.ipcc.ch/pdf/assessment-report/ar5/wg2/WGIIAR5-Chap5_FINAL.pdf

[5] e.g. Lenton et al. “Tipping elements in the Earth’s climate system”, http://www.pnas.org/content/pnas/105/6/1786.full.pdf

[6] Cai et al., “Risk of multiple interacting tipping points should encourage rapid CO2 emission reduction”, https://www.nature.com/articles/nclimate2964.pdf?origin=ppub
[7] The Stern Review on the Economics of Climate Change (http://webarchive.nationalarchives.gov.uk/20100407172811/http://www.hm-treasury.gov.uk/stern_review_report.htm)

[8] Peter Lilley: “What’s wrong with Stern?” (http://www.thegwpf.org/content/uploads/2012/10/Lilley-Stern_Rebuttal3.pdf)

[9] Frank Ackermann: “Debating Climate Economics: The Stern Review vs. Its Critics” (http://www.ase.tufts.edu/gdae/Pubs/rp/SternDebateReport.pdf)

[10] Robert Pindyck, “The Use and Misuse of Models for Climate Policy”, https://academic.oup.com/reep/article/11/1/100/3066301

 

What can artificial intelligence do for climate science?


What is machine learning?

Artificial Intelligence, and its subfield of machine learning, is a trending topic, as it plays an increasing role in our daily lives. Examples include translation programs, speech recognition software in mobile phones and the automatic completion of search queries. But what value do these new techniques have for climate science? And how complicated are they to use?

The idea behind machine learning is simple: a computer is not explicitly programmed to perform a particular task, but rather learns to perform it from input data. There are various ways to do this, and machine learning is usually separated into three domains: supervised learning, unsupervised learning and reinforcement learning. Reinforcement learning is of less interest to climate science and will therefore not be touched upon here.

In supervised learning, the computer is provided with both the data and some information about the data, i.e. the data is labeled. This means that each chunk of data (usually called a sample) has a label. This label can be a string (e.g. a piece of text), a number or, in principle, any other kind of data. The data samples could be, for example, images of animals, and the labels the names of the species. The machine learning program then learns to connect images with labels and, when successfully trained, can correctly label new images of animals that it has not seen yet. This basic idea is sketched in Figure 1.


Figure 1: A schematic of supervised machine learning in climate science.

In a climate context, the “image” might be a coarse global field of, for example, surface pressure, and the label some local phenomenon such as strong rainfall in a small region. This is sketched in Figure 2. Some contemporary machine learning methods can decide which features of an image are related to its label with very little or no prior information. This is comparable to certain types of human learning. Imagine being taught how to distinguish between different tree species by being shown several images of trees, each labeled with a tree name. After seeing enough pictures, you will be able to identify the tree species shown in images you have not seen before. How you managed to learn this may not be clear to you, but your brain manages to translate the visual input reaching your retina into information you can use to interpret and categorize subsequent visual inputs. This is exactly the idea of supervised machine learning: one presents the computer with some data and a description of the data, and then lets the computer figure out how to connect the two.

 

Figure 2: Example of using machine learning to predict local rainfall.

 

In unsupervised learning, on the other hand, the machine learning program is presented with some data without any additional information (such as labels) about the data itself. The idea is that the program searches autonomously for structure or connections in the data. These might be, for example, certain weather phenomena that usually occur together (e.g. very humid conditions at a place A and strong rainfall at a place B). Another example is “typical” patterns of the surface temperature of the ocean: these temperature patterns look slightly different every day, but with machine learning we can find a small number of “typical” configurations – which can then help in understanding the climate.
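To illustrate the idea, the sketch below clusters synthetic stand-ins for daily sea surface temperature maps into a few “typical” patterns using k-means from scikit-learn. The random data and the choice of four clusters are assumptions made purely for illustration.

```python
# Unsupervised-learning sketch: group daily sea-surface-temperature maps into a
# small number of "typical" patterns with k-means (synthetic data for illustration).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
sst_maps = rng.normal(size=(365, 200))        # 365 synthetic daily maps, 200 grid points each

kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(sst_maps)
typical_patterns = kmeans.cluster_centers_     # the 4 "typical" SST configurations
print("pattern assigned to the first 10 days:", kmeans.labels_[:10])
```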

How difficult is it to implement machine learning techniques?

Machine learning techniques often sound complicated and forbidding. However, due to the widespread use of many machine learning techniques both in research and in commercial applications, there are many publicly available, ready-to-use implementations. A good example is the popular Python library scikit-learn [1]. With this library, classification or regression models based on a wide range of techniques can be constructed with a few lines of code. It is not necessary to know exactly how the algorithm works. If one has a basic understanding of how to apply and evaluate machine learning models, the methods themselves can to a large extent be treated as black boxes: one simply uses them as tools to address a specific problem, and checks whether they work.
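As an example of how few lines are needed, the sketch below trains a scikit-learn classifier on synthetic data standing in for the supervised problem of Figure 2: coarse “pressure fields” as input and a binary “heavy rain in the target region” label as output. The data generation is purely a stand-in for real reanalysis fields and rain-gauge labels.

```python
# Supervised-learning sketch with scikit-learn (synthetic data for illustration):
# each row of X stands in for a flattened large-scale field (e.g. surface pressure),
# and y for a binary label such as "heavy rain observed in a target region".
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 50))                    # 1000 samples, 50 "grid points"
y = (X[:, :5].mean(axis=1) > 0).astype(int)        # label depends on a few grid points only

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print("accuracy on unseen samples:", accuracy_score(y_test, model.predict(X_test)))
```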

What can machine learning do for climate science?

By now you are hopefully convinced that machine learning methods are 1) widely used and 2) quite easy to apply in practice. This still leaves the most important question open: can we actually use them in climate science? And even more importantly, can they help us actually understand the climate system? In most climate science applications, machine learning tools can be seen as engineering tools. Take, for example, statistical downscaling of precipitation: machine learning algorithms are trained on rainfall data from reanalyses and in-situ observations, and thus learn how to connect large-scale fields with local precipitation. This “knowledge” can then be applied to low-resolution climate simulations, allowing one to obtain an estimate of local precipitation that is not available in the original data. A similar engineering approach is “short-cutting” expensive computations in climate models, for example in the radiation schemes. If trained on a set of calculations performed with a complex radiation scheme, a machine learning algorithm can provide approximate solutions for new climatic conditions, avoiding the need to re-run the expensive scheme at every time step of the model simulation and thus saving a great deal of computing time.
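Both engineering uses boil down to learning a fast mapping from inputs to outputs, as the following sketch illustrates with a scikit-learn regressor on synthetic data. In practice the inputs would be reanalysis or model fields (or the inputs of the radiation scheme) and the target would be the local precipitation (or the scheme’s output); everything below is an illustrative stand-in.

```python
# Regression sketch for downscaling / emulation (synthetic data for illustration):
# learn a fast mapping from coarse large-scale fields to a local or expensive quantity.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
large_scale = rng.normal(size=(2000, 30))                                   # coarse input fields
target = np.maximum(0.0, large_scale[:, 0] + 0.1 * rng.normal(size=2000))   # e.g. local rainfall

emulator = GradientBoostingRegressor(random_state=0).fit(large_scale[:1500], target[:1500])
print("predicted values for new inputs:", emulator.predict(large_scale[1500:1505]).round(2))
```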

However, next to this engineering approach, there are also ways to use machine learning methods to actually gain new understanding. For example, by systematically changing the extent of the input data, one can try to find out which parts of the data are relevant for a specific task, e.g. “which parts of the atmosphere provide the information necessary to predict precipitation or wind speed above a specific city and at a specific height?”
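One simple and widely used way of doing this is permutation importance: shuffle one input at a time and measure how much predictive skill is lost. The sketch below applies scikit-learn’s implementation to synthetic data in which, by construction, only the first five inputs carry information; it is an illustration of the idea, not a recipe for any particular study.

```python
# Permutation-importance sketch (synthetic data for illustration): rank which
# inputs a trained model actually relies on by shuffling them one at a time.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 50))
y = (X[:, :5].mean(axis=1) > 0).astype(int)        # only inputs 0-4 are informative

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
print("most important inputs:", np.argsort(result.importances_mean)[::-1][:5])
```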

As a final point, artificial intelligence and machine learning techniques are widely used in research and industry, and they will continue to evolve independently of their use in climate research. They therefore offer the opportunity to obtain new techniques “for free”: an opportunity that can and should be used.

[1] scikit-learn.org

 

This article has been edited by Gabriele Messori and Célia Sapart.

How glowing sediment can help to decipher the Earth’s past climate!


The last 2.5 million years of the Earth’s history (termed the Quaternary) are characterised by climatic cycles oscillating between warm (interglacial) and cold (glacial) periods. To fully understand and interpret past climate variations, the development of accurate and precise chronological techniques is crucial. Optically stimulated luminescence (OSL) dating is a powerful geochronological tool that can be used across a wide time range, from the modern day to a few hundred thousand years ago, and it has been used to date sediments in nearly all parts of the world. The event being dated is the last time the sediment was exposed to daylight, which means that the luminescence age is directly related to the time of sediment deposition.

How does OSL work?

OSL dating is based on the ability of minerals to store energy (Preusser et al., 2008). Luckily, quartz and feldspar, the two most common minerals in the Earth’s crust, have this ability. They work like small batteries, which get charged when the sediment is buried (Fig. 1C; Duller, 2008). This is due to radiation from naturally occurring radioactive material (uranium, thorium and potassium) in the surrounding sediment, and from cosmic rays for samples closer to the surface. Like a battery, the quartz and feldspar grains have a finite capacity for storing energy; once completely charged, the battery-like grain is considered saturated. The upper age limit of OSL dating depends on the ability of the grain to store energy and on the rate at which the grain is charged (i.e. the dose rate, derived from uranium, thorium, potassium and cosmic radiation). If the surrounding material is very radioactive, the dose rate is very high, which means that the grain saturates rather quickly (Fig. 1C); but if the dose rate is low, the battery-like grain charges more slowly and OSL can be used to date geomorphological processes further back in time. Exposure to natural sunlight removes charge from the grain. This bleaching or resetting occurs, for example, during sediment transport and deposition (Fig. 1A, D). Subsequently, the sediment can be buried again and the grain will get charged (Fig. 1D) until it is saturated, or until the sediment is transported and re-deposited, or is sampled (Fig. 1F). Once sampled in opaque tubes (Fig. 2A), so that no daylight can affect the amount of energy stored in the grains, the sample is ready to be processed in the laboratory.

Figure 1: Accumulation and release of charge within mineral grains (modified from Duller, 2008). (A) During transport of sediment the accumulated charge gets released due to energy provided by natural sunlight; (B) At deposition, the charge is fully released and the “battery” is emptied; (C) Charge can accumulate within the grain during burial due to natural ionising radiation; (D) and (E) The process described before can happen multiple times over geological time scales, depending on the environment and the geological processes; (F) When a sample is taken in opaque tubes or using cores, the energy stored at that particular time gets sampled and can later be released in the laboratory to obtain a luminescence age.

What happens in the laboratory?

Sample preparation involves multiple steps to isolate the right mineral and grain size for dating. A key step is the decision on which mineral will be used: whilst quartz is reset more quickly when exposed to sunlight, feldspar has the potential to date events further back in time.

The energy stored in the mineral grains can be released not only by natural sunlight, but also in the laboratory, using light of a defined wavelength under controlled conditions (Fig. 2B). During this process the grain emits light, which is collected; this emitted light gives information on the amount of stored energy (Duller, 2008; Preusser et al., 2008). By comparing this light output, produced by the natural radiation dose the grain has received, with the light output generated by a known laboratory dose, the so-called equivalent dose is obtained. This equivalent dose (in Gy), divided by the natural dose rate (in Gy per 1,000 years), gives the OSL age in thousands of years (Duller, 2008; Preusser et al., 2008). OSL dating can be done using different grain sizes of sediment, either mounted as patches of grains on aluminium or steel discs (Fig. 2E) or as single grains brushed into very small holes on a disc (Fig. 2C, D). As an alternative to grains, slices of rock can be used (e.g. Sohbati et al., 2011; Jenkins et al., 2018). A representative number of sub-samples is important; these are analysed statistically to obtain a valid age (Galbraith et al., 1999; Galbraith and Roberts, 2012).
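As a small worked example of this age equation (the numbers below are made up purely for illustration):

```python
# OSL age = equivalent dose / dose rate (hypothetical values for illustration).
equivalent_dose_gy = 45.0      # equivalent dose in Gy, determined in the laboratory
dose_rate_gy_per_ka = 3.0      # environmental dose rate in Gy per 1,000 years
age_ka = equivalent_dose_gy / dose_rate_gy_per_ka
print(f"OSL age: {age_ka:.1f} thousand years")   # 15.0 ka, i.e. deposition ~15,000 years ago
```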

Figure 2: (A) Sampling of sediment in an opaque tube for OSL dating; (B) Luminescence instruments (here Risø readers) used to date the samples. The picture also shows the photographic red-light conditions under which samples are prepared and measured, to avoid resetting of the battery-like grains; (C) Sample carousel with single-grain discs; (D) Close-up of a single-grain disc containing grains in 100 holes (300 µm in diameter); (E) Steel discs containing fine silt-sized material (4-11 µm). Photo credits: (A) H. M. Roberts, (B)-(D) S. Riedesel, (E) A. M. Zander.

Where can and has OSL dating been applied? – Some examples from past climate research

Luminescence dating can be a valuable geochronological tool in very different climatic regions of the Earth: terrestrial systems, with loess and lake records for example; the glacial land system, with a focus on ice-marginal archives; and the deep marine realm, with its long sedimentary records.

Terrestrial archives – loess and lakes

Loess is silt-sized (4-63 μm) sediment transported by wind (aeolian transport), which has been exposed to sufficient daylight to fully reset the stored luminescence signal. This makes it favourable for luminescence dating. The Chinese Loess Plateau (CLP) is one of the Earth’s most important terrestrial climate archives. Changes in the accumulation of loess and/or the occurrence of soil horizons within the loess sequences give information on changes in past climate (e.g. temperature, wind direction and intensity, precipitation). For a long time the CLP was considered a continuous past climate archive (Liu and Ding, 1998). High-resolution OSL dating at different sites across the CLP has given new insights, showing that the loess record is neither homogeneous nor continuous (Stevens et al., 2007; Stevens et al., 2018). Unconformities could be detected and related to erosional processes, disturbances or diagenetic modifications (Roberts et al., 2001; Stevens et al., 2007; Buylaert et al., 2008). The application of OSL dating to loess has also helped to gain knowledge of, for example, variations in past wind directions (e.g. East Asian Monsoon behaviour; Stevens et al., 2006; Kang et al., 2018).

Lake sediments also provide long records of past climate changes. Lamb et al. (2018) established a chronology based on a combination of OSL and radiocarbon dates for the past 150,000 years of Lake Tana in Ethiopia. This chronology helped to infer time spans of favourable climatic conditions for early human migrations. Another example is the late Quaternary chronology of the Xingkai Lake in northeast Asia by Long et al. (2015). It spans the past 130,000 years and shows how important independent age control is when performing geochronological research. The combination of OSL and radiocarbon ages highlights the potential of OSL to date events beyond the age range of radiocarbon (ca. 45,000 years, Walker, 2005).

Remnants of former lakes can also be used as archives: the beach ridges marking former shorelines of palaeo-lake systems in the present-day Kalahari Desert, for example, bear witness to a wetter climate in the past (Burrough et al., 2009). OSL has been the key tool to establish a 280,000-year chronology of these palaeo-lake high-stands, giving insights into late Quaternary alternations between arid and humid phases in southern Africa (Burrough et al., 2009).

The deep sea – Potential and challenges of these long records using OSL

In marine sediments, OSL dating can be used in addition to radiocarbon dating, to date beyond the range of the latter (Stokes et al., 2003; Olley et al., 2004; Armitage and Pinder, 2017), or where insufficient biogenic carbonate is available. As radiocarbon dating of material in marine sediments can suffer from a reservoir age induced by old carbon in marine water, OSL dating can be a useful alternative (Olley et al., 2004). However, OSL dating of marine sediments can be challenging, since transport and deposition of the sediment under water complicate the bleaching of the sediment grains (Olley et al., 2004; Armitage and Pinder, 2017). Nevertheless, successful OSL research has been conducted in deep-sea environments. For example, Sugisaki et al. (2010) were able to establish an OSL chronology of sediments from the Sea of Okhotsk, including the last glacial–interglacial transition, covering a time span from 140,000 to 15,000 years ago.

Glacial land forms – Ice marginal features: What can they tell us about glacial advance and retreat?

OSL dating can be used in the cryosphere, e.g. to date relics of past glaciations. Smedley et al. (2016) used single grains of feldspar to date glacial advances during the last glacial maximum (LGM) in Patagonia. While their results for glacial advances at the onset of and during the LGM correlate with other studies in South America, the final glacial advance in their study area at ~15,000 years is later than elsewhere, which may hint at local topographic and regional climatic factors (precipitation) controlling glacial responses, or at preservation issues elsewhere.

The recent development of dating cobbles in glacial contexts not only enables the age determination of glacial features, such as glaciofluvial sediments, but also gives further insights into the transport history of the cobbles prior to their burial (Freiesleben et al., 2015; Jenkins et al., 2018). Cobbles can record the deposition event in a similar way to the one described above for much finer-grained sediment. The great advantage of cobbles is their potential to record multiple exposure events (Freiesleben et al., 2015). In the case of a cobble from the Isle of Man, it potentially records the advance and retreat of the Irish Sea Ice Stream (Jenkins et al., 2018).

The examples presented here show the wide applicability of OSL dating to directly date the last exposure of sediment to daylight in various environmental settings. OSL techniques are able to date geological events beyond the age range of radiocarbon and recent developments improve the reliability of OSL dating in geomorphological settings, where resetting of the stored charge might otherwise be challenging.

This post has been edited by the editorial board.

References

Armitage, S. J., Pinder, R. C., 2017. Testing the applicability of optically stimulated luminescence dating to Ocean Drilling Program cores. Quaternary Geochronology 39, 124-130.

Burrough, S. L., Thomas, D. S. G., Bailey, R. M., 2009. Mega-Lake in the Kalahari: A Late Pleistocene record of the Palaeolake Makgadikgadi system. Quaternary Science Reviews 28, 1392-1411.

Buylaert, J.-P., Murray, A. S., Vandenberghe, D., Vriend, M., De Corte, F., Van den haute, P., 2008. Optical dating of Chinese loess using sand-sized quartz: Establishing a time frame for Late Pleistocene climate changes in the western part of the Chinese Loess Plateau. Quaternary Geochronology 3, 99-113.

Duller, G. A. T., 2008. Luminescence dating – Guidelines on using luminescence dating in archaeology. English Heritage, 44p.

Freiesleben, T., Sohbati, R., Murray, A. S., Jain, M., al Khasawneh, S., Hvidt Jakobsen, B., 2015. Mathematical model quantifies multiple daylight exposure and burial events for rock surfaces using luminescence dating. Radiation Measurements 81, 16-22.

Galbraith, R. F., Roberts, R. G., 2012. Statistical aspects of equivalent dose and error calculation and display in OSL dating: An overview and some recommendations. Quaternary Geochronology 11, 1-27.

Galbraith, R. F., Roberts, R. G., Laslett, G. M., Yoshida, H., Olley, J. M., 1999. Optical dating of single and multiple grains of quartz from Jinmium Rock Shelter, Northern Australia: Part 1, experimental design and statistical models. Archaeometry 41 (2), 339-364.

Jenkins, G. T. H., Duller, G. A. T., Roberts, H. M., Chiverrell, R. C., Glasser, N. F., 2018. A new approach for luminescence dating glaciofluvial deposits – High precision optical dating of cobbles. Quaternary Science Reviews 192, 263-273.

Kang, S., Wang, X., Roberts, H. M., Duller, G. A. T., Cheng, P., Lu, Y., An, Z., 2018. Late Holocene anti-phase change in the East Asian summer and winter monsoons. Quaternary Science Reviews 188, 28-36.

Lamb, H. F., Bates, R., Bryant, C. L., Davies, S. J., Huws, D. G., Marshall, M. H., Roberts, H. M., 2018. 150,000-year palaeoclimate record from northern Ethiopia supports early, multiple dispersals of modern humans from Africa. Scientific Reports 8:1077.

Liu, T., Ding, Z., 1998. Chinese loess and the paleomonsoon. Annual Review of Earth and Planetary Sciences 26, 111-145.

Long, H., Shen, J., Wang, Y., Gao, L., Frechen, M., 2015. High-resolution OSL dating of a late Quaternary sequence from Xingkai Lake (NE Asia): Chronological challenge of the “MIS 3a Mega-paleolake” hypothesis in China. Earth and Planetary Science Letters 428, 281-292.

Olley, J. M., De Deckker, P., Roberts, R. G., Fifield, L K., Yoshida, H., Hancock, G., 2004. Optical dating of deep-sea sediments using single grains of quartz: a comparison with radiocarbon. Sedimentary Geology 169, 175-189.

Preusser, F., Degering, D., Fuchs, M., Hilgers, A., Kadereit, A., Klasen, N., Krbetschek, M., Richter, D., Spencer, J. Q. G., 2008. Luminescence dating: basics, methods and applications. Eiszeitalter und Gegenwart – Quaternary Science Journal 57 (1-2), 95-149.

Roberts, H. M., Wintle, A., Maher, B. A., Hu, M., 2001. Holocene sediment-accumulation rates in the western Loess Plateau, China, and a 2500-year record of agricultural activity, revealed by OSL dating. The Holocene 11 (4), 477-483.

Smedley, R. K., Glasser, N. F., Duller, G. A. T., 2016. Luminescence dating of glacial advances at Lago Buenos Aires (~46° S), Patagonia. Quaternary Science Reviews 134, 59-73.

Sohbati, R., Murray, A. S., Jain, M., Buylaert, J.-P., Thomsen, K. J., 2011. Investigating the resetting of OSL signals in rock surfaces. Geochronometria 38 (3), 249-258.

Stevens, T., Buylaert, J.-P., Thiel, C., Újvári, G., Yi, S., Murray, A. S., Frechen, M., Lu, H., 2018. Ice-volume-forced erosion of the Chinese Loess Plateau global Quaternary stratotype site. Nature Communications 9:983.

Stevens, T., Thomas, D. S. G., Armitage, S. J., Lunn, H. R., Lu, H., 2007. Reinterpreting climate proxy records from late Quaternary Chinese loess: A detailed OSL investigation. Earth-Science Reviews 80, 111-136.

Stevens, T., Armitage, S., Lu, H., Thomas, D. S. G., 2006. Sedimentation and diagenesis of Chinese loess: Implications for the preservation of continuous, high-resolution climate records.

Stokes, S., Ingram, S., Aitken, M. J., Sirocko, F., Anderson, R., Leuschner, D., 2003. Alternative chronologies for Late Quaternary (Last Interglacial-Holocene) deep sea sediments via optical dating of silt-sized quartz. Quaternary Science Reviews 22, 925-941.

Sugisaki, S., Buylaert, J.-P., Murray, A. S., Tsukamoto, S., Nogi, Y., Miura, H., Sakai, S., Iijima, K., Sakamoto, T., 2010. High resolution OSL dating back to MIS 5e in the central Sea of Okhotsk. Quaternary Geochronology 5, 293-298.

Walker, M., 2005. Quaternary Dating Methods. John Wiley and Sons, Chichester, United Kingdom, 307 p.

What can the Cretaceous tell us about our climate?


The Cretaceous

The Cretaceous period features a particularly interesting climatic episode in the Earth’s geological history. It follows the Jurassic Period, better known as the time when dinosaurs roamed the Earth, and spans the interval between 145.5 and 65.5 million years ago. The Cretaceous is the last period of the Mesozoic Era and ends with a well-known mass extinction event: at the end of the Cretaceous, an asteroid hit the Earth on the Yucatan Peninsula, Mexico, forming what is today called the Chicxulub impact crater (2). It has been estimated that around half of the world’s species became extinct around this time, although no accurate species count exists for each group of organisms. Figure 1 is an artistic representation of how we imagine the Cretaceous landscape may have looked.

The supercontinent Pangea, which had already started to rift apart in the preceding period, continued to diverge. By the mid-Cretaceous, Pangea had split into several smaller continents, separated by ocean basins such as the Pacific Ocean, the proto-Atlantic and the Tethys Sea (the continental configuration is illustrated in Figure 2). The spreading of the continents also generated extensive new coastlines, and thus more near-shore habitats. Moreover, seasons became more pronounced as the global climate cooled. By the end of the Cretaceous, primordial woods had evolved to become more similar to those found on Earth today (2).

Figure 2: Much of today’s dry land – most of Europe, the Midwest of the USA and Northern Africa – was under water, due to the high sea levels of the Cretaceous. The proto-Atlantic Ocean grew much wider as North and South America rifted apart from Africa and Europe. The Indian continent was still an island drifting northward to meet the Asian continent. (Source: http://www.scotese.com/cretaceo.htm, visited 14.7.2018)

A stable and warm climate

Another intriguing aspect of the Cretaceous period is its warm and stable climate, with tropical and polar temperatures higher than today, a weaker temperature gradient from the Equator to the Poles as well as from land to ocean, and fewer seasonal extremes. Rainfall and atmospheric greenhouse gas concentrations (e.g. CO2, CH4) were higher in the Cretaceous than today, which partly explains the relatively warmer climate at the global scale. High temperatures extending into the polar regions prevented the accumulation of ice sheets and reduced the temperature gradient between the Equator and the Poles. This in turn disrupted the mid- and high-latitude wind systems, which affected the global temperature distribution and the wind-driven ocean circulation (2,5).

The lack of persistent winds at mid-latitudes prevented ocean currents from forming and transporting heat from the Equator to higher latitudes. In today’s ocean, the Gulf Stream is an example of such a current, able to transport heat from the Caribbean all the way to Europe, making winter months in Europe warmer. During the Cretaceous, this heat was instead transported by small- to large-scale eddies capable of trapping heat and carrying it to higher latitudes. The “thermohaline circulation” that dominates the present-day global ocean circulation would not develop until the continental configuration became similar to today’s.

The Cretaceous Ocean and its chemical state

The chemical state of the Cretaceous ocean was also very different. Today, the oceans are dominated by oxic (oxygenated) water masses, with the exception of some marginal basins like the Black Sea, fjords, upwelling areas and so-called coastal “anoxic dead zones”. During the Jurassic and early Cretaceous, however, episodes of local anoxia occurred, and this state developed into a global-scale phenomenon during the mid-Cretaceous. These Oceanic Anoxic Events (OAEs) represented severe disturbances to the global carbon, oxygen and nutrient cycles of the ocean. One well-known event is OAE2, which happened during the mid-Cretaceous; it is shown in Figure 3 together with several other OAEs recorded in a sediment core drilled in Northern Germany. OAE2 was characterised by extreme atmospheric CO2 concentrations, widespread water-column anoxia and free hydrogen sulfide in the surface ocean (3). Evidence for these global perturbations is recorded in ocean sediments, which thus provide an important part of the Earth’s climate archive. Sediments from this period attributed to OAE2 are known globally for their thick layers of organic-rich black shales. The enhanced accumulation of organic carbon during these extreme events not only had a significant impact on Cretaceous ocean chemistry and climate, but also played an important role in the formation, over millions of years, of the oil and gas that are now extensively exploited as fossil energy sources (4).

The Cretaceous Black Shale

The resulting elevated levels of carbon burial account for the Cretaceous black shale formations in the ocean basins and have often been related to increased preservation of organic matter under anoxic conditions, to increased productivity in the surface ocean, or to a combination of both. However, the exact nature and background of the palaeo-environment that fostered this massive and almost global deposition of organic carbon-rich sediments is still a matter of debate. Two mechanisms are commonly proposed to explain the prolonged oceanic oxygen depletion during OAEs: first, a decreased oxygen supply to the deep ocean due to weaker ocean circulation, and second, an increased oxygen demand in the water column resulting from enhanced primary productivity (1,4).

According to the latter mechanism, the high primary production was caused by increased nutrient availability in the ocean in combination with high temperatures. This has been shown by following the evolution of the nutrient concentration in sediments from the onset of an OAE to its end: nutrients (phosphorus in this case) accumulate in the sediments prior to an OAE, but their concentrations decrease during the anoxic event, implying that the nutrients were consumed by primary producers (4). As primary producers die and sink as organic matter through the oxygen-poor water column, part of the organic matter is degraded, while the rest reaches the ocean floor, where it is buried layer upon layer, creating thick sediment bands with high concentrations of organic matter. These layers show up as dark laminated layers in sediment cores (Figure 3).

Figure 3: The history of the Earth can be read from this sediment core. Several transitions between oxic and anoxic states are visible in this core section (metres 44 to 38 of the Wunstorf core), with the black shales represented by the dark laminated layers. The core was drilled on land in Northern Germany, a region that was under water during the mid-Cretaceous.
(Source: https://www.researchgate.net/publication/49595125/download, visited 14.7.2018)

These black shale strata offer a unique opportunity to study the evolution of biological, chemical and physical processes in sediments. They allow us to investigate the relation between sediment accumulation and the degradation of organic matter over geological time. This can provide key information on the evolution of carbon sources and sinks and their possible climate feedbacks on different time scales.

This post has been edited by the editorial board.

References
  1. Arndt, Sandra, Hans-Jürgen Brumsack, and Kai W. Wirtz. "Cretaceous black shales as active bioreactors: a biogeochemical model for the deep biosphere encountered during ODP Leg 207 (Demerara Rise)." Geochimica et Cosmochimica Acta 70.2 (2006): 408-425.
  2. Harff, Jan, et al. Encyclopedia of Marine Geosciences. Springer, 2016.
  3. Hay, William W. "Evolving ideas about the Cretaceous climate and ocean circulation." Cretaceous Research 29.5-6 (2008): 725-753.
  4. Hülse, Dominik, et al. "Understanding the causes and consequences of past marine carbon cycling variability through models." Earth-science reviews 171 (2017): 349-382.
  5. Weissert, Helmut. "Mesozoic pelagic sediments: archives for ocean and climate history during green-house conditions." Developments in sedimentology. Vol. 63. Elsevier, (2011): 765-792.