Habits in numerical model construction

 

Numerical models are omnipresent in climate research. Constructed to understand the past, to forecast future climate and to gain new knowledge on natural processes and interactions, they enable the simulation of experiments at otherwise unreachable temporal and spatial scales. These instruments have long been considered to be fed – or even determined – by theories or observations alone. But are they really? Sociological factors are at play too. It is precisely these influences that the present blog entry attempts to present, through a review of cornerstone sociological studies and new anthropological insights into the decision-making of model builders.

 

What we here define as numerical models are computer simulations of processes occurring within a system. Global Circulation Models (GCMs), the climate models used to simulate the Earth’s climate and its response to changes in greenhouse gas concentrations in the atmosphere, are certainly the most widely known representatives of this category of scientific instruments. However, numerical models of all types abound in research fields associated with the study of climate change. Some must be run at national supercomputing centres, while others run on a laptop. Some might be the product of entire research teams, while others result from individual endeavour. Some simulate climate processes at a global scale, while others focus on gas bubbles in a cubic metre of peat. The term “numerical model” employed hereafter embraces this diversity.

Models have long been considered by philosophers to be logically deduced from theory and observations alone. In the late 1990s, Morgan and Morrison (1999) initiated a new turn by insisting on the partial autonomy of models from theory and observations, arising from the diversity of their ingredients. It is this partial autonomy, the authors claim, which enables us to gain knowledge through models about theories and the world. Simultaneously, several studies in the field of Science and Technology Studies (STS) addressed the actual activity of modelling through immersion within climate research institutes, thereby unveiling complex interactions between actors, institutions and stakeholders.

 

Why does it matter?

Model outputs are widely used as a basis for decision-making, from the local to the international level. In a context of political and public distrust, much attention has been devoted to increasing the credibility of numerical models. Model intercomparison projects, uncertainty assessments and “best practice” guidelines have flourished. Yet improving modelling activity also requires understanding and analysing current practices. Empirical studies addressing actual practices are a prerequisite for improving reflexivity – and thereby control – over implicit mechanisms likely to affect model outputs. The question should therefore be how models are actually constructed, before addressing how they should be.

 

What we already know…

Current practices can be analysed at different levels, from the individual to the institutional. Most empirical sociological studies have so far investigated the interface between decision-making spheres and modelling, as well as institutional and disciplinary cultures of modelling. Shackley (2001) notably identified different “epistemic lifestyles” within global circulation modelling, which appeared to be influenced, among other factors, by the role, location, objectives and funding of the organisation within which the modelling project was conducted. With fellow researchers, he further highlighted the influence of modellers’ own perceptions of the policy process on their modelling practices (Shackley et al., 1999). Similar assertions have been made in neighbouring fields (such as hydrology, land use change and integrated assessment modelling), but rarely based on empirical material.

 

… and what we know less about

The majority of existing studies considered the epistemic stance of modellers – their perception of the modelling activity, of its objectives and of their own roles, as well as their positioning with respect to particular issues encountered in climate modelling. However, how modellers actually make choices during the construction of their model – and which factors influence these choices – has barely been examined. This is exactly what we aimed to scrutinize in our study presented at the EGU General Assembly 2018.

 

Decisions in model construction

Studying decision-making within the process of model construction implies assuming that choices have to be made. As straightforward as this claim might seem, the very existence of choices was largely absent from philosophical reflection on modelling during the 20th century and has yet to be granted appropriate attention. Modellers, however, make a plethora of choices during model building, sometimes in an iterative manner. The temporal and spatial scales need to be selected, and along with them the natural processes at play, their interactions, their representation through physical equations or parameterizations, their numerical implementation, the sources of data, the hardware and software in use, etc. (Babel et al., 2019).

The following video summarizes the rationale behind our research and the approach we used.

Choosing how to represent a natural process

We decided to focus on one particular type of choice: the representation of natural processes through equations (transfer functions) and their numerical implementation. Even when modellers have chosen to simulate a particular natural process (evapotranspiration, for example), several representations of one and the same process can generally be contemplated (Guillemot, 2010). We then asked ourselves on what basis modellers choose one representation and not another. We expected mostly technical aspects to come to the fore, such as the required data or software and hardware limitations, which take a prominent place in the specialized literature.

 

Interviewing modellers

We adopted a well-established methodology from the social sciences based on semi-structured interviews, conducted with researchers who had developed a model from its earliest stage onward. The interviewees were not aware of the exact subject of the interview. Prior to the interviews, we identified, in the literature accompanying the presentation of each model, one or several processes for which no justification was given for the choice of the employed representation. After introductory, general questions on the modelling project, the researchers were invited to explain the use of this particular representation.

All interviews were recorded and transcribed. The interviewees came from five universities or research institutes located in four different countries in Europe and North America. All but one were senior scientists. We sought diversity among the types of models (from highly complex ones to models openly described as simplistic) and scientific disciplines, ranging from ecology to geochemistry. With the exception of astrophysics, which we included in order to test a hypothesis not detailed in the present blog entry, all models were devoted to research questions associated with climate change. A total of 14 interviews were conducted.

 

The role of actors – or what we did not expect

As stated above, we expected the modellers to justify the choice of representations mostly with technical constraints. They did not. Rather, their narratives granted particular emphasis to actors – colleagues, professors, PhD directors – who belonged to the modeller’s network during the construction of his or her model. Many of the interviewees had started building their model (which they continue developing today) as doctoral students. The use of a certain process representation was often explained as having been passed on by the PhD director or colleagues. Two decades later, the representation was still part of the model – and the modelling practices of these actors played a paramount role in the modellers’ justification of its use, even in competitive and controversy-laden contexts (Babel et al., 2019).

 

From transfer to habit

Many of the interviewees were surprised to be asked about an equation or a numerical scheme they did not perceive as a distinctive, novel feature of their model. Even though alternatives to the process representation existed in all the analysed cases, they did not necessarily consider that they had made a choice. The choice had often been made by others at the very beginning of their career and transferred to them by their PhD directors or colleagues. They incorporated it into their own practices, a process one of the interviewees described as a “natural evolution”.

“(…) during my PhD, my PhD director was only working with [this process representation]. And so I was educated with it. And so I couldn’t imagine doing something else (…) And so after my thesis, I naturally evolved with this approach because it was what I knew. It was a natural evolution, it is as… yes, when we can speak a language, we evolve with this language. So here it is a bit the same, I knew how to speak [this process representation] and so I naturally kept on evolving with this approach. But it is true that… yes, it is the main reason, I believe.” (interview quoted in Babel et al., 2019)

The natural evolution this interviewee referred to can be equated with path dependence. The modeller developed skills and expertise through the repeated use of the representation, which rendered its implementation increasingly self-evident over the course of his career. We employed the sociological concept of habit, notably analysed by Latour (2013), to describe this progressive incorporation of choices that become self-evident practices.

Figure 1. Illustration of the transfer of natural process representations and their incorporation within modelling practices.

 

Habits are required – but self-reflexivity over them too

While the term often has a negative overtone in everyday language, habits can be considered deeply necessary. As stated by Latour (2013), they smooth out the course of action: a modeller who constantly reconsidered, on a daily basis, the use of a programming language, a database or a certain variable would lose herself in perpetual decision-making requiring both attention and time. By repeating actions without engaging on new paths – by evolving with the same language, as the interviewee quoted above put it – we gain in efficiency and expertise. Yet a danger is looming: that of falling into automatisms, losing sight of – and control over – the initial crossroads, and hence the ability to reverse, whenever necessary, our paths of action. Questioning modelling habits and tracking them back to their roots – on both an individual and a collective basis – appears an unavoidable step towards a better understanding of existing modelling practices.

 

Collectives may reinforce path dependence

The modellers interviewed during our study displayed a striking awareness that their process representation was often particular to a certain collective (a “school” or “field”) with which they nowadays identified. By passing a process representation on to them, their director or colleagues had also anchored them within a network: that of the scientists using the same representation. This anchoring, often unconscious at the beginning of their career, could reinforce the path dependence. Changing process representation would not only require considerable effort and time to reach the level of efficiency and expertise gained over the years, but also imply turning away from a collective within which the modellers had established themselves (Babel et al., 2019).

 

A word of caution

Our study does not describe modellers as being determined by habits. Rather, we aimed at shedding light on inter-individual and collective influences within the modelling process that are often disregarded in field-specific literature. We assume habits to play a role among other factors. The fact that these other (computational, cost-related) factors were rarely mentioned by the modellers during the interviews could be explained by their being perceived as evident or self-explanatory; additional studies would be required to explore how other drivers of model decisions intertwine with inter-individual and collective influences.

Finally, this study was based on a limited number of interviews: we did not seek exhaustiveness or generalization, but case studies enabling a first glance at rarely studied processes.

This post has been edited by Janina Bösken and Carole Nehme.

REFERENCES
Babel, L., Vinck, D., Karssenberg, D. (2019). Decision-making in model construction: unveiling habits. Environmental Modelling and Software, 120, in press. DOI: https://doi.org/10.1016/j.envsoft.2019.07.015

Guillemot, H. (2010). Connections between simulations and observation in climate computer modeling. Scientist’s practices and “bottom-up epistemology” lessons. Stud. Hist. Philos. Sci. Part B Stud. Hist. Philos. Mod. Phys., Special Issue: Modelling and Simulation in the Atmospheric and Climate Sciences 41, 242–252.

Latour, B. (2013). An Inquiry into Modes of Existence. An Anthropology of the Moderns. Harvard University Press, Cambridge, Massachusetts.

Morgan, M.S., Morrison, M. (1999). Models as Mediators: Perspectives on Natural and Social Science. Cambridge University Press, Cambridge.

Shackley, S. (2001). Epistemic Lifestyles in Climate Change Modelling, in: Edwards, P.N. (Ed.), Changing the Atmosphere: Expert Knowledge and Environmental Governance. MIT Press, Cambridge, Massachusetts.

Shackley, S., Risbey, J., Stone, P., Wynne, B. (1999). Adjusting to Policy Expectations in Climate Change Modeling. Clim. Change 43, 413–454.

 




God does not play DICE – but Bill Nordhaus does! What can models tell us about the economics of climate change?

Climate change has been described as “the biggest market failure in human history” [1]. Although fuel is costly, emitting the by-product CO2 is free, yet it causes damage to society. In other words, those who benefit from using the atmosphere as a waste dump do not pay the full costs, i.e. the adverse effects climate change has on societies at a global scale. Can this market failure be cured? Should humankind sacrifice some of its present welfare to prevent future climate damages? William Nordhaus was jointly awarded the Nobel Prize in Economic Sciences for providing a framework to answer these questions.

DICE – the Dynamic Integrated model of Climate and the Economy [2] – combines a simple economic model with a simple climate model. The aim is not to cover all details of economic and climate processes, but rather to provide a model that is sufficiently simple to be used by non-specialists, including policy makers. Figure 1 shows a simplified structure of the DICE model.

Figure 1: Schematic illustration of the DICE model. The dark blue arrows correspond to the purely economic component of the model. The yellow and green arrows indicate how the economy impacts climate and vice versa. The light blue arrows illustrate the effect of climate policy.

The economy of the DICE model

The heart of DICE is an economic growth model (dark blue arrows in Fig. 1). Economic production occurs when labour and capital are available. Labour is proportional to the world population, which is homogeneous and grows according to externally prescribed data. Part of the economic production is invested to create capital for the next time step, while the remaining part is consumed. It is assumed that the “happiness” (called utility in the jargon) of the population depends exclusively on consumption, in a sublinear fashion: the more you consume, the happier you are; however, if you are already rich, one extra Euro will not increase your happiness as much as when you are poor.
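To make the “sublinear happiness” idea concrete, here is a minimal sketch in Python. The constant-relative-risk-aversion form and the parameter eta are standard textbook choices used purely for illustration, not necessarily the exact specification of DICE.

```python
# A minimal sketch of sublinear utility: more consumption means more happiness,
# but each extra unit adds less. Functional form and eta are illustrative choices.
def utility(c, eta=1.5):
    """Constant-relative-risk-aversion utility of consumption c (eta > 1 assumed)."""
    return c ** (1 - eta) / (1 - eta)

extra_when_poor = utility(11) - utility(10)     # one extra unit at low consumption
extra_when_rich = utility(101) - utility(100)   # one extra unit at high consumption
print(extra_when_poor > extra_when_rich)        # True: the extra unit matters less when rich
```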

In this purely economic model, the only decision the world population has to take is the saving rate – the fraction of economic production to invest for the next period. If we invest too much, we reduce our current happiness; if we invest too little, we have too little to consume in the next period. The aim is therefore to find an optimal pathway that keeps us reasonably happy now and in the future. However, there is a twist: observations suggest that we humans value the present more than the future. For example, if we are offered 1 Euro either now or next year, we prefer to be paid now, even in the absence of inflation or increasing income. However, if offered 1 Euro now or 1.03 Euro next year, we might begin to prefer the delayed but larger payment. The extra amount needed to make the later payment acceptable is called the “rate of pure time preference”; in our example, it is 3% [3, p. 28]. A high rate of pure time preference basically means that we care much less about future welfare than about present welfare. If there is economic growth (which is the case in DICE), there is an additional reason to prefer being paid now rather than later: in the future you will be richer, so one additional Euro will mean less to you than it does now, while you are still relatively poor. This effect means that the total “discount rate”, defined as the extra payment needed to make a delayed payment attractive, is even higher than the rate of pure time preference [3, chapter 1].
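The numbers from this example can be turned into a short calculation. The sketch below uses the 3% rate of pure time preference mentioned above and combines it with an assumed growth rate through the standard Ramsey-style relation r = ρ + η·g; the values of η and g are illustrative assumptions, not DICE’s calibration.

```python
# Illustrative discounting arithmetic (parameter values are assumptions).
rho = 0.03   # rate of pure time preference, as in the 1 Euro vs 1.03 Euro example
eta = 1.5    # how fast marginal utility falls as consumption rises (assumed)
g = 0.02     # annual growth rate of consumption (assumed)

r = rho + eta * g                      # total consumption discount rate
print(f"total discount rate: {r:.1%}")                                    # 6.0%
print(f"1 Euro in 10 years is worth {1 / (1 + r) ** 10:.2f} Euro today")
```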

 

The impact of climate change

To bring climate change into play, Nordhaus assumed that, apart from labour and capital, economic production also requires energy. However, energy production causes CO2 emissions. Part of the CO2 ends up in the biosphere or in the ocean, but another part remains in the atmosphere, leading to global warming.

Practically everyone agrees that substantial warming will have damaging effects on the economy. Although there may not be “good” or “bad” temperatures a priori, ecosystems and human societies are adapted to the current climate conditions, and any (rapid) change away from what we are accustomed to will cause severe stress. For example, there may not be an “ideal” sea level, but strong sea level rise – or fall – will cause severe strain on coastal communities adapted to the current level [4].

These damages are extremely hard to quantify. First, we have no reliable empirical data – we simply have not yet experienced the economic damages associated with rapid warming by several degrees. Second, there could be “low chance, high impact” events [5], i.e. events that, to our current knowledge, are deemed unlikely even under climate change, but would have dramatic consequences if they occurred – for example, a collapse of large parts of the Antarctic ice sheet. Third, there are damages, like the loss of a beautiful glacial landscape or the human suffering inflicted by famine, which cannot be quantified in terms of money.

When formulating his Nobel prize-winning DICE model, William Nordhaus tried to solve the first problem by performing an extensive review of the (scarce) existing studies on climate-induced damages and greatly extrapolating the results. For example, if data were available on reduced wheat production in the Eastern US during a heat wave, Nordhaus might assume that the damage for all food crops in Africa is, say, twice as large (as Africa is more dependent on agriculture than the US). This may still be quite ad hoc, but one might argue that even rough data are better than no data at all. The second and third points above were largely circumvented with the “willingness-to-pay” approach [2]: people were asked how much they would pay to prevent the extinction of polar bears or the collapse of the Antarctic ice sheet, for example, and the price they named was used as a substitute for the damages associated with these events.

Finally, Nordhaus came up with an estimate for climate damage:

D = k₁·T + k₂·T²

where D is the damage in % of GDP, T is the global mean temperature change, and k₁ and k₂ are constants (k₁ = −0.0045 K⁻¹ and k₂ = +0.0035 K⁻²) [2, p. 207]. Note that k₁ < 0 implies that for small T, global warming is actually beneficial. Warming by 2.5 and 5 degrees yields damages of 1.1% and 6.5% of GDP, respectively. Later versions of DICE have k₁ = 0.
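As a quick sanity check, the damage function can be evaluated directly with the coefficients quoted above; it reproduces the 1.1% and 6.5% figures given in the text.

```python
# Damage as a fraction of GDP for a warming of T kelvin (coefficients as quoted above).
def damage_fraction(T, k1=-0.0045, k2=0.0035):
    return k1 * T + k2 * T ** 2

for T in (2.5, 5.0):
    print(f"{T} K warming -> {100 * damage_fraction(T):.1f}% of GDP")
# prints roughly 1.1% and 6.5%
```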

 

To reduce global warming, humanity can reduce its carbon emissions. In other words, part of the global economic production is sacrificed to pay for greener energy. This leaves less money to spend on consumption and/or investment in capital, but it also diminishes future climate damages. Therefore, in order to maximise “happiness”, two control variables must now be chosen at each time step: the saving rate and the emission reduction fraction. We want to reduce carbon emissions enough to avoid very dangerous climate change, but also avoid unnecessary costs.
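To see how the two control variables interact, here is a deliberately crude, self-contained toy in the spirit of DICE – not the actual model. It has production from capital, damages growing with cumulative emissions, a convex abatement cost and discounted log-utility; all parameter values are made up for illustration, and a brute-force grid search over constant controls stands in for DICE’s proper optimisation.

```python
import numpy as np

T_STEPS = 20                      # 20 periods of 10 years each
rho = 0.015                       # pure rate of time preference per year (illustrative)
beta = 1 / (1 + rho) ** 10        # per-period (decadal) discount factor

def welfare(s, mu):
    """Discounted utility for a constant saving rate s and abatement fraction mu."""
    K, M = 100.0, 0.0             # capital stock and cumulative emissions (arbitrary units)
    total = 0.0
    for t in range(T_STEPS):
        A = 1.02 ** (10 * t)              # exogenous productivity growth
        Y_gross = A * K ** 0.3            # production from capital (labour folded into A)
        warming = 0.05 * M                # temperature rise scales with cumulative emissions
        damage = 0.0035 * warming ** 2    # damage as a fraction of output
        abate_cost = 0.05 * mu ** 2.6     # abatement cost as a fraction of output
        Y = Y_gross * (1 - damage - abate_cost)
        C = (1 - s) * Y                   # consumption of the remaining output
        total += beta ** t * np.log(C)    # discounted log-utility
        K = 0.9 * K + s * Y               # capital accumulation with depreciation
        M += (1 - mu) * 0.1 * Y_gross     # unabated emissions add to the atmospheric stock
    return total

# brute-force search over the two control variables
grid = np.linspace(0.0, 0.99, 34)
best = max((welfare(s, mu), s, mu) for s in grid for mu in grid)
print(f"optimal saving rate ~ {best[1]:.2f}, optimal abatement fraction ~ {best[2]:.2f}")
```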

 

 

Figure 2: Results of the DICE model, showing the optimal policy (i.e. the one maximising “happiness”) in the 2013 version of DICE. The blue lines indicate the optimal policy, while the yellow lines indicate no climate policy (i.e. zero emission reduction). The first plot shows the emission reduction fraction or “abatement”, i.e. the fraction of carbon emissions that is prevented; 1 means that no CO2 is emitted. The second plot shows the atmospheric CO2 concentration in ppmv. For the optimal policy, CO2 concentrations peak at 770 ppmv, whereas in the absence of a policy they rise beyond 2000 ppmv; the pre-industrial value is 280 ppmv. The third plot shows the global mean temperature change. For the optimal policy, it peaks at about 3.2 K, i.e. above the limit of 2 K or even 1.5 K agreed in the Paris Agreement.

 

Results and Criticism

The results in Fig. 2 show that under the “optimal” policy, i.e. the policy which maximises “happiness”, the Paris Agreement targets will not be met. This result suggests that the costs required to keep global warming below 1.5 or 2 °C are too high compared to the benefit, namely a strong reduction in climate damages. However, some researchers criticise that DICE severely underestimates the risks of climate change. For example, the damage function might be too low, and it does not explicitly take into account the risk of “low chance, high impact” events. Including such events leads to more stringent optimal climate action [6].

The rate of pure time preference has given rise to even fiercer discussions [7,8,9]. As explained above, a society’s discount rate can be estimated from market interest rates [3]. Knowing the economic growth rate, we can infer the rate of pure time preference implicit in market decisions. Many economists argue that the rate of pure time preference in models like DICE should be chosen consistently with these observations [8]. Nordhaus followed this approach. However, one can argue that even if individuals care less about the future than the present, this does not mean that such an approach is ethically defensible in the context of climate change. Individuals are mortal and may choose to consume their own savings before they die. But climate change is a global and intergenerational problem, and it has been argued [7,9] that we should care for future generations as much as for ourselves – therefore the rate of pure time preference should be (nearly) zero. Note that this still allows for some discounting, on the grounds that if future generations are richer, they might be better able to deal with climate change.

Another reason for the relatively weak carbon emission reduction in DICE’s optimal policy may be that it is too pessimistic about future costs of emission reduction. For example, DICE does not include the learning-by-doing effect: the more we reduce emissions, the more efficient technologies we discover, and the cheaper it gets. In addition, the costs for green energy are partly one-time investments, e.g. restructuring the energy distribution grids, which are currently adapted to a few central energy providers, towards a more decentralised structure with smaller providers (e.g. households with solar panels). Once these (large) efforts have been made, the costs for green energy will decrease. But if DICE overestimates the costs of carbon emission reduction, it will be biased towards recommending low reductions.

Due to these and many more issues, some researchers criticise that models like DICE are “close to useless”, and even harmful, as they pretend to give precise instructions to policy makers while in fact they struggle with huge uncertainties [10]. In my opinion, models like DICE should not be used for precise policy recommendations, such as fixing a carbon tax, but they are still useful for somewhat qualitative scenario exploration. For example, it can be fruitful to add “low chance, high impact” events or the learning-by-doing effect and investigate the qualitative effect on the optimal abatement.

Many more economy-climate models have been developed in recent decades, some of which are much more sophisticated than DICE. Moreover, there are many models focussing only on specific aspects of the problem, for example the details of the energy sector. This is still a very active field of research. So, however limited DICE may be, it has laid the foundations for a highly relevant scientific and societal discussion. And even if one should take its precise output with a pinch of salt, it is a valuable tool to help policy makers qualitatively grasp the essence of climate economics.

This post has been edited by the editorial board.

REFERENCES

[1] Nicholas Stern: “The Economics of Climate Change” (Richard T. Ely Lecture), http://darp.lse.ac.uk/papersdb/Stern_(AER08).pdf

[2] A thorough description of the model is given by William Nordhaus and Joseph Boyer, “Warming the World: Economic Models of Global Warming” (https://eml.berkeley.edu//~saez/course131/Warm-World00.pdf). There are newer model versions available, but the underlying concepts remain the same.

[3] A thorough introduction to discounting is given in this book: Christian Gollier, “Pricing the Future: The economics of discounting and sustainable development” (http://idei.fr/sites/default/files/medias/doc/by/gollier/pricing_future.pdf), especially chapter 1.

[4] see e.g. Wong, P.P., I.J. Losada, J.-P. Gattuso, J. Hinkel, A. Khattabi, K.L. McInnes, Y. Saito, and A. Sallenger, 2014: Coastal systems and low-lying areas. In: Climate Change 2014: Impacts, Adaptation, and Vulnerability. Part A:
Global and Sectoral Aspects. Contribution of Working Group II to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change https://www.ipcc.ch/pdf/assessment-report/ar5/wg2/WGIIAR5-Chap5_FINAL.pdf

[5] e.g. Lenton et al. “Tipping elements in the Earth’s climate system”, http://www.pnas.org/content/pnas/105/6/1786.full.pdf

[6] Cai et al., “Risk of multiple interacting tipping points should encourage rapid CO2 emission reduction”, https://www.nature.com/articles/nclimate2964.pdf?origin=ppub
[7] The Stern Review on the Economics of Climate Change (http://webarchive.nationalarchives.gov.uk/20100407172811/http://www.hm-treasury.gov.uk/stern_review_report.htm)

[8] Peter Lilley: “What’s wrong with Stern?” (http://www.thegwpf.org/content/uploads/2012/10/Lilley-Stern_Rebuttal3.pdf)

[9] Frank Ackermann: “Debating Climate Economics: The Stern Review vs. Its Critics” (http://www.ase.tufts.edu/gdae/Pubs/rp/SternDebateReport.pdf)

[10] Robert Pindyck, “The Use and Misuse of Models for Climate Policy”, https://academic.oup.com/reep/article/11/1/100/3066301

 

What can artificial intelligence do for climate science?

What is machine learning?

Artificial intelligence, and its subfield of machine learning, is a much-discussed topic, as it plays an increasing role in our daily lives. Examples are translation programs, speech recognition software in mobile phones and the automatic completion of search queries. But what value do these new techniques have for climate science? And how complicated is it to use them?

The idea behind machine learning is simple: a computer is not explicitly programmed to perform a particular task, but rather learns to perform a task based on some input data. There are various ways to do this, and machine learning is usually separated into three different domains: supervised learning, unsupervised learning and reinforcement learning. Reinforcement learning is of less interest to climate science, and will therefore not be touched upon here.

In supervised learning, the computer is provided both with the data and with some information about the data: the data is labeled. This means that each chunk of data (usually called a sample) has a label. This label can be a string (e.g. a piece of text), a number or, in principle, any other kind of data. The data samples could be, for example, images of animals, and the labels the names of the species. The machine learning program then learns to connect images with labels and, when successfully trained, can correctly label new images of animals that it has not seen before. This basic idea is sketched in Figure 1.


Figure 1: A schematic of supervised machine learning in climate science.

In a climate context, the “image” might be a rough global representation of, for example, surface pressure, and the label some local phenomenon such as strong rainfall in a small region. This is sketched in Figure 2. Some contemporary machine learning methods can decide which features of a picture are related to its label with very little or no prior information. This is comparable to certain types of human learning. Imagine being taught how to distinguish between different tree species by being shown several images of trees, each labeled with a tree name. After seeing enough pictures, you will be able to identify the tree species shown in images you had not seen before. How you managed to learn this may not be clear to you, but your brain manages to translate the visual input reaching your retina into information you can use to interpret and categorize subsequent visual inputs. This is exactly the idea of supervised machine learning: one presents the computer with some data and a description of the data, and then lets the computer figure out how to connect the two.

 

Figure 2: Example of using machine learning to predict local rainfall.

 

In unsupervised learning, on the other hand, the machine learning program is presented with some data without any additional information on the data itself (such as labels). The idea is that the program searches autonomously for structure or connections in the data. This might be, for example, certain weather phenomena that usually occur together (e.g. very humid conditions in place A and strong rainfall in place B). Another example is “typical” patterns of the surface temperature of the ocean. These temperature patterns look slightly different every day, but with machine learning we can find a small number of “typical” configurations – which can then help in understanding the climate.
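As an illustration, the sketch below groups synthetic daily “temperature patterns” into a few typical configurations using k-means clustering from scikit-learn; the data are random placeholders, and in practice each row would be a flattened map of, for example, sea surface temperature.

```python
# Unsupervised learning sketch: find a few "typical" patterns in unlabeled data.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
daily_patterns = rng.normal(size=(365, 50))   # 365 days, 50 grid points each (synthetic)

kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(daily_patterns)
print(kmeans.labels_[:10])             # which "typical pattern" each day belongs to
print(kmeans.cluster_centers_.shape)   # the 4 typical patterns themselves
```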

How difficult is it to implement machine learning techniques?

Machine learning techniques often sound complicated and forbidding. However, due to the widespread use of many machine learning techniques both in research and in commercial applications, there are many publicly available, user-ready implementations. A good example is the popular Python library scikit-learn [1]. With this library, classification or regression models based on a wide range of techniques can be constructed with a few lines of code. It is not necessary to know exactly how each algorithm works. If one has a basic understanding of how to apply and evaluate machine learning models, the methods themselves can to a large extent be treated as black boxes: one simply uses them as tools to address a specific problem and checks whether they work.
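To give a feel for the “few lines of code” point, here is a small supervised-learning sketch with scikit-learn. The data are synthetic placeholders: imagine each row as a coarse surface pressure field and the label as “heavy rain in a given city: yes/no”.

```python
# Supervised learning sketch: predict a binary label from a gridded field.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 200))          # 1000 days, 200 pressure grid points (synthetic)
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # toy rule standing in for "heavy rain: yes/no"

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print("accuracy on unseen days:", accuracy_score(y_test, model.predict(X_test)))
```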

What can machine learning do for climate science?

By now you are hopefully convinced that machine learning methods are 1) widely used and 2) quite easy to apply in practice. This still leaves the most important question open: can we actually use them in climate science? And even more importantly: can they help us actually understand the climate system? In most climate science applications, machine learning tools can be seen as engineering tools. Take, for example, statistical downscaling of precipitation. Machine learning algorithms are trained on rainfall data from reanalyses and in-situ observations, and thus learn how to connect large-scale fields with local precipitation. This “knowledge” can then be applied to low-resolution climate simulations, providing an estimate of local precipitation that is not available in the original data. A similar engineering approach is “short-cutting” expensive computations in climate models, for example in the radiation schemes. If trained on a set of calculations performed with a complex radiation scheme, a machine learning algorithm can provide approximate solutions for new climatic conditions and thus avoid the need to re-run the full scheme at every time step of the model simulation, making it computationally very efficient.
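The “short-cutting” idea can be sketched as follows: fit a fast statistical emulator to a deliberately simple stand-in for an expensive physics calculation, then reuse the emulator instead of recomputing. Everything here is synthetic and illustrative, not an actual radiation scheme.

```python
# Emulation sketch: learn a cheap approximation of an expensive computation.
import numpy as np
from sklearn.neural_network import MLPRegressor

def expensive_scheme(x):
    """Placeholder for a costly physics computation (e.g. a radiation scheme)."""
    return np.sin(x[:, 0]) + 0.5 * x[:, 1] ** 2

rng = np.random.default_rng(2)
X_train = rng.uniform(-2, 2, size=(5000, 2))   # e.g. simplified atmospheric states
y_train = expensive_scheme(X_train)            # "truth" from the expensive scheme

emulator = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                        random_state=0).fit(X_train, y_train)

X_new = rng.uniform(-2, 2, size=(5, 2))        # new climatic conditions
print(np.round(emulator.predict(X_new), 2))    # fast approximate answer
print(np.round(expensive_scheme(X_new), 2))    # compare with the expensive scheme
```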

However, next to this engineering approach, there are also ways to use machine learning methods to actually gain new understanding. For example, by systematically changing the extent of the input data, one can try to find out which parts of the data are relevant for a specific task – for example, “which parts of the atmosphere provide the information necessary to predict precipitation or wind speeds above a specific city and at a specific height?”
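A minimal sketch of this idea, again with synthetic data: the same model is trained on different subsets of the input grid, and the subsets are compared by how well they predict the target. In a real application the columns would be grid points of an atmospheric field.

```python
# Vary the input extent and see which region carries the predictive information.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
X = rng.normal(size=(2000, 100))                # 100 grid points (synthetic)
y = (X[:, 10:20].sum(axis=1) > 0).astype(int)   # only points 10-19 actually matter here

regions = {"points 0-9": slice(0, 10),
           "points 10-19": slice(10, 20),
           "points 20-99": slice(20, 100)}

for name, cols in regions.items():
    score = cross_val_score(LogisticRegression(max_iter=1000), X[:, cols], y).mean()
    print(f"{name}: cross-validated accuracy {score:.2f}")
```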

As a final point, artificial intelligence and machine learning techniques are widely used in research and industry, and they will evolve in the future independently of their use in climate research. They therefore offer an opportunity to obtain new techniques “for free” – an opportunity that can and should be seized.

[1] scikit-learn.org

 

This article has been edited by Gabriele Messori and Célia Sapart.

What is in the (European) air?

You thought that Mauna Loa was the only observatory to provide continuous measurements of the atmospheric carbon dioxide concentration, and were disappointed because Hawaii is way too far from your study area or because you wanted to know how bad the air is in your hometown? The US has been monitoring the composition of the atmosphere since 1972, but what about Europe? Since 2008, Europe has had its own measurement network, managed by a research infrastructure called ICOS (Integrated Carbon Observation System).

Context

Since the beginning of the industrial era (around 1750), atmospheric concentrations of greenhouse gases such as carbon dioxide (CO2), methane (CH4) and nitrous oxide (N2O) have increased, mostly because of human activities. As a consequence, the climate is getting warmer, which could have dramatic impacts on our daily lives. The evolution of the atmospheric composition should therefore be closely monitored.

To improve our understanding of the climate system and achieve good climate predictions, high-precision measurements of greenhouse gas sources and sinks are needed. A large number of datasets already exist, but these data are often too difficult to access, too scattered, inconsistent or unreliable.

ICOS main objectives

This is why the main goal of ICOS is to provide scientists, citizens and decision makers with harmonized, high-quality measurements of greenhouse gases in Europe. But the scope of the ICOS mission is wider, because these data can further be used to:

  • quantify greenhouse gas budgets
  • improve climate predictions
  • check how well/badly European countries are doing in reducing their greenhouse gas emissions
  • adapt policies

ICOS also encompasses an educational dimension by training young scientists through summer schools, workshops and conferences and by spreading knowledge about the carbon cycle to the general public.

Network

ICOS is subdivided into national networks managed by research institutes. Twelve countries are currently members of ICOS: Belgium, Czech Republic, Denmark, Finland, France, Germany, Italy, Netherlands, Norway, Sweden, United Kingdom and Switzerland. The regional dynamics of greenhouse gases is monitored by a network of 126 measurement stations spread across these countries. Among these stations, 71 are ecosystem stations, 34 are atmospheric stations and 21 are ocean stations (Figure 1). ICOS is growing rapidly, and eight other countries are expected to become members soon: Poland, Ireland, Estonia, Portugal, Spain, Hungary, Greece and South Africa.

To be part of the ICOS standardized network, candidate sites have to follow strict specifications regarding equipment, measurement protocols and data processing in order to ensure a homogeneous dataset. Periodic measurements with independent instruments are also carried out across the network to limit systematic errors. Moreover, ICOS is planning to make its data products compatible with the outputs of other international measurement networks by taking part in an intercomparison programme.

Atmosphere stations (Figure 2)

Atmospheric CO2, CO and CH4 concentrations are continuously measured at atmosphere stations, together with a range of standard meteorological variables such as air temperature, atmospheric pressure, relative humidity, and wind direction and speed.

Figure 2: Cabauw atmosphere station in the Netherlands (ICOS ERIC, https://meta.icos-cp.eu/labeling/)

Ecosystem stations (Figure 3)

Flux towers measure the exchange of water vapour, greenhouse gases and energy between the different types of ecosystems and the atmosphere. The list of variables collected at ecosystem stations is available here.

Figure 3: Brasschaat ecosystem station in Belgium (ICOS ERIC, http://www.icos-belgium.be/inf_ecosystem.html)

Ocean stations (Figure 4)

Ocean stations include ships, fixed buoys and flux towers. Carbon fluxes are measured at the ocean-atmosphere interface, together with other marine variables such as pH, temperature and salinity. You can have a look at the exhaustive list of measurements here.

Figure 4: VLIZ data buoy ocean station (ICOS ERIC, https://www.icos-ri.eu/sites/default/files/2017-07/ICOS_Belgium_Media_Kit_EN_0.pdf)

Data products

Data collected by national network stations are gathered, processed and stored by central facilities called Thematic Centers (TC): the Atmosphere Thematic Center (ATC), the Ocean Thematic Center (OTC) and the Ecosystem Thematic Center (ETC).

You can access all these precious data for free here, on the carbon portal. Among many examples, you can find ecosystem flux time series, atmospheric methane observations or the global carbon budget. The portal is easy to handle: you can apply filters to refine your search, click on the “eye” icon to preview the data, or simply select a dataset to obtain its description.

These data are protected by a Creative Commons Attribution 4.0 International licence, which means you can share and even modify them, provided that you document any change, mention the original data source and give a link to the licence text (https://data.icos-cp.eu/licence). It is of course necessary to cite ICOS when you use the data. To make this as easy as possible, the citation is provided when you download a dataset.

On the website of the Atmosphere Thematic Center, you can also find near-real-time data that are computed every morning from all ICOS atmospheric stations. For example, Figures 5 and 6 show time series of the fraction of CO2 (top plot) and CH4 (bottom plot) in the air mass coming from the European continent, measured at the Mace Head station (MHD). Depending on the wind direction, this atmospheric station, located on the west coast of Ireland, is exposed either to the North Atlantic air mass or to the European continental air mass, offering a unique way to study these very different air masses. The time series for the period 2011-2017 show a clear upward trend for both greenhouse gases in the continental air mass. These increases are mainly caused by growing emissions associated with human activities.

Figure 5: CO2 molar fraction in continental air mass between 2011 and 2017 at Mace Head atmospheric station (ICOS ERIC, https://icos-atc.lsce.ipsl.fr/P0031.1)

Figure 6: CH4 molar fraction in continental air mass between 2011 and 2017 at Mace Head atmospheric station (ICOS ERIC, https://icos-atc.lsce.ipsl.fr/P0031.1)

 

Hopefully, this post helped you to get to know ICOS better. Do not hesitate to use this great tool in the future!

Find out more about ICOS

https://www.icos-ri.eu/

For those interested, the 3rd ICOS Science Conference will take place between the 11th and the 13th of September 2018 in Prague, Czech Republic.

Edited by Gabriele Messori and Célia Sapart