Climate: Past, Present & Future


Habits in numerical model construction


 

Numerical models are omnipresent in climate research. Constructed to understand the past, to forecast future climate and to gain new knowledge of natural processes and interactions, they enable the simulation of experiments at otherwise unreachable temporal and spatial scales. These instruments have long been considered to be fed – or even determined – by theories or observations alone. But are they really? Sociological factors are at play too. It is precisely these influences that the present blog entry presents, through a review of cornerstone sociological studies and new anthropological insights into the decision-making of model builders.

 

What we here define as numerical models are computer simulations of processes occurring within a system. Global Circulation Models (GCMs), the climate models used to simulate the Earth’s climate and its response to changes in greenhouse gas concentrations in the atmosphere, are certainly the most widely known representatives of this category of scientific instruments. However, numerical models of all types abound in research fields associated with the study of climate change. Some must be run at national supercomputing centres, while others run on a laptop. Some are the product of entire research teams, while others result from individual endeavour. Some simulate climate processes at a global scale, while others focus on gas bubbles in a cubic metre of peat. The term “numerical model” employed hereafter embraces this diversity.

Models have long been considered by philosophers to be logically deduced from theory and observations alone. In the late 1990s, Morgan and Morrison (1999) initiated a new turn by insisting on the partial autonomy of models from theory and observations, arising from the diversity of their ingredients. It is this partial autonomy, the authors claim, which enables us to gain knowledge through models about theories and the world. Simultaneously, several studies in the field of Science and Technology Studies (STS) addressed the actual activity of modelling through immersion within climate research institutes, thereby unveiling complex interactions between actors, institutions and stakeholders.

 

Why does it matter?

Model outputs are widely used as a basis for decision-making, from the local to the international level. In a context of political and public distrust, much attention has been devoted to increasing the credibility of numerical models. Model intercomparison projects, uncertainty assessments and “best practice” guidelines have flourished. Yet improving modelling activity also necessarily requires understanding and analysing current practices. Empirical studies of actual practices are a prerequisite for improving reflexivity – and thereby control – over implicit mechanisms likely to affect model outputs. The question should hence be how models are actually constructed, before addressing how they should be.

 

What we already know…

Current practices can be analysed at different levels, from the individual to the institutional. Most empirical sociological studies have so far investigated the interface between decision-making spheres and modelling, as well as institutional and disciplinary cultures of modelling. Shackley (2001) notably identified different “epistemic lifestyles” within global circulation modelling, which appeared to be influenced, among other factors, by the role, location, objectives and funding of the organisation within which the modelling project was conducted. Together with colleagues, he further highlighted the influence of modellers’ own perceptions of the policy process on their modelling practices (Shackley et al., 1999). Similar assertions have been made in neighbouring fields (such as hydrology, land use change and integrated assessment modelling), but rarely based on empirical material.

 

 … and what we know less

The majority of existing studies considered the epistemic stance of modellers – their perception of the modelling activity, of its objectives and of their own roles, as well as their positioning with respect to particular issues encountered in climate modelling. However, how modellers actually make choices during the construction of their model – and which factors influence these choices – has barely been examined. This is exactly what we aimed to scrutinize in our study presented at the EGU General Assembly 2018.

 

Decisions in model construction

Studying decision-making within the process of model construction implies assuming that choices have to be made. As straightforward as this claim might seem, the very existence of choices was largely absent from philosophical reflection on modelling during the 20th century and has yet to be granted appropriate attention. Modellers do, however, make a plethora of choices during model building, sometimes in an iterative manner. The temporal and spatial scales need to be selected, and along with them the natural processes at play, their interactions, their representation through physical equations or parameterization, their numerical implementation, the source of data, the hardware and software in use, etc. (Babel et al., 2019).

The following video summarizes the rationale behind our research and the approach we used.

Choosing how to represent a natural process

We decided to focus on one particular type of choice: the representation of natural processes through equations (such as transfer functions) and their numerical implementation. Even when modellers have chosen to simulate a particular natural process (evapotranspiration, for example), several representations of one and the same process can generally be contemplated (Guillemot, 2010). We then asked on what basis modellers choose one representation over another. We expected mostly technical aspects to come to the fore, such as the required data or software and hardware limitations; these indeed take a prominent place in the specialized literature.

 

Interviewing modellers

We adopted a well-established methodology from the social sciences based on semi-structured interviews, conducted with researchers who had developed a model from its earliest stage onwards. The interviewees were not aware of the exact subject of the interview. Prior to the interviews, we identified, in the literature presenting each model, one or several processes for which no justification was given for the choice of the employed representation. After introductory, general questions on the modelling project, the researchers were invited to explain the use of this particular representation.

All interviews were recorded and transcribed. The interviewees came from five universities or research institutes located in four countries in Europe and North America. All but one were senior scientists. Diversity was sought among the types of models (from highly complex ones to models openly described as simplistic) and scientific disciplines, ranging from ecology to geochemistry. With the exception of astrophysics, which we included in order to test a hypothesis not detailed in the present blog entry, all models were devoted to research questions associated with climate change. A total of 14 interviews were conducted.

 

The role of actors – or what we did not expect

As stated above, we expected the modellers to justify the choice of representations mostly with technical constraints. They did not. Rather, their narratives placed particular emphasis on actors – colleagues, professors, PhD directors – who belonged to the modeller’s network during the construction of his or her model. Many of the interviewees had started building their model (which they continue developing today) as doctoral students. The use of a certain process representation was often explained as having been passed on by the PhD research director or colleagues. Two decades later, the representation was still part of the model – and the modelling practices of these actors played a paramount role in the modellers’ justification of its use, even in competitive and controversy-laden contexts (Babel et al., 2019).

 

From transfer to habit

Many of the interviewees were surprised to be asked about an equation or a numerical scheme they did not perceive to be a distinctive, novel feature of their model. Even though alternatives to the process representation existed in all the analysed cases, the interviewees did not necessarily consider that they had made a choice. The choice had often been made by others at the very beginning of their career and transferred to them by their PhD directors or colleagues. They incorporated it into their own practices, a process one of the interviewees described as a “natural evolution”.

“(…) during my PhD, my PhD director was only working with [this process representation]. And so I was educated with it. And so I couldn’t imagine doing something else (…) And so after my thesis, I naturally evolved with this approach because it was what I knew. It was a natural evolution, it is as… yes, when we can speak a language, we evolve with this language. So here it is a bit the same, I knew how to speak [this process representation] and so I naturally kept on evolving with this approach. But it is true that… yes, it is the main reason, I believe” (interview quoted in Babel et al., 2019).

The natural evolution this interviewee referred to can be equated with a path dependence. The modeller developed skills and expertise through the repeated use of the representation, which rendered its implementation increasingly self-evident in the course of his career. We employed the sociological concept of habit, notably analysed by Latour (2013), to describe this progressive incorporation of choices that become self-evident practices.

Figure 1. Illustration of the transfer of natural process representations and their incorporation within modelling practices.

 

Habits are required – but self-reflexivity over them too

While the term often has a negative overtone in everyday language, habits can be considered deeply necessary. As Latour (2013) states, they smooth out the course of action: a modeller who constantly re-considered, on a daily basis, the use of a programming language, a database or a certain variable would lose herself in perpetual decision-making requiring both attention and time. By repeating actions without engaging on new paths – by evolving with the same language, as the interviewee quoted above put it – we gain in efficiency and expertise. Yet a danger looms: that of falling into automatisms, losing sight of and control over the initial crossroads and hence the ability to reverse, whenever necessary, our paths of action. Questioning modelling habits and tracking them back to their roots – on both an individual and a collective basis – appears an unavoidable step towards a better understanding of existing modelling practices.

 

Collectives may reinforce path dependence

The modellers interviewed during our study displayed a striking awareness that their process representation was often particular to a certain collective (a “school” or “field”) with which they nowadays identified. By passing a process representation on to them, their director or colleagues had also anchored them within a network: that of the scientists using the same representation. This anchoring, often unconscious at the beginning of their career, could reinforce the path dependence. Changing process representation would not only require considerable effort and time to regain the efficiency and expertise gained over the years, but would also imply turning away from a collective within which the modellers had established themselves (Babel et al., 2019).

 

A word of caution

Our study does not describe modellers as being determined by habits. Rather, we aimed at shedding light on inter-individual and collective influences within the modelling process that are often disregarded in field-specific literature. We assume habits play a role among other factors. The fact that these other (computational, cost-related) factors were rarely mentioned by the modellers during the interviews could be explained by their being perceived as evident or self-explanatory; additional studies would be required to explore the intertwinement of other triggers of model decisions with inter-individual and collective influences.

Finally, this study was based on a limited number of interviews: we did not seek exhaustiveness or generalization, but case studies enabling a first glance at rarely studied processes.

This post has been edited by Janina Bösken and Carole Nehme.

REFERENCES
Babel, L., Vinck, D., Karssenberg, D. (2019). Decision-making in model construction: unveiling habits. Environmental Modelling and Software, 120, in press. DOI: https://doi.org/10.1016/j.envsoft.2019.07.015

Guillemot, H. (2010). Connections between simulations and observation in climate computer modeling. Scientist’s practices and “bottom-up epistemology” lessons. Stud. Hist. Philos. Sci. Part B Stud. Hist. Philos. Mod. Phys., Special Issue: Modelling and Simulation in the Atmospheric and Climate Sciences 41, 242–252.

Latour, B. (2013). An Inquiry into Modes of Existence. An Anthropology of the Moderns. Harvard University Press, Cambridge, Massachusetts.

Morgan, M.S., Morrison, M. (1999). Models as Mediators: Perspective On Natural and Social Science. Cambridge University Press, Cambridge.

Shackley, S. (2001). Epistemic Lifestyles in Climate Change Modelling, in: Miller, C.A., Edwards, P.N. (Eds.), Changing the Atmosphere: Expert Knowledge and Environmental Governance. MIT Press, Cambridge, Massachusetts.

Shackley, S., Risbey, J., Stone, P., Wynne, B. (1999). Adjusting to Policy Expectations in Climate Change Modeling. Clim. Change 43, 413–454.

What can artificial intelligence do for climate science?

What is machine learning?

Artificial intelligence, and its subfield machine learning, is a trending topic, as it plays an increasing role in our daily lives. Examples include translation programs, speech recognition software on mobile phones and automatic completion of search queries. However, what value do these new techniques have for climate science? And how complicated are they to use?

The idea behind machine learning is simple: a computer is not explicitly programmed to perform a particular task, but rather learns to perform a task based on some input data. There are various ways to do this, and machine learning is usually separated into three different domains: supervised learning, unsupervised learning and reinforcement learning. Reinforcement learning is of less interest to climate science, and will therefore not be touched upon here.

In supervised learning, the computer is provided with both the data and some information about the data: i.e. the data is labeled. This means that each chunk of data (usually called a sample) has a label. The label can be a string (e.g. a piece of text), a number or, in principle, any other kind of data. The data samples could be, for example, images of animals, and the labels the names of the species. The machine learning program then learns to connect images with labels and, when successfully trained, can correctly label new images of animals that it has not seen yet. This basic idea is sketched in Figure 1.


Figure 1: A schematic of supervised machine learning in climate science.

In a climate context, the “image” might be a rough global representation of, for example, surface pressure, and the label some local phenomenon like strong rainfall in a small region. This is sketched in Figure 2. Some contemporary machine learning methods can decide which features of a picture are related to its label with very little or no prior information. This is comparable to certain types of human learning. Imagine being taught how to distinguish between different tree species by being shown several images of trees, each labeled with a tree name. After seeing enough pictures, you will be able to identify the tree species shown in images you have not seen before. How you managed to learn this may not be clear to you, but your brain translates the visual input reaching your retina into information you can use to interpret and categorize successive visual inputs. This is exactly the idea of supervised machine learning: one presents the computer with some data and a description of the data, and then lets the computer figure out how to connect the two.

 

Figure 2: Example of using machine learning to predict local rainfall.

 

In unsupervised learning, on the other hand, the machine learning program is presented with some data, without any additional information on the data itself (such as labels). The idea is that the program searches autonomously for structure or connections in the data. These might be, for example, certain weather phenomena that usually occur together (e.g. very humid conditions at one place A, and strong rainfall at another place B). Another example is “typical” patterns of the surface temperature of the ocean. These temperature patterns look slightly different every day, but with machine learning we can find a small number of “typical” configurations – which can then help in understanding the climate.
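As a rough sketch of this idea, the snippet below builds an entirely made-up set of daily "temperature fields" on a tiny grid (toy data, not real ocean observations) and uses k-means clustering to recover two "typical" configurations without any labels:

```python
# Sketch: recovering "typical" patterns from unlabeled fields with k-means.
# All data here is synthetic toy data on a tiny 4x5 grid, not real SST fields.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Two idealized "typical" patterns, flattened to 20-element vectors.
pattern_a = np.tile(np.linspace(-1.0, 1.0, 5), 4)    # west-east gradient
pattern_b = np.repeat(np.linspace(1.0, -1.0, 4), 5)  # north-south gradient

# 200 daily fields: each day is one of the patterns plus small noise.
days = np.array([
    (pattern_a if rng.random() < 0.5 else pattern_b) + 0.1 * rng.normal(size=20)
    for _ in range(200)
])

# k-means finds the two underlying configurations without any labels.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(days)
print(kmeans.cluster_centers_.shape)  # (2, 20): two "typical" patterns
```

On real data, each recovered cluster centre would correspond to a recurrent spatial configuration of the field.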

How difficult is it to implement machine learning techniques?

Machine learning techniques often sound complicated and forbidding. However, due to the widespread use of many machine learning techniques in both research and commercial applications, there are many publicly available, user-ready implementations. A good example is the popular Python library scikit-learn [1]. With this library, classification or regression models based on a wide range of techniques can be constructed with a few lines of code. It is not necessary to know exactly how the algorithm works. If one has a basic understanding of how to apply and evaluate machine learning models, the methods themselves can to a large extent be treated as black boxes. One simply uses them as tools to address a specific problem, and checks whether they work.
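To illustrate the "few lines of code" claim, here is a minimal scikit-learn sketch on a synthetic classification problem (the data is generated for illustration, not climatic):

```python
# Sketch: a classifier in a few lines with scikit-learn, trained on a
# synthetic problem (no real climate data involved).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Generate a toy labeled dataset: 500 samples with 10 features each.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Fit the model and check whether it works, treating it as a black box.
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
accuracy = model.score(X_test, y_test)
print(f"test accuracy: {accuracy:.2f}")
```

Swapping the random forest for another technique is typically a one-line change, which is precisely what makes these libraries convenient.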

What can machine learning do for climate science?

By now you are hopefully convinced that machine learning methods are: 1) widely used and 2) quite easy to apply in practice. This however still leaves the most important question open: can we actually use them in climate science? And even more importantly: can they help us actually understand the climate system? In most climate science applications, machine learning tools can be seen as engineering tools. Take for example statistical downscaling of precipitation. Machine learning algorithms are trained on rainfall data from reanalyses and in-situ observations, and thus learn how to connect large-scale fields and local precipitation. This “knowledge” can then be applied to low-resolution climate simulations, providing an estimate of local precipitation that was not available in the original data. A similar engineering approach is “short-cutting” expensive computations in climate models, for example in the radiation schemes. If trained on a set of calculations performed with a complex radiation scheme, a machine learning algorithm can provide approximate solutions for new climatic conditions and thus avoid the need to re-run the scheme at every time step of the model simulation, making it computationally very efficient.
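The "short-cutting" idea can be sketched as follows. Here the "expensive scheme" is just a toy stand-in function (a real radiation code would be far more complex), and a random forest is trained to emulate it:

```python
# Sketch: emulating an "expensive" computation with a regression model.
# expensive_scheme below is a toy stand-in, not a real radiation code.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def expensive_scheme(x):
    # Placeholder for e.g. a detailed radiation scheme.
    return np.sin(3 * x[:, 0]) + 0.5 * x[:, 1] ** 2

rng = np.random.default_rng(0)
X_train = rng.uniform(-1, 1, size=(2000, 2))
y_train = expensive_scheme(X_train)

# Train the emulator once on outputs of the expensive scheme...
emulator = RandomForestRegressor(n_estimators=100, random_state=0)
emulator.fit(X_train, y_train)

# ...then, at "run time", the cheap emulator replaces the expensive call.
X_new = rng.uniform(-1, 1, size=(200, 2))
error = np.abs(emulator.predict(X_new) - expensive_scheme(X_new)).mean()
print(f"mean absolute emulation error: {error:.3f}")
```

The trade-off is a small approximation error in exchange for a prediction that is orders of magnitude cheaper than re-running the original scheme.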

However, next to this engineering approach, there are also ways to use machine learning methods to actually gain new understanding. For example, by systematically changing the extent of the input data, one can try to find out which part of the data is relevant for a specific task – for example, “which parts of the atmosphere provide the information necessary to predict precipitation/windspeeds above a specific city and at a specific height?”

As a final point, artificial intelligence and machine learning techniques are widely used in research and industry, and will evolve in the future independently of their use in climate research. They therefore provide an opportunity to get new techniques “for free”: an opportunity which can and should be used.

[1] scikit-learn.org

 

This article has been edited by Gabriele Messori and Célia Sapart.

Of butterflies and climate: how mathematics helps us to better understand the atmosphere

Applied mathematics is often seen as an obscure field, which the general public has no hope of ever understanding. In the context of climate science, this is far from the truth. In fact, many mathematical concepts and ideas applied to the study of the climate system stem from intuitive arguments. While their implementation can be very complex, understanding the basic ideas behind them does not require a PhD in Science.

The Lorenz 1963 attractor, often known as the “Lorenz Butterfly”. Author: Paul Bourke (http://paulbourke.net/fractals/lorenz/)

For example, this is the case for some recent developments in the field of dynamical systems analysis applied to atmospheric data. The atmosphere changes continuously and in many ways: winds become stronger or die down, temperatures rise or fall, and rain comes and goes. Understanding this evolution is important in many domains, from weather forecasting to air traffic management to catastrophe response services. The basic idea of the dynamical systems approach is to visualize the evolution of the atmosphere as a series of points connected by a line, which form a trajectory. The figure above shows a well-known example of such a trajectory: the so-called “Lorenz butterfly” (Lorenz, 1963).

Now imagine focusing on a specific variable – for example daily surface temperature – and a specific region – let’s say Europe. We can build a trajectory, similar to the one shown above, describing the day-by-day properties of this two-dimensional (latitude by longitude, just as in a geographical map) temperature field. The temperature varies from day to day, so each day corresponds to a different point along the trajectory. If two days are very similar to each other, they will correspond to two points very close together. If, on the contrary, they show very different temperatures, the points will be further apart. Similar days may also be well separated in time, for example occurring during different years: the trajectory then returns close to a point it had previously visited, meaning that the closeness of points and their distance in time do not always correlate. In the figure below, for example, the three turquoise dots are close to each other and also correspond to successive days along the atmospheric trajectory. The two red dots correspond to temperature configurations similar to those of the turquoise dots, but are separated from the latter by several days.

The continuous black line represents an idealized trajectory, while the circles correspond to successive days along the trajectory. The arrows indicate the direction of time.
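For the curious, a trajectory like the "Lorenz butterfly" shown earlier can be traced with a few lines of code. The sketch below uses simple Euler time-stepping (a quick-and-dirty choice; a proper integrator such as Runge–Kutta would be more accurate) with the classic parameter values from Lorenz (1963):

```python
# Sketch: tracing the Lorenz (1963) trajectory with simple Euler stepping
# and the classic parameters sigma=10, rho=28, beta=8/3.
import numpy as np

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    dx = sigma * (y - x)          # the three Lorenz (1963) equations
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return state + dt * np.array([dx, dy, dz])

state = np.array([1.0, 1.0, 1.0])
trajectory = [state]
for _ in range(5000):
    state = lorenz_step(state)
    trajectory.append(state)
trajectory = np.array(trajectory)  # each row is one point of the "butterfly"
print(trajectory.shape)  # (5001, 3)
```

Plotting, say, the first column against the third (e.g. with matplotlib) reproduces the familiar butterfly shape.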

This way of visualizing the atmosphere might seem bizarre, but it can give us some very powerful insights into how the climate system works. Consider, for example, summer heatwaves in Europe. The most severe ones can persist for several days and can have major impacts on human health, the environment and the economy. As can be intuitively understood, their persistence is due to the fact that the large-scale atmospheric conditions causing them are also persistent. If we return to our atmospheric trajectory, this means that we have a large number of points which are close to each other and successive in time – as is the case for the three turquoise dots in the figure above. Namely, the trajectory moves very slowly and for several days the large-scale circulation changes only slightly. In mathematical terms, this is a “sticky” state, and again the name is very intuitive! Analyzing the stickiness of atmospheric states helps us predict how long a given circulation configuration is likely to last, thus providing useful information for weather forecasts.
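A toy illustration of stickiness: in the sketch below (idealized random-walk data, not an atmospheric field), we measure how many consecutive "days" a trajectory stays within a small radius of a chosen state. A slow segment of the trajectory is "sticky"; a fast one is not:

```python
# Sketch: stickiness as residence time near a state (idealized toy data).
import numpy as np

rng = np.random.default_rng(0)
# Toy trajectory: 50 slow ("sticky") days followed by 50 fast days.
slow = np.cumsum(0.01 * rng.normal(size=(50, 2)), axis=0)
fast = slow[-1] + np.cumsum(0.5 * rng.normal(size=(50, 2)), axis=0)
trajectory = np.vstack([slow, fast])

def residence_time(traj, start, radius=0.3):
    """Number of consecutive days the trajectory stays within `radius`
    of the state at index `start`."""
    days = 0
    for point in traj[start:]:
        if np.linalg.norm(point - traj[start]) > radius:
            break
        days += 1
    return days

print("sticky segment:", residence_time(trajectory, 0))
print("fast segment:  ", residence_time(trajectory, 50))
```

The sticky segment yields a much longer residence time than the fast one, which is exactly the kind of persistence signal relevant for heatwaves.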

The next natural step is to try to predict what the atmosphere will do once it has left a sticky state. Dynamical systems theory can again help us. It is in fact possible to define another quantity called “local dimension”, which tells us how complex the state of the atmosphere is. Once again the word “complex” here means exactly what you imagine: a complex temperature state will be one with lots of small, complicated spatial patterns. A simple state will be one with only a small number of large-scale features: for example, a day with high temperatures across the Mediterranean region and cold temperatures over most of Continental and Eastern Europe. Returning to our trajectory, these complex (or high-dimensional) and simple (or low-dimensional) states can be interpreted as follows. In the simple case, it is easy to predict the direction the trajectory will take in the future. This is the same as saying that all similar states evolve in a similar way. So if we want to forecast tomorrow’s temperature and we know that today is a “simple” state, we can look for states similar to today in the past years and we know the evolution of today’s state will be similar to that of these past states. In the complex case, on the contrary, it is very difficult to predict what the trajectory will do in the future. This means that similar atmospheric states will evolve in very different ways, and looking at past days with similar temperatures to today will not help us to forecast tomorrow’s temperature. A complex, high-dimensional state will therefore be more challenging for weather forecasters than a simple, low-dimensional one.
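The idea of forecasting a "simple" state by looking at similar past states can be sketched as a toy analogue forecast. Here the "atmosphere" is just a noisy periodic trajectory (an idealized stand-in for a real temperature field): we find the past state most similar to today and take its successor as tomorrow's forecast:

```python
# Sketch of an "analogue" forecast on a toy trajectory (idealized data).
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(2000) * 0.1
# A noisy periodic "atmosphere": each row is one day's state.
trajectory = np.column_stack([np.sin(t), np.cos(t)]) + 0.01 * rng.normal(size=(2000, 2))

today = trajectory[-1]
past, past_next = trajectory[:-2], trajectory[1:-1]  # states and their successors

# Find the most similar past state (the "analogue") of today's state...
analogue = np.argmin(np.linalg.norm(past - today, axis=1))
# ...and take its successor as the forecast for tomorrow.
forecast = past_next[analogue]
print("forecast for tomorrow:", forecast)
```

For this highly predictable (low-dimensional) toy system the analogue forecast is accurate; for a complex, high-dimensional state, nearby analogues would diverge and the same recipe would perform much worse.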

Now imagine looking at a very long climate dataset, for example covering the last century. If the climate system is always the same, one would expect the trajectories for the first and second half of the century to be indistinguishable. If, however, the climate is changing, then one would expect the trajectories representing it to also change. To make an analogy, imagine taking your heart rate. If you measure it on two different days while you are at rest, the number of heart beats per minute will probably be about the same. In this case the system – which is here your body – is always in the same state. However, if one day you take your heart rate at rest and the following day you take it while you are running, the results will be very different. In this case something in the system has changed. In just the same way, if the climate system is changing, its “pulse” – namely the trajectory – will change with it. The trajectories of the two half-centuries in the dataset will therefore look different, and their local dimensions and stickiness will display different properties – for example a different mean value. The same two indicators that can help us improve weather predictions at daily to weekly timescales can therefore also help us to understand how climate varies across the centuries.

The dynamical systems approach can be applied to a wide range of scientific problems beyond the examples discussed above. These range from turbulence in fluids to the analysis of financial datasets. Picturing such a complex system as the atmosphere as a “spaghetti plot” is therefore an excellent example of an intuitive mathematical approach that can help us advance our knowledge of the world around us.

Edited by Célia J. Sapart.

Reference: Lorenz, E. N. (1963). Deterministic nonperiodic flow. J. Atmos. Sci., 20(2), 130-141.

Defrosting the freezer. Climate change and glacial meltwater


 Why are glaciers important?

Glaciers cover around 10% of the global land surface. This includes the large ice sheets (e.g. in Greenland and Antarctica) as well as smaller ice caps and valley glaciers (e.g. in Iceland, Norway and New Zealand). Figure 1 shows the current distribution of glaciers around the world.

Figure 1 – The global distribution of glaciers around the world from the GLIMS glacier database. Source: https://nsidc.org/glims/

 

Glaciers play an important role in moderating global and local climate, but they are very sensitive to changes in climatic conditions. Currently, around 90% of the world’s glaciers are retreating. Under current IPCC predictions of future global warming and climatic changes, many glaciers will have disappeared by 2100. Figure 2 shows the temperature for different parts of the globe in 2016 relative to average (‘normal’) values. Red and yellow colours mean that temperatures are hotter than usual, and it is clear that most of the world is warming. The Arctic is warming especially quickly, and is several degrees Celsius warmer than normal. Glaciers here will therefore be especially sensitive to climate change.

Figure 2 – Global average (mean) surface temperature January-June 2016 relative to long-term conditions. Red and yellow colours indicate higher temperatures than normal. Source: https://svs.gsfc.nasa.gov/12305

 

Glaciers contain around 75% of the world’s freshwater. Many of the world’s rivers are fed by meltwater from glaciers and mountain snowpacks. These include major rivers such as the Ganges and Brahmaputra, where meltwater from Himalayan glaciers and snow makes its way downstream and, together with river water from other sources such as monsoon rains, eventually supplies over 1 billion people.

 

 

What are the key issues?

As climate change continues, and global air temperature rise leads to enhanced glacier melt, there are a number of key considerations:

How will glaciers respond to climate change? – Will they disappear?

How will glacier melt affect water flow downstream?

How quickly might these changes happen?

 

How will glacier melt affect river systems?

Here we consider some of the impacts of glacier retreat on river flow, but there are also many other impacts, including changes to river water chemistry and impacts on ecosystems – the plants and animals living in and around the rivers.

  1. Turning on the tap

Increased glacier melt produces more meltwater, which means that rivers will have a higher flow and more water will be transported downstream. However, this situation is likely to last only temporarily, because…

2. Turning off the tap 

Eventually (usually over several decades or longer), if a glacier melts fully, there will be no meltwater feeding into rivers downstream. Some rivers that are fed by water from multiple sources (such as rainfall) do not rely on glacial meltwater and will not be greatly impacted by the disappearance of glaciers in their headwaters. Other rivers, especially those in mountain catchments, are supplied only by snow and ice melt. The disappearance of glaciers would therefore have major impacts on their water supply – the equivalent of turning off a tap. We know that many glaciers are melting rapidly, and some are predicted to disappear within the next few decades.

3. Changing lanes 

In some places, as a glacier retreats, the meltwater streams may change course entirely and flow in a different direction. This has been seen recently in Alaska, where meltwater from the Kaskawulsh glacier underwent a major transformation in its drainage pathway in the space of only four days. Meltwater previously flowed northwards, supplying the Slims River, but recent glacier retreat caused a shift in the drainage pathway: it is no longer favourable for the water to flow north, and the Slims has almost entirely disappeared. Instead, meltwater has been diverted towards the south, to the Alsek river. This event highlights that major transformations in glaciers and river systems, in response to climate change, can happen in the blink of an eye. See a full news report on the changes here and the full research article here.

4. The four seasons

Climate change can also affect seasonality – the timing and duration of the seasons in a year. For example, with increased global warming, we might expect some parts of the planet to experience a longer warm season. Climate change might also affect the duration and intensity of precipitation (e.g. rain and snowfall) events and storminess. Changes in seasonality are already being felt in some parts of the world. In some parts of the Arctic, the Spring melt season, and therefore the onset of river flow, is starting earlier than it has done in the past. Such changes will influence when and in what quantities meltwater is transported downstream. Continued monitoring of climatic conditions, glacier and river behaviour will allow us to more fully understand the changes that are occurring in glacial environments in response to global temperature rise.

 

In summary

  • We know that global climate change is influencing glacier behaviour. Some glaciers are responding rapidly to climate change – over years and decades – and many will have melted completely by 2100.
  • As glaciers melt they produce more meltwater, which increases the flow of river systems downstream.
  • But if glaciers melt entirely, the meltwater ‘tap’ will be switched off. This may have major impacts on river systems that rely on meltwater inputs – such as in high mountain regions where meltwater is the dominant source of river water.
  • We have seen recently in Alaska, that glacier retreat can cause meltwater drainage to change direction in a matter of days.
  • Understanding glacier and river response to climate change is therefore key for our ability to prepare for future scenarios.

 

Helpful resources

The following links provide information, data, graphics, and videos about glaciers, glacier melt, meltwater, and climate change. There is something suitable for all age groups.

National Snow and Ice Data Centre https://nsidc.org/

NOAA http://www.noaa.gov/

INTERACT Arctic Monitoring programmes http://www.eu-interact.org/

NASA Climate https://climate.nasa.gov/