GeoLog

How to forecast the future with climate models

Our climate is constantly changing, and with the help of simulation modelling, scientists are working hard to better understand just how conditions will change and how those changes will affect society. Science journalist Conor Paul Purcell worked on Earth System Models during his time as a PhD student and postdoctoral researcher; today he explains how scientists use these models as tools to forecast the future of our climate.

While we can’t predict everything about our future, climate scientists have a good understanding of what our environment will look and feel like in the coming years. Researchers and climate specialists predict that temperatures will increase dramatically in the 21st century, ranging between 1.5°C and 4°C above pre-industrial levels, depending on your location and the amount of carbon dioxide pumped into the atmosphere in the near future. Climate experts also provide forecasts of future drought and flood risk, at both regional and global scales.

Understanding how such features of Earth’s changing climate may manifest, and ultimately impact on our society, takes considerable international collaboration – a collaboration which is largely based around the results of climate modelling. That’s because climate predictions for the future are made using sophisticated computer models, which are built around mathematical descriptions of the physical and biological processes that govern our planet.

These models have become so complex in recent years that they are now referred to as Earth System Models (ESMs). Using ESMs, climate modellers can create simulations of the planet at different times in the future and the past. ESMs are in fact the only tools we have for simulating the global future in this sense. If we want to know how our climate may look one hundred years from now, how ocean acidification levels may change and how this might impact ocean life, or how plants will respond to increasing levels of atmospheric carbon dioxide, ESMs are the only tool available.

The models are built in components, each representing a separate part of the Earth system: the atmosphere, the ocean, the land surface and its vegetation, and the ice-sheets and sea-ice. These are constructed by coding each component with the mathematics that describes the environmental processes at work.

Climate models are systems of differential equations based on the basic laws of physics, fluid motion, chemistry, and biology. Pictured here is a schematic of a global atmospheric model. (Credit: NOAA, via Wikimedia Commons)

For example, the winds in the atmosphere are described by the mathematics of fluid motion. Model developers translate these mathematical equations into code that computers can understand, like giving them a set of instructions to follow. Supercomputers can then interpret the code to simulate how winds, for example, are expected to develop at each global location through time. The results are usually plotted on world maps.
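
To give a flavour of what this translation from mathematics to computer instructions involves, here is a deliberately minimal Python sketch (not code from any real ESM): it discretises a single one-dimensional advection equation onto a ring of grid cells circling the globe and steps it forward in time. The grid spacing, wind speed and time step are illustrative assumptions; real models solve far more complete three-dimensional equation sets for many interacting variables.

```python
# Minimal illustration: discretise a fluid equation onto a grid and
# step it forward in time. Here we only advect a temperature-like
# field around a single circle of latitude with a constant wind.
import numpy as np

n_points = 360      # one grid cell per degree of longitude
dx = 111e3          # metres per cell (~1 degree at the equator), an assumption
wind = 10.0         # constant westerly wind in m/s, an assumption
dt = 3600.0         # one-hour time step, in seconds

# Initial condition: a warm anomaly centred on longitude 180
longitudes = np.arange(n_points)
temperature = 15.0 + 10.0 * np.exp(-0.5 * ((longitudes - 180) / 20.0) ** 2)

def step(field, u, dt, dx):
    """One upwind finite-difference step of dT/dt = -u * dT/dx
    on a periodic (global) domain."""
    gradient = (field - np.roll(field, 1)) / dx  # upwind difference for u > 0
    return field - u * dt * gradient

for hour in range(240):  # simulate ten days
    temperature = step(temperature, wind, dt, dx)

# The anomaly has been carried eastward by the wind; a real model
# would now plot fields like this on a world map.
print(f"warm anomaly now peaks at longitude {temperature.argmax()}")
```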

As scientists have learned more about the Earth system over time, the complexity of these individual model components has been ramped up dramatically. For example, the land surface and vegetation components have become more sophisticated as plant biologists have learned more about how plants transfer water and carbon between the land and the atmosphere.

And it’s not just one giant solo project either: there are tens of ESMs and hundreds of subcomponent models developed and used at research centres around the globe. Collaboration between these facilities is a necessary part of progress, and information is shared at international conferences every year, like the American Geophysical Union’s Fall Meeting in the United States and the European Geosciences Union’s General Assembly in Europe.

This means that developments are always being made towards increasing the realism of ESMs. On the horizon, such developments will include increasing the resolution of the global models to improve accuracy at regional locations, and incorporating the results of the latest research in atmospheric science, oceanography and ice sheet dynamics. One example is research into plants, specifically how they interact with carbon dioxide and water in the atmosphere. Further understanding of this biological process is expected to increase the realism of models over the coming years and decades. In general, improvements to the accuracy of model simulations can help society prepare for the future. For example, models will be able to help predict how climate change may impact, say, water scarcity in South Africa, wildfire risk in the western United States, or crop yields in Asia. Indeed, the ESMs of the future should offer simulation and prediction capabilities unheard of today.

By Conor Purcell, a Science & Nature Writer with a PhD in Earth Science

Conor Purcell is a science journalist with a PhD in Earth Science. He is also the founding editor of www.wideorbits.com; he is on Twitter @ConorPPurcell, and some of his other articles can be found at cppurcell.tumblr.com.

Imaggeo on Mondays: Cumulonimbus, king of clouds

This wonderful mature thunderstorm cell was observed near the German Aerospace Center (DLR) Oberpfaffenhofen. A distinct anvil can be seen in the background, while a new storm cell grows in the foreground of the cumulonimbus structure. Mature storm cells like this are common in southern Germany during the summer season. Strong heat, ample moisture, and an unstable stratification of the atmosphere enable the development of this exciting weather phenomenon.

Description by Martin Köhler, as it first appeared on imaggeo.egu.eu.

Imaggeo is the EGU’s online open access geosciences image repository. All geoscientists (and others) can submit their photographs and videos to this repository and, since it is open access, these images can be used for free by scientists for their presentations or publications, by educators and the general public, and some images can even be used freely for commercial purposes. Photographers also retain full rights of use, as Imaggeo images are licensed and distributed by the EGU under a Creative Commons licence. Submit your photos at http://imaggeo.egu.eu/upload/.

Living in a new Age

If you were suddenly told you were living in a different time period, what would your immediate reaction be? Changes in the calendar – even if it’s just terminology – have proven emotive in the past. In 1752, when England shifted from the Julian to the Gregorian calendar and 11 days were cut from the year to catch up, there are suggestions that civil unrest ensued.

Once again, the name of the period in which we live has recently changed; the Holocene is now subdivided into three parts, and we’re now living in the Meghalayan age, according to the International Commission on Stratigraphy (ICS). While there weren’t riots in the streets this time, it has proved controversial for some researchers.

The division of time into different epochs and eras is an important part of stratigraphy. While time marches on, ignorant of the names humans give to its divisions, defining periods like the Cretaceous and Jurassic helps scientists compare results from around the world, even where the fossil and sedimentary records differ. It also draws into sharp focus the globally significant differences between each period, often including the devastating mass extinctions that mark the boundaries of a handful of these periods.

For at least a century, the Holocene has been the term favoured to describe the period in which we live, with its beginning marked by the end of the last ice age. The date at which the Holocene began has been more and more closely defined by experts over time, to the now accepted value of approximately 11,650 calendar years before present. The Holocene encompasses the emergence of human civilisation, and represents a period of relatively warm, somewhat stable climate in comparison with the prior ice age.

After considerable debate, however, the ICS has decided that the Holocene should be further subdivided: the period from 11,650 to 8,200 years before present is now the Greenlandian; the Northgrippian stretches from 8,200 to 4,200 years before present; and the Meghalayan defines the time between then and the present. Why did the Holocene need to be divided up as such? If it wasn’t broken, why fix it?
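
Because the new boundaries are fixed ages, the subdivision is easy to express as a simple lookup. The Python sketch below just encodes the boundary dates quoted above, in calendar years before present; it is an illustration of the scheme, not an official ICS tool.

```python
# Map an age in calendar years before present (BP) onto the Holocene
# stage it falls in, using the ICS boundary dates quoted above.
HOLOCENE_STAGES = [
    ("Greenlandian", 11_650, 8_200),   # end of the last ice age to the 8.2 ka cooling
    ("Northgrippian", 8_200, 4_200),   # 8.2 ka cooling to the 4.2 ka mega-drought
    ("Meghalayan", 4_200, 0),          # 4.2 ka mega-drought to the present
]

def holocene_stage(years_bp):
    """Return the Holocene stage name for an age in calendar years BP."""
    for name, start, end in HOLOCENE_STAGES:
        if end <= years_bp <= start:
            return name
    return "pre-Holocene (Pleistocene or earlier)"

print(holocene_stage(10_000))  # Greenlandian
print(holocene_stage(5_000))   # Northgrippian
print(holocene_stage(150))     # Meghalayan
```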

The International Commission on Stratigraphy (ICS) has updated the timeline for the earth’s full geologic history, dividing the Holocene into three distinct periods. What does that mean for the Anthropocene? (Credit: International Commission on Stratigraphy)

The distinctions between an ice age and a warmer period, also known as an interglacial period, are globally significant, and a good place to start when describing how Earth’s climate has changed over the past few hundred thousand years. The swings in global temperature and ice extent are large enough that we often ignore the subtler climate changes that occur within an interglacial or glacial period. However, sediment and fossil records from more recent eras are relatively well preserved (simply because those records have had less chance to be destroyed by other geological processes), and this enables us to explore more recent periods in finer detail. Looking within the Holocene, the transition between the Greenlandian and the Northgrippian is marked by a dramatic cooling of the climate, while the Northgrippian–Meghalayan transition is marked by an abrupt ‘mega-drought’ and cooling that affected the nascent agricultural societies developing at that time.

By dividing the Holocene into these bite-sized chunks, the ICS has drawn attention to these changes in the Earth’s geological system and provided a global context for the climatic shifts of the last ten thousand years. It also helps emphasise that climate can and does change on timescales more abrupt than glacial-interglacial cycles – something we need to remember when considering the likely effects of anthropogenic climate change.

So far, so scientific. So why have the changes upset some people? Well, there’s an elephant in the stratigraphic room that looms larger now that these changes have been officially ratified. If there’s anything that has marked out the Holocene as fundamentally different from other historical ages, it’s the growth of human society. In particular, we are now at a point in history where the actions of a specific species – humans – can have global effects on the stratigraphic record.

Humans have added large quantities of carbon dioxide to the atmosphere, sown radioactive isotopes across the oceans from nuclear bomb testing, and left waste deposits in environments from the top of Mt Everest to the middle of the Pacific Ocean. Many of these impacts could leave lasting traces in the sedimentary and fossil records, leading some scientists to call for a new period of time – the Anthropocene. And this may not fit well with the ICS changes.

I spoke with Helmut Weissert, President of the EGU Stratigraphy, Sedimentology and Palaeontology Division, about these changes, and he suggested that the new subdivisions devised by the ICS might shift the debate over the Anthropocene, at least in the short term:

I am quite worried. After the introduction of the new subdivisions I cannot see how the Holocene working group soon will vote for a further subdivision of the Holocene. The Anthropocene working group is confronted with a difficult task. I can envisage that the Anthropocene will be used as an informal term, not officially defined and introduced into the Stratigraphic Chart. I use the term regularly in my writing and in talks, everybody understands the term, I can explain how man is a geological agent. So, we may have to continue using an excellent term which is not yet properly defined, but most people do not care about the definition.

The Anthropocene is certainly an effective term to draw the attention of the wider public to the impact of society on global geological cycles. But from a stratigraphic perspective, it offers a number of challenges. Where and when, for example, should the beginning of the period be set? Changes in geological periods require specific chemical changes that can be identified globally, and an internationally agreed-upon reference point – a physical location – that defines the base of the section. There are many potential markers that could be chosen to define the beginning of human interference in the natural system: ice cores showing the uptick in carbon dioxide at the industrial revolution, or ocean sediments attesting to nuclear bomb tests in the 1950s. But the choice of which section to pick is fraught.

Each stratigraphic division needs a reference point that defines the split between the prior time period and the one in question. Here, a ‘golden spike’ defines the base of the Ediacaran period (635 million years ago) in the Flinders Ranges of South Australia. (Credit: Bahudhara via Wikimedia Commons)

Moreover, preservation is a crucial part of stratigraphy; how much of human impact will in fact be preserved, especially after further anthropogenic changes? What if we clean up the environment? What if we dredge the ocean floor for rare metals, and, in doing so, extirpate the signal of the 1950s nuclear bomb tests? What if we melt the ice caps that record the incipient CO2 increases from the industrial revolution? Sure, these changes may be recorded elsewhere, but how can we be sure a reference stratigraphic section will remain intact?

And this brings us to a perhaps more philosophical point: what if the human impact on the natural system we see today is only a fraction of what is to come? Any Anthropocene we define now would be based only upon the impact to date, but future changes may make these seem small in comparison. What would come after the Anthropocene? The question echoes that of 20th century philosophers, asking what comes after Post-Modernism? Perhaps instead of stratigraphy, we should look to written history and recorded data to better contextualise our impact.

Whether we end up defining our current era as the Meghalayan, the Anthropocene, or something else, it seems clear that the debate has drawn increased attention to the short-term climate changes – and in particular those driven by human intervention. A better public appreciation of our role within the natural system is a vital step in limiting damaging future climate change.

by Robert Emberson

Robert Emberson is a Postdoctoral Fellow at NASA Goddard Space Flight Center, and a science writer when possible. He can be contacted either on Twitter (@RobertEmberson) or via his website (www.robertemberson.com)

GeoPolicy: Bridging the gap between science and decision makers – a new tool for nuclear emergencies affecting food and agriculture

Amelia Lee Zhi Yi, Joint FAO/IAEA Division of Nuclear Techniques in Food and Agriculture

The International Atomic Energy Agency (IAEA) has developed an online system to help improve the response capabilities of authorities in the event of an emergency caused by natural hazards. The Decision Support System for Nuclear Emergencies Affecting Food and Agriculture (DSS4NAFA) provides a clear overview of radioactive contamination of crops and agricultural lands through improved data management and visualisation, and it assists in decision support processes by suggesting management actions to decision makers. In this interview, we have the pleasure of speaking with Ms Amelia Lee Zhi Yi, who works at the Joint FAO/IAEA Division of Nuclear Techniques in Food and Agriculture, about DSS4NAFA.

Nuclear Emergency Response (NER) for food and agriculture – why is that important and what does it entail?

In the event of a nuclear or radiological emergency, the response should be swift in the interest of human health. After ensuring the well-being of the population, it is necessary to prioritise the assessment of possible radioactive contamination of crops and agricultural lands to avoid the ingestion of radioactive material.

Proper data management, data visualisation and risk communication are essential for efficient response to a nuclear emergency. Factors that should be considered for such response include support for sampling and laboratory analysis, optimal allocation of manpower and analytical instruments, and integrated communication between stakeholders.

To make well-informed decisions on, for instance, planting and food restrictions, food safety authorities need to have a good understanding of the radiological conditions after a fallout event. This is accomplished through the collection of environmental samples, such as soil and plants, and food products, which are then analysed using consistent methods in qualified laboratories. Further, these data should be displayed in an intuitive manner so that authorities are able to interpret them under stressful, time-bound conditions. Finally, to reduce confusion and clearly communicate the decisions made by policymakers to the public, standardised communication protocols need to be implemented.

How can technology assist us in this process? What is DSS4NAFA?

Innovative information technology (IT)-based methods can assist in optimising processes in NER. Some examples include streamlining data transfer using cloud-based platforms paired with mobile technologies, facilitating decision making using advanced visualisation tools, and communicating risk to the public using pre-defined correspondence templates.

The Decision Support System for Nuclear Emergencies Affecting Food and Agriculture (DSS4NAFA) is a cloud-based IT-DSS tool developed by the Soil and Water Management & Crop Nutrition Laboratory, under the Joint FAO/IAEA Division of Nuclear Techniques in Food and Agriculture. While it was originally developed as a system for nuclear emergency response management and communication, its ability to discern data quality, its user-friendly spatio-temporal visualisations for decision makers, and the ease with which it creates communication materials make it a good candidate tool for natural hazard risk mitigation.

The beta version of DSS4NAFA is planned to be released in August 2018 for testing by volunteer member states.

General overview of how DSS4NAFA works. After a nuclear or radiological fallout event affecting food and agriculture, the system assists decision makers by allocating samplers and laboratories according to proximity, allows data to be input on a mobile device and sent to a cloud server immediately, and visualises data for intuitive decision making. (Source: FAO-IAEA)

How does DSS4NAFA support public authorities in emergencies?

DSS4NAFA contains modules that provide logistical support to decision makers in defining sampling locations, sampler allocation and laboratory allocation. It also functions as a powerful visual interpretation tool that brings together the multi-dimensional data typically needed to make decisions on planting and food restrictions in a nuclear emergency response situation. Some of the functionalities of the modules are listed below:

Data management:

  • Standardised data input with pre-determined data entry fields and format
  • Data housed within one server to ensure ease of data analysis
  • All data are collected in the field using mobile devices and sent directly to the server (a hypothetical record format is sketched below)
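
As a concrete illustration of such a standardised record, here is a hypothetical Python sketch; the field names, units and serialisation choice are assumptions made for illustration, not DSS4NAFA’s actual schema.

```python
# Hypothetical sketch of a standardised field-sample record with
# pre-determined entry fields. Field names and units are illustrative
# assumptions, not DSS4NAFA's actual schema.
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class SampleRecord:
    sample_id: str
    latitude: float            # decimal degrees
    longitude: float           # decimal degrees
    sample_type: str           # e.g. "soil", "plant", "food product"
    radionuclide: str          # e.g. "Cs-137"
    activity_bq_per_kg: float  # measured activity concentration
    collected_at: str          # ISO 8601 timestamp, UTC

# A mobile device could serialise each record and send it straight
# to the central ("one-house") server:
record = SampleRecord(
    sample_id="S-0001",
    latitude=48.08,
    longitude=11.28,
    sample_type="soil",
    radionuclide="Cs-137",
    activity_bq_per_kg=412.0,
    collected_at=datetime.now(timezone.utc).isoformat(),
)
print(json.dumps(asdict(record)))
```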

Data visualisation:

  • GIS-based visualisation for an instinctive understanding of the situation on the ground
  • “Logmap” for at-a-glance sampler and laboratory analyses status
  • Comprehensive information, such as current and historical decision actions, intuitively displayed on the Food Restriction Dashboard

Logistics and decision support:

  • Sampling assignments proposed based on crop calendar and land use type
  • Food and planting restrictions suggested based on the adjustable tolerance levels set by authorities (see the sketch after the dashboard figure below)
  • Public communication module

The Food Restriction Dashboard is a platform in DSS4NAFA that collates radioactivity information, taking into account the spatial distribution and time evolution of the accident, and suggests food and planting restrictions based on the level of risk and the specified tolerance levels. (Source: FAO-IAEA)
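
To illustrate the kind of threshold logic such a dashboard might apply, here is a hypothetical Python sketch; the tolerance value and the decision rule are illustrative assumptions, not the actual rules implemented in DSS4NAFA.

```python
# Hypothetical sketch of threshold-based decision support: suggest a
# restriction when measured activity exceeds an adjustable tolerance
# level set by the authorities. Values and rules are illustrative.
TOLERANCE_BQ_PER_KG = {
    "Cs-137": 100.0,  # example tolerance level; authorities can adjust it
}

def suggest_restriction(radionuclide, activity_bq_per_kg):
    """Return a suggested action for a single measured sample."""
    limit = TOLERANCE_BQ_PER_KG.get(radionuclide)
    if limit is None:
        return "no tolerance level set - refer to decision makers"
    if activity_bq_per_kg > limit:
        return f"suggest restriction ({activity_bq_per_kg:.0f} Bq/kg > {limit:.0f} Bq/kg)"
    return "below tolerance level - no restriction suggested"

print(suggest_restriction("Cs-137", 412.0))
```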

What feedback did you get from real users during the design/development of the DSS?

The development of DSS4NAFA was highly iterative and findings from the process were invaluable. Some lessons learned during its development include the necessity for stakeholder involvement during the design process, the usage of a “one-house approach” for centralised data, and the importance of building a tool that is flexible enough to be used during emergency response and routine monitoring operations.

The system has generated a lot of interest when shown at several IAEA workshops and at conferences such as the EGU General Assembly, indicating the need for this type of system.

What do you think will be the main challenges in the application of the DSS4NAFA?

Two challenges are foreseen in the deployment of DSS4NAFA. The first is to explain the benefits of the system to countries with pre-existing Nuclear Emergency Response systems. We are confident that we can succeed, as DSS4NAFA is modular and Member States can therefore select and implement the components that best suit their needs.

Secondly, there could be a learning curve associated with the implementation of DSS4NAFA. To facilitate this process for governmental data analysts, user experience will be one of the major focuses for improvement during the beta testing phase. We strive to develop DSS4NAFA so that the system is intuitive enough to be used to its fullest potential, even with minimal prior training.

The development of DSS4NAFA is part of the Joint FAO/IAEA Division Mandate in Preparedness and Response to Nuclear and Radiological Emergencies Affecting Food and Agriculture to promote the management of intra- and interagency emergency preparedness and response to nuclear accidents and radiological events affecting food and agriculture, including in the application of agricultural countermeasures.

by Jonathan Rizzi, Norwegian Institute of Bioeconomy Research

Jonathan Rizzi is the incoming ECS representative for the EGU’s Natural Hazards division. He holds a bachelor’s degree in GIS and Remote Sensing, and a master’s degree and a PhD in Environmental Sciences. He is a researcher at the Norwegian Institute of Bioeconomy Research and has worked in the field of climate change and risk assessment for the last several years.

Editor’s Note: This post first appeared on the EGU Natural Hazards (NH) Division blog. Read the original post here.