Geodynamics 101

How good were the old forecasts of sea level rise?


Professor Clint Conrad

The Geodynamics 101 series serves to showcase the diversity of research topics and methods in the geodynamics community in an understandable manner. We welcome all researchers – PhD students to Professors – to introduce their area of expertise in a lighthearted, entertaining manner and touch upon some of the outstanding questions and problems related to their fields. Our latest entry for the series is by Clinton P. Conrad, Professor of Geodynamics at the Centre for Earth Evolution and Dynamics (CEED), University of Oslo. Clint’s post reflects on the predictions of sea level rise since the first Intergovernmental Panel on Climate Change (IPCC) report in 1990 and the near three decades of observations and IPCC projections that have been made since then. Do you want to talk about your research area? Contact us!

This past week I flew over the North Atlantic on a direct flight from Europe to California. From the plane we had a beautiful view of glaciers on the western edge of the Greenland ice sheet, where the ice seems to be disintegrating into the ocean. We’ve been hearing lately that the ice sheets are slowly disintegrating – is this what that looks like? Using my mobile phone’s camera, I took a photo of the glacier that happened to be visible from my seat and compared it to images of the same glacier saved in Google Earth (Figure 1). This is an interesting exercise if you like looking at glaciers, but I can’t learn much about the overall dynamics of the ice sheet this way.

Figure 1. A glacier on the west coast of Greenland on September 2, 2017 (left) taken with my iPhone. From my plane’s in-flight entertainment system, it seems that this glacier is between the villages of Upernavik and Niaqornat. For comparison, the image on the right is a screenshot of the same glacier from Google Maps.

Actually, we’ve been worried about ice sheet melting – and the sea level rise with it – for decades. I re-realized this during the past summer, as I finally started unpacking the boxes that we shipped from Hawaii to Oslo one year ago. Some of these boxes probably didn’t need to be unpacked, like the one labeled “High School Junk”, but it turns out there is interesting stuff in there! Here was my diploma, a baseball glove, some varsity letters, and a pile of old schoolwork – most of which I have no recollection of creating. But I did remember one of the items – a report on global warming that I wrote for Social Science class in 1989. In particular, I remember being fascinated by the prediction that human activity would eventually cause enough sea level rise to flood land areas around the world. For years, I have been personally crediting that particular high school report as being my first real introduction to the geosciences – but until this past summer I had never revisited that report to see what I actually wrote at the time. Now here it is – twelve yellowed pages of dot-matrix type, with side perforations still remaining from the printer feed strips that I tore off 28 years ago.

My report is entitled “Global Warming – What Must Government Do?” and now I can see that it is mostly a rehashing of reporting from a bunch of newspaper articles written in 1989. It was a bit disappointing that my younger self wasn’t more creative or inspirational, but the content of the report – really the content of the newspaper articles from 1989 – is fascinating because much of the material could have been written today. There is discussion of how the warmest years in recorded history have happened only recently, that climate skeptics were unwilling to attribute recent changes to human activity, and that a few obstinate countries (then, it was Japan, the USSR, and the USA) were standing in the way of international agreements to curb CO2 emissions. Another statement is also familiar: that “oceans could rise from 1.5 to 6.5 feet”. For those of you not familiar with that measurement system, that is about 0.5 to 2.0 meters! I know that recent predictions are not quite as dire as 2 m of rise (at least in the 2100 timeframe), although sea level acceleration has been getting more attention lately. Did people in 1989 consider 2 m of sea level rise a possibility? I checked the cited New York Times article from 1989, and indeed it seems that I dutifully reported the estimate correctly. The article says that 1.5 to 6.5 feet of sea level rise is expected “to occur gradually over the next century affecting coastal areas where a billion people, a quarter of the world’s population, now live”.
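For readers who want that conversion spelled out (1 ft = 0.3048 m):

1.5 ft × 0.3048 m/ft ≈ 0.46 m,  6.5 ft × 0.3048 m/ft ≈ 1.98 m

so “about 0.5 to 2.0 meters” is indeed the right rounding.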

Figure 2. Projections of sea level in 2100 (relative to 1990 sea level) for the five IPCC reports between 1990 and 2013, plotted as a function of IPCC report date. Shown are the minimum and maximum projections (range of red bars), and the mean of estimates (black circles).

I have contributed a little to sea level research in the intervening years, and am somewhat familiar with the current predictions. I know that the most recent (2013) report of the Intergovernmental Panel on Climate Change (IPCC) predicts up to about a meter of sea level rise by 2100, which was a large increase over the 2007 report that predicted up to about 0.6 meters. Thus, meter-scale sea level rise predictions seemed like a relatively recent development, and yet here was a prediction just as large from nearly 30 years ago. What did the IPCC have to say about sea level at the time?

I plotted the sea level projections of the five reports that the IPCC has released between 1990 and 2013 (Figure 2). Indeed, the 1990 report predicted slightly higher sea level for the year 2100 (31-110 cm higher) than did the most recent report from 2013 (28-98 cm higher). In fact, the IPCC projections for 2100 sea level declined from 1990 through 2007, until they increased again in the most recent report in 2013 (Figure 2). Why is this? Well, we have nearly 3 decades of observations that could help us to answer this question!
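If you want to reproduce a bare-bones version of Figure 2, here is a minimal matplotlib sketch. The 1990 and 2013 ranges are the ones quoted above; the ranges I have filled in for the 1995, 2001, and 2007 reports (20-86 cm, 9-88 cm, and 18-59 cm) are the summary ranges commonly quoted from those reports, included here for illustration only – check the reports themselves before reusing them. The black circles in the real Figure 2 show the mean of estimates; here I simply use the midpoint of each range.

```python
# Bare-bones sketch of Figure 2: IPCC projections of sea level in 2100,
# plotted against report date. 1990 and 2013 ranges are quoted in the
# text; the 1995, 2001, and 2007 ranges are approximate summary values
# from those reports, included for illustration only.
import matplotlib.pyplot as plt

reports = [1990, 1995, 2001, 2007, 2013]
low     = [31, 20, 9, 18, 28]    # minimum projection (cm above 1990)
high    = [110, 86, 88, 59, 98]  # maximum projection (cm above 1990)
mid     = [(lo + hi) / 2 for lo, hi in zip(low, high)]  # midpoint of range

fig, ax = plt.subplots()
ax.vlines(reports, low, high, color="red", lw=4, label="projected range")
ax.plot(reports, mid, "ko", label="midpoint of range")
ax.set_xlabel("IPCC report year")
ax.set_ylabel("Projected sea level in 2100 (cm above 1990)")
ax.legend()
plt.show()
```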


Figure 3. Sea level projection from the IPCC’s first assessment report (1990), showing that report’s low, best, and high estimates (blue lines) and predicted rates in mm/yr. Also shown is the University of Colorado sea level time series (red line), which is based on satellite altimetry observations from 1992-2016 and records a sea level rise rate of 3.4 ± 0.4 mm/yr.

First, let’s evaluate the initial predictions of the first IPCC report from 1990. Since 27 years have passed since the publication of that report, we can actually compare a sizeable fraction of those 1990 predictions to actual sea level observations. In Figure 3, I have plotted the 1990 report’s sea level projection for 1990-2100 (Fig. 9.6 of that report) along with actual sea level observations made using satellite altimetry between 1992 and 2016, which have been nicely compiled by the University of Colorado’s Sea Level Research Group. The comparison shows that the actual sea level change over the past 24 years has fallen slightly below the “best” estimate of the 1990 report, and well above the “low” estimate.

In retrospect, the 1990 predictions of future sea level change seem rather bold, because the 1990 IPCC report also concludes that “the average rate of rise over the last 100 years has been 1.0-2.0 mm/yr” and that “there is no firm evidence of accelerations in sea level rise during this century”. Yet the 1990 report’s projection of 2.0-7.3 mm/yr of average sea level rise from 1990-2030 (Figure 3) represents a prediction that sea level rise would accelerate almost immediately – and this acceleration actually happened! Indeed, three recent studies (Hay et al., 2015; Dangendorf et al., 2017; Chen et al., 2017) have confirmed sea level acceleration after about 1990.
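A quick back-of-the-envelope check makes the point, using only the numbers quoted in this post: the observed altimetry rate of 3.4 mm/yr since 1992 against the 1990 report’s projected 1990-2030 range of 2.0-7.3 mm/yr.

```python
# Back-of-the-envelope comparison of the 1990 IPCC projection with the
# altimetry record, using only the rates quoted in this post.
rate_low, rate_high = 2.0, 7.3   # 1990 report, projected mean rate 1990-2030 (mm/yr)
rate_obs = 3.4                   # observed altimetry rate, 1992-2016 (mm/yr)
years = 2016 - 1992              # length of the altimetry record

print(f"Observed rise over {years} yr: {rate_obs * years:.0f} mm")
print(f"1990 projected range over the same interval: "
      f"{rate_low * years:.0f}-{rate_high * years:.0f} mm")
# ~82 mm observed, against a projected 48-175 mm: within the projected
# range, below the 'best' estimate but well above the 'low' one.
```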

Thus, the IPCC’s 1990 sea level projection did a remarkably good job for the first three decades of its prediction timetable, and the next 8 decades don’t seem so unreasonable as a result. What did the 1990 report do right? Here the 1990 IPCC report helps us again, by breaking down its projection into contributions from four factors: thermal expansion of seawater due to warming, the melting of mountain glaciers, and changes in the mass of each of the two great ice sheets, in Greenland and Antarctica. The 1990 report predicts the sea level change caused by each of these factors for the 45-year timeframe of 1985-2030, and I have plotted these predictions as rates (in mm/yr) in Figure 4. Thermal expansion and deglaciation in mountainous areas were predicted to be the largest contributors. Greenland was predicted to contribute only slightly, and Antarctica was predicted to gain ice, resulting in a drop in sea level.

Figure 4. Comparison of projections and observations of the various factors contributing to global mean sea level rise (GMSL, in mm/yr). Red bars show predictions made in 1990 (table 9.10 of the 1990 IPCC report) for the 45-year period 1985-2030 (the range is given by the red bars and the best estimate is shown with a dark line). Blue bars show the actual contribution from each factor for the 17-year period 1993-2010, as detailed in table 13.1 of the 2013 IPCC report. Note that both the sum of observed contributions and the direct observation of sea level change from satellite altimetry (bottom two blue bars) are consistent, within uncertainty, with recent analyses of tide gauge data (Hay et al., 2015; Dangendorf et al., 2017).

Now, 27 years later, we have actual observations of the world’s oceans, glaciers, and ice sheets that we can use to evaluate the predictions of the 1990 report. Many of these observations are based on measurements made using satellites, which can now remotely measure ocean temperatures, changes in the mass of land ice (mountain glaciers and ice sheets), and even changes in groundwater volumes over time. The IPCC report from 2013 (the most recent report) shows these contributions for the timeframe 1993-2010, a 17-year window within the 45-year outlook of the IPCC’s 1990 report. I have plotted these observations in Figure 4, and we can see how the 1990 predictions compare so far – remembering that the prediction and observation timescales do not exactly align.

First, we see that the 1990 report overpredicted the contribution from thermal expansion, and slightly overpredicted the contribution from mountain glaciers. Of course, there is still time before 2030 for these factors to increase further toward the predictions made in 1990. However, we also see that Greenland melting has already matched the 1990 report’s prediction for 2030, and that the predicted sea level drop from Antarctica did not materialize – by 2010 Antarctica had contributed almost as much sea level rise as Greenland (Figure 4). Furthermore, there is another significant contributor to sea level rise – land water, which represents the transfer of liquid water from the continents into the oceans. This occurs because groundwater that is mined for human activities eventually ends up in the ocean. According to the 2013 report, land water caused more sea level rise than ice sheet melting from Antarctica.

Thus, in 2010 the predicted rates of sea level rise from two factors (thermal expansion and mountain glaciers) had not yet reached the 2030 predictions of the 1990 report, but the contributions from Greenland, Antarctica, and land water loss have already nearly met or exceeded the predictions of 1990. Indeed, recent satellite observations between 2002 and 2014 show an acceleration of melting in Antarctica (Harig et al., 2015) and especially in Greenland (Harig et al., 2016). The recognition that Antarctica and Greenland may contribute significantly more to sea level rise in the future compared to earlier estimates is reflected in the 2013 IPCC report (Figure 2).

Figure 5. A dike near the town of Putten in the Netherlands, where the recent EGU-sponsored “Nethermod” meeting was held in late August 2017. This dike is one of many in the Netherlands that protect land lying below sea level (left) from higher water levels (right).

So far, it seems that the IPCC’s 1990 sea level projection has stood the test of 27 years remarkably well (Figure 3). It is rather disheartening to realize that we are on track for the ~60 cm of sea level rise that the 1990 report predicted for the year 2100, or more if the early underestimates of ice sheet contributions prove to be more significant than any overestimates of thermal expansion (Figure 4). Looking at my own high school report from the same time, it is also disappointing to realize that the warmest years in recorded history have again happened only recently, that climate change skeptics are still unwilling to attribute recent changes to human activity, and that there are still obstinate countries (well, one country) standing in the way of international agreements to curb CO2 emissions. On the other hand, high school students writing reports on this topic today will likely find discussions of dropping beachfront real estate prices, governmental planning for future sea level rise, and engineering techniques for managing it (Figure 5). I hope that these students save copies of their reports in a format that they can examine decades later, because it is interesting to consider how predictions of future sea level rise have changed over time, and how society has been responding to the challenges of this geodynamic phenomenon that is operating on the timescale of a human lifetime. One day in the 2040s these students may want to scrutinize another quarter century of data against the projections of the next IPCC report, to be completed by 2022. I wonder what they will find?


The world’s largest magnet


The Geodynamics 101 series serves to show the diversity of topics and methods in the geodynamics community in an understandable manner for every geodynamicist. PhD students, postdocs, full professors, and everyone in between can introduce their field of expertise in a lighthearted, entertaining manner and touch on some of the outstanding questions and problems related to their method of choice.
This week Maurits Metman, PhD student at the Deep Earth Research group at the University of Leeds in the United Kingdom, explains the dynamics of the core. Do you want to talk about your research area? Contact us!

Rock bottom

Approximately 3,000 km below our relatively minuscule feet lies the Earth’s core. It is our planet’s innermost and therefore most secluded region. It is also the primary source of the Earth’s magnetic field that we observe here at the surface. With its dynamics, composition, magnetic field generation, and thermal history not yet completely understood, the core remains among the most enigmatic parts of the Earth. It has been established that the core can be partitioned into an inner and an outer region, which have distinct physical and chemical properties. For example, the two regions are in different states of matter: the inner core is solid and the outer core liquid. Therefore, it is the outer core that is of particular geodynamical interest – here we will touch upon some important aspects of the dynamics that take place within it.

The outer core consists of an electrically conducting liquid iron alloy, which circulates throughout the outer core volume. In terms of the forces that drive these motions, there are similarities to the dynamics of other terrestrial systems such as the mantle, oceans, and atmosphere. For example, in all cases gravity drives convection: relatively hot and buoyant material at the base of the system rises towards the surface, while elsewhere colder, denser material sinks. Additionally, the flows in these systems are subject to forces arising from pressure differences and from the deformation of the material.

Figure 1: An impression of convection in the outer core (not to scale), which is aligned along columnar rolls, and flow in- and outside the tangent cylinder is separated (Credit: United States Geological Survey).

Nevertheless, the dynamics of the outer core are certainly different from other geophysical flows. For one, it is estimated that a typical velocity for outer core flow is U ∼ 10⁻¹ mm s⁻¹, which is very fast by solid-Earth standards. In fact, recent work has shown that these velocities may locally be as high as roughly 1 mm s⁻¹ (Livermore et al., 2016). Additionally, rotational effects (e.g. the centrifugal force and the Coriolis effect) have a tremendous impact on the style of convection. This is not the case in the mantle, for example, where flow velocities are comparatively low and the viscosity is large. In this respect, the so-called Taylor-Proudman theorem provides an important constraint on the style of core motions: it states that in a rapidly rotating system the flow is two-dimensional, in the sense that it cannot vary in the direction parallel to the axis of rotation. More generally, convection in the outer core is strongly cylindrical, with flow organised into ‘columnar rolls’ aligned with the rotation axis (Fig. 1).
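Where does this constraint come from? In a rapidly rotating, low-viscosity fluid, the leading-order force balance is between the Coriolis force and the pressure gradient (geostrophic balance). Taking the curl of this balance for an incompressible fluid with constant rotation vector Ω eliminates the pressure and leaves the Taylor-Proudman condition:

2ρ Ω × u = −∇p   (geostrophic balance)

(Ω · ∇) u = 0   (the flow u cannot vary along the rotation axis)

This is only a sketch of the leading-order balance; in the real core, buoyancy, inertia, viscosity, and the Lorentz force all perturb it, which is precisely what makes the dynamics interesting.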

Magnetic soup

With its ability to generate a magnetic field, the outer core further distinguishes itself from other parts of the Earth. That this field must indeed be generated somewhere inside the Earth was already demonstrated by Gilbert (1600), but the fact that it is linked to core fluid flow remained unknown for centuries. We now know that the convective motion of the electrically conducting outer core liquid generates such a magnetic field. This conversion of kinetic to magnetic energy is a process that has fittingly been coined the geodynamo.

What clues do we have that this field must be generated internally? A relatively simple argument can be made from the age of the magnetic field, which paleomagnetic observations have shown to be over 10⁹ years. If there were no field generation, however, the present-day field would decay through simple diffusion (or, equivalently, through Joule heating of the fluid) on a timescale of 10⁵ years, which is inconsistent with these observations. Therefore, some field generation in the outer core is required to sustain the magnetic field against diffusion, and this can be accomplished by a suitable core fluid flow.
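To see where that 10⁵ year figure comes from, we can make a rough estimate. Assuming a magnetic diffusivity of η ≈ 1 m² s⁻¹ (a value often quoted for liquid iron at core conditions, not stated in this post) and taking the core radius L ≈ 3.5 × 10⁶ m as the length scale, the diffusive decay time is:

τ ∼ L² / η ≈ (3.5 × 10⁶ m)² / (1 m² s⁻¹) ≈ 1.2 × 10¹³ s ≈ 4 × 10⁵ yr

More careful treatments include geometric factors that lower this estimate somewhat, but the order of magnitude is robust – and it falls far short of 10⁹ years.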

Initially, some rejected the existence of such a flow. One well-known so-called anti-dynamo theorem is due to Cowling (1933), who showed that an axisymmetric magnetic field can never be maintained indefinitely by dynamo action. Results of this kind led to a general consensus that sustained dynamo action through fluid flow would not be possible.

Figure 2: A schematic of magnetic field generation through the α-effect at different timesteps ti. Here, u, B and j represent fluid velocity, magnetic field and electric current density.

The development of mean-field theory, which describes how small-scale flow perturbations can on average create a large-scale magnetic field, changed this. One example of such field generation is the α-effect (Parker, 1955), in which a rising and rotating flow (imagine a corkscrew-shaped motion) moves and twists a magnetic field line (Fig. 2). The magnetic loops created this way induce an electric current parallel to the field, which in turn generates a secondary magnetic field perpendicular to the initial field. A similar conversion the other way around is also possible, and a planetary dynamo that relies on these two processes is considered to be of the α²-type. Another source of field generation that follows from mean-field theory is the ω-effect: the bending of magnetic field lines by differences in rotation rate, which also creates magnetic field in a direction perpendicular to the initial one (Fig. 3). A dynamo that generates its magnetic field through the α- and ω-effects is referred to as an αω-dynamo.
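Both effects can be made precise in the mean-field induction equation, which governs the evolution of the averaged field ⟨B⟩. In a standard form (with ⟨u⟩ the mean flow, α the coefficient parameterising the small-scale helical motions of Fig. 2, and any turbulent diffusivity absorbed into η for simplicity):

∂⟨B⟩/∂t = ∇ × (⟨u⟩ × ⟨B⟩) + ∇ × (α⟨B⟩) + η∇²⟨B⟩

The ω-effect lives in the first term on the right (shear of the mean flow stretching the field), the α-effect is the second term, and the last term is the diffusion that the dynamo must outpace.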


Therefore, it is quite clear that core flow and Earth’s internal magnetic field are deeply intertwined. As mentioned earlier, field generation through fluid motion and field diffusion are two competing processes that control time variations in the field. The magnetic Reynolds number is the ratio of the magnitudes of these contributions, and is defined as:

Figure 3: A schematic of the ω-effect, which converts the magnetic field from the initial direction (aligned South-North) to a secondary direction (West-East and vice versa), at different timesteps ti. The solid and dashed curves represent the magnetic field and the rotation axis, respectively.

Rm = UL / η

where η is the magnetic diffusivity and L is a length scale for the magnetic field (Roberts and Scott, 1965). For the outer core it is estimated that Rm ∼ 10², and therefore the diffusion term is often considered negligible (at least for relatively large length scales). This is referred to as the frozen-flux approximation. As the name suggests, magnetic field lines are then dynamically ‘frozen’ into the liquid, so that they evolve as though they were material line elements. How realistic is this approximation? From the above equation it should be clear that frozen flux can break down when the typical length scale decreases. This may, for example, be the case during flux expulsion, i.e. when radially expelled field is concentrated below the core-mantle boundary (Bloxham, 1986). This concentration locally increases the gradient of the field, enhancing radial diffusion. However, to what extent this process is realistic remains a subject of debate.
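Plugging in representative numbers – the flow speed U ∼ 10⁻¹ mm s⁻¹ quoted earlier, a length scale of order the depth of the outer core (L ∼ 2 × 10⁶ m), and the same assumed diffusivity η ≈ 1 m² s⁻¹ as above – gives:

Rm ≈ (10⁻⁴ m s⁻¹) × (2 × 10⁶ m) / (1 m² s⁻¹) ≈ 2 × 10²

consistent with the Rm ∼ 10² estimate. Note, though, that halving the length scale halves Rm, which is why diffusion can re-enter the picture at small scales.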

Forecast: reversals?

For the last two decades, advances in computing power have allowed numerical models to reproduce certain properties of the Earth’s magnetic field. For example, such models have been shown to exhibit magnetic polarity reversals (Glatzmaier & Roberts, 1995) and values of Rm similar to the outer core’s. Despite this numerical success, and despite the fact that reversals have been documented extensively within the field of paleomagnetism, it remains unknown what physical process underlies these phenomena. This is particularly interesting as the most recent reversal occurred around 0.78 Myr ago, which has led to speculation that a future reversal is imminent. Future numerical work, increases in computing power, better theoretical understanding of the internal dynamics of the core, and more geomagnetic observations may in time provide a physical explanation for these events.

References
Bloxham, J. (1986). The expulsion of magnetic flux from the Earth’s core. Geophysical Journal International, 87(2):669-678.
Cowling T. G. (1933) The magnetic field of sunspots. Monthly Notices of the Royal Astronomical Society 94: 39-48.
Gilbert W. (1600) De Magnete. London: P. Short.
Glatzmaier G. and Roberts P. (1995) A three-dimensional self-consistent computer simulation of a geomagnetic field reversal. Nature 377: 203-209.
Livermore, P. W., Hollerbach, R. and Finlay C. C. (2016). An accelerating high-latitude jet in Earth's core. Nature Geoscience 10: 62-68.
Parker E. N. (1955) Hydrodynamic dynamo models. Astrophysical Journal 122: 293-314.
Roberts, P. H. and Scott, S. (1965). On Analysis of the Secular Variation. 1. A Hydromagnetic Constraint: Theory. Journal of Geomagnetism and Geoelectricity, 17(2):137-151.

Don’t be a hero – unless you have to


The Geodynamics 101 series serves to show the diversity of topics and methods in the geodynamics community in an understandable manner for every geodynamicist. PhD students, postdocs, full professors, and everyone in between can introduce their field of expertise in a lighthearted, entertaining manner and touch on some of the outstanding questions and problems related to their method of choice.
This week Dr. Cedric Thieulot, assistant professor at the Mantle dynamics & theoretical geophysics group at Utrecht University in The Netherlands, discusses the advantages and disadvantages of writing your own numerical code. Do you want to talk about your research area? Contact us!

In December 2013, I was invited to give a talk about the ASPECT code at the American Geophysical Union conference in San Francisco. Right after my talk, Prof. Louis Moresi took the stage and gave a talk entitled: Underworld: What we set out to do, How far did we get, What did we Learn?

The abstract went as follows:

Underworld was conceived as a tool for modelling 3D lithospheric deformation coupled with the underlying / surrounding mantle flow. The challenges involved were to find a method capable of representing the complicated, non-linear, history dependent rheology of the near surface as well as being able to model mantle convection, and, simultaneously, to be able to solve the numerical system efficiently. […] The elegance of the method is that it can be completely described in a couple of sentences. However, there are some limitations: it is not obvious how to retain this elegance for unstructured or adaptive meshes, arbitrary element types are not sufficiently well integrated by the simple quadrature approach, and swarms of particles representing volumes are usually an inefficient representation of surfaces.

Aside from the standard numerical modelling jargon, Louis used a term during his talk which I thought at the time had a nice ring to it: hero codes. In short, I believe he meant codes written essentially by one or two people who at some point in time put great effort into writing a code (usually choosing a range of applications, a geometry, a number of dimensions, a particular numerical method to solve the relevant PDEs(1), and a tracking method for the various fields of interest).

In the long list of hero codes, one could cite (in alphabetical order) CITCOM [1], DOUAR [8], FANTOM [2], IELVIS [5], LaMEM [3], pTatin [4], SLIM3D [10], SOPALE [7], StagYY [6], SULEC [11], and Underworld [9] – and I apologise to all other heroes out there whom I may have overlooked. And who does not want to be a hero? The Spiderman of geodynamics, the Superwoman of modelling?

Louis’ talk echoed my thoughts on two key choices we (computational geodynamicists) are facing: Hero or not, and if yes, what type?

Hero or not?

Speaking from experience, it is an intense source of satisfaction when peer-reviewed published results are obtained with the very code one has painstakingly put together over months, if not years. But is it worth it?

On the one hand, writing one’s own code is a source of deep learning, a way to ensure that one understands the tool and knows its limitations, and a way to ensure that the code has the appropriate combination of features necessary to answer the research question at hand. On the other hand, it is akin to a journey: a rather long-term commitment and a sometimes frustrating endeavour, with no guarantee of success. Let us not deny it – many a student has started with one code only to switch to plan B sooner or later. Ultimately, this path yields a satisfactory tool that often has little to no life beyond the five-year mark, scarce (if any) documentation, and almost no compliance with the growing expectation of long-term reproducibility. Furthermore, the resulting code will probably bear the marks of its not-all-knowing creator in its DNA, and is likely to be neither optimal nor efficient by modern computational standards.

This brings me to the second choice: elegance and modularity, or tailored code and raw performance? Should one develop a code within a very broad framework using as many external libraries as possible, or is there still space for true heroism?

It is my opinion that the answer to this question is: both. The current form of heroism no longer lies in writing one’s own FEM(2)/FDM(3) packages, meshers, or solvers from scratch, but in cleverly taking advantage of state-of-the-art packages such as p4est [15] for adaptive mesh refinement, PETSc [13] or Trilinos [14] for solvers, StGermain [17] for particle tracking, and deal.II [12] or FEniCS [16] for FEM – and in sharing the resulting codes through platforms such as SVN, Bitbucket, or GitHub.
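To make this concrete, here is a minimal sketch of that “standing on shoulders” style, using the legacy FEniCS Python interface (dolfin) to solve the classic Poisson tutorial problem – not a geodynamic model, just an illustration of how a handful of lines buys you meshing, finite element assembly, and linear solvers that would each take months to write from scratch.

```python
# A minimal Poisson solve with legacy FEniCS (dolfin) -- the classic
# tutorial problem, shown only to illustrate how much infrastructure
# (meshing, FEM assembly, linear solvers) a modern library provides.
from dolfin import *

mesh = UnitSquareMesh(16, 16)                  # built-in mesher
V = FunctionSpace(mesh, "Lagrange", 1)         # P1 finite elements

u_exact = Expression("1 + x[0]*x[0] + 2*x[1]*x[1]", degree=2)
bc = DirichletBC(V, u_exact, "on_boundary")    # Dirichlet boundary condition

u, v = TrialFunction(V), TestFunction(V)
f = Constant(-6.0)                             # manufactured source term
a = dot(grad(u), grad(v)) * dx                 # weak form, assembled for us
L = f * v * dx

u_h = Function(V)
solve(a == L, u_h, bc)                         # the library's solvers do the heavy lifting
print("max nodal value:", u_h.vector().max())
```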

In reality, the many different ways of approaching the development or use of a (new) code are linked to the diversity of individual projects, but ultimately anyone who dares to touch a code (let alone write one) is a hero in his or her own right. And although (super)heroes can be awesome on their own, they often complement each other, team up, and join forces for maximum efficiency. Let us all be heroes, then, and join efforts to serve Science to the best of our abilities.

Abbreviations
(1) PDE: Partial Differential Equation (2) FEM: Finite Element Method (3) FDM: Finite Difference Method

References
[1] Zhong et al., JGR 105, 2000; [2] Thieulot, PEPI 188, 2011; [3] Kaus et al., NIC Symposium proceedings, 2016; [4] May et al., CMAME 290, 2015; [5] Gerya and Yuen, PEPI 163, 2007; [6] Tackley, PEPI 171, 2008; [7] Fullsack, GJI 120, 1995; [8] Braun et al., PEPI 171, 2008; [9] http://www.underworldcode.org/; [10] Popov and Sobolev, PEPI 171, 2008; [11] http://www.geodynamics.no/buiter/sulec.html; [12] Bangerth et al., J. Numer. Math., 2016; http://www.dealii.org/; [13] http://www.mcs.anl.gov/petsc/; [14] https://trilinos.org/; [15] Burstedde et al., SIAM Journal on Scientific Computing, 2011; http://www.p4est.org/; [16] https://fenicsproject.org/; [17] Quenette et al., Proceedings 19th IEEE, 2007