
The Sassy Scientist – PhD angst

Every week, The Sassy Scientist answers a question on geodynamics, related topics, academic life, the universe or anything in between with a healthy dose of sarcasm. Do you have a question for The Sassy Scientist? Submit your question here.

Iris asks:


Will I ever finish my PhD?


Dear Iris,

Most researchers won’t admit it publicly, but they all had doubts while trying to complete their PhD research. Sometimes the daunting task may seem impossible: why did I ever think I was smart enough to graduate and become a Doctor of Philosophy? There are more than enough reasons to throw yourself into a depression: ferocious comments on the first version of a paper manuscript, a stumbling and embarrassing presentation at a large conference in front of a room full of expert strangers, deleting your work halfway through your project without a back-up, or waiting months for lab time only to find out that the one piece of equipment you needed to process your field samples just broke down and will take months and a new grant proposal to replace. The list goes on and on, and for some reason it always keeps expanding. Before you find yourself googling the nearest psychiatrist or, even worse, decide to pack up and go home to live in your parents’ basement while working as a barista like every up-and-coming movie star ever, take comfort in this: everybody around you feels, or has felt, the same as you. Talk to your colleagues, your supervisor, your professor or (I dare you) a stranger at a conference: you’ll get positive feedback on your research and encouragement that you’ll make it. Sure, it will take effort and you will see some nights through till daylight, but eventually you’ll get there. And then you’re one of the few…

Waiting for you on the other side…

Yours truly,

The Sassy Scientist

PS: This post was written after struggling to finish a PhD myself, just as every single scientist has in the past.

On the resolution of seismic tomography models and the connection to geodynamic modelling (Is blue/red the new cold/hot?) (How many pixels in an Earth??)

What do the blobs mean?

Seismologists work hard to provide the best snapshots of the Earth’s mantle. Yet tomographic models based on different approaches, or using different data sets, sometimes show quite different details. It is hard for a non-specialist to know whether small-scale anomalies can be trusted, and why. This week Maria Koroni and Daniel Bowden, both postdocs in the Seismology and Wave Physics group at ETH Zürich, tell us how these beautiful images of the Earth are obtained in practice.

Daniel Bowden and Maria Koroni enjoying coffee in Zürich

Seismology is a science that aims to provide tomographic images of the Earth’s interior, similar to X-ray images of the human body. These images can be used as snapshots of the current state of flow patterns inside the mantle. The main way we communicate, from tomographer to geodynamicist, is through the publication of some tomographic image. We seismologists, however, make countless choices, approximations and assumptions, are limited by poor data coverage, and ultimately never fit our data perfectly. These things are often overlooked, or taken for granted and poorly communicated. Inevitably, this undermines the rigour and usefulness of subsequent interpretations in terms of heat or material properties. This post will give an overview of what can worry a seismologist/tomographer. Our goal is not to teach seismic tomography, but to plant a seed that will make geodynamicists push seismologists for better accuracy, robustness, and communicated uncertainty!

A typical day in a seismologist’s life starts with downloading some data for a specific application. Then we cry while looking at waveforms that make no sense (compared to the clean and physically meaningful synthetics calculated the day before). After a sip, or two, or two thousand sips of freshly brewed coffee, and some pre-processing steps to clean up the mess that is real data, the seismologist sets up a measurement of the misfit between synthetic and observed waveforms. Do we try to fit the entire seismogram, just its travel time, or its amplitude? The choice we make in defining this misfit can non-linearly affect our outcome, and there is no clear way to quantify that uncertainty.
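To make the idea of a misfit measurement concrete, here is a minimal sketch (not the authors’ actual workflow) comparing a synthetic and an observed trace with two common choices: an L2 waveform misfit and a cross-correlation traveltime shift. The traces, sampling interval and noise level are made up purely for illustration.

```python
import numpy as np

def l2_waveform_misfit(obs, syn):
    """Half the sum of squared differences between observed and synthetic traces."""
    return 0.5 * np.sum((obs - syn) ** 2)

def cross_correlation_time_shift(obs, syn, dt):
    """Time shift (s) that best aligns the synthetic with the observation.
    Positive means the observed arrival is later than the synthetic one."""
    cc = np.correlate(obs, syn, mode="full")
    lag = np.argmax(cc) - (len(syn) - 1)   # lag in samples
    return lag * dt

# Hypothetical example: a Gaussian arrival, observed 0.3 s late and noisy
dt = 0.01                                   # sampling interval [s]
t = np.arange(0.0, 20.0, dt)
syn = np.exp(-((t - 8.0) ** 2) / 0.5)       # clean synthetic arrival
obs = np.exp(-((t - 8.3) ** 2) / 0.5)       # same pulse, delayed by 0.3 s
obs += 0.05 * np.random.default_rng(0).normal(size=t.size)  # add noise

print("L2 waveform misfit :", l2_waveform_misfit(obs, syn))
print("Traveltime shift   :", cross_correlation_time_shift(obs, syn, dt), "s")
```

Even in this toy setting the two measurements respond differently to noise and amplitude errors, which is exactly the kind of choice that propagates, non-linearly, into the final model.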

After obtaining the misfit measurements, the seismologist starts thinking about best inversion practices in order to derive some model parameters. There are two more factors to consider now: how to mathematically find a solution that fits our data, and how to pick one, necessarily subjective, solution from the many that fit the data equally well… The number of (quasi-)arbitrary choices can increase dramatically in the course of the poor seismologist’s day!
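As a hedged illustration of how that subjective choice enters the mathematics, here is a toy damped (Tikhonov-regularized) least-squares inversion. The sensitivity matrix G, the data d and the damping values are all invented; they merely stand in for the kernels and measurements of a real tomography, and the point is only that the recovered model depends on a knob the data cannot set for us.

```python
import numpy as np

def damped_least_squares(G, d, damping):
    """Solve min ||G m - d||^2 + damping * ||m||^2 for the model vector m."""
    n = G.shape[1]
    A = G.T @ G + damping * np.eye(n)    # regularized normal equations
    return np.linalg.solve(A, G.T @ d)

# Toy problem: 20 noisy "traveltime" data, 10 model parameters
rng = np.random.default_rng(0)
G = rng.normal(size=(20, 10))            # stand-in for sensitivity kernels
m_true = rng.normal(size=10)
d = G @ m_true + 0.1 * rng.normal(size=20)

# Different damping, different "final" model -- all of them fit the data
for damping in (0.01, 1.0, 100.0):
    m = damped_least_squares(G, d, damping)
    print(f"damping = {damping:6.2f}  model norm = {np.linalg.norm(m):.3f}  "
          f"data misfit = {np.linalg.norm(G @ m - d):.3f}")
```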

The goal is to image seismic anomalies: to present a velocity model that is somehow different from the assumed background. After that, the seismologist can go home, relax and write a paper about what the model shows in geological terms. Or… more questions arise and doubts come flooding in. Are the choices I made sensible? Should I calculate the errors associated with my model? Thermodynamics gives us the basic equations to translate seismic anomalies into thermal anomalies in the Earth, but how can we improve the estimated velocity model for a more realistic interpretation?


Figure 1: A tomographic velocity model, offshore southern California. What do the blobs mean? This figure is modified from the full paper at https://doi.org/10.1002/2016JB012919

Figure 1 is one such example of a velocity model, constructed through seismic tomography (specifically from ambient-noise surface waves). The paper reviews the tectonic history of the crust and upper mantle in this offshore region. We are proud of this model, and sincerely hope it can be of use to those studying tectonics or dynamics. We are also painfully aware of the assumptions that we had to make, however. This picture could look drastically different if we had used a different amount of regularization (smoothing), had made different prior assumptions about where layers may be, had been more or less restrictive in cleaning our raw data observations, or made any number of other changes. We were careful in all these regards, and ran test after test over the course of several months to ensure the process was up to high standards, but for the most part… you just have to take our word for it.

There are a number of features we interpret here: thinning of the crust, upwelling asthenosphere, the formation of volcanic seamounts, etc. But it wouldn’t shock me if some other study came out in the coming years that told an entirely different story; indeed, it is part of our process as scientists to keep challenging and testing hypotheses. But what if this model is used as an input to something else as-of-yet unconstrained? In this model, could the Lithosphere-Asthenosphere Boundary (LAB) shown here be 10 km shallower or deeper, and why does it disappear at 200 km along the profile? Couldn’t that impact geodynamicists’ work dramatically? Our field is a collaborative effort, but if we as seismologists can’t properly quantify the uncertainties in our pretty, colourful models, what kind of effect might we be having on the field of geodynamics?

Another example comes from global-scale models. Take a look at figures 6 and 7 in Meier et al. 2009, ”Global variations of temperature and water content in the mantle transition zone from higher mode surface waves” (DOI: 10.1016/j.epsl.2009.03.004): you can observe global discontinuity models and are invited to notice their differences. Some major features keep appearing in all of them, which is encouraging, since it suggests that we may indeed be looking at some real properties of the mantle. However, even similar methodologies have often not converged to the same tomographic images. The sources of these discrepancies are the usual plagues of seismic tomography, some of which were mentioned above.


Figure 2: Global models of the 410 km discontinuity derived after 5 iterations using traveltime data. We verified that the method retrieves target models almost perfectly. The data can be well modelled in terms of discontinuity structure, but how easily can they be interpreted in terms of thermal and/or compositional variations?

In an effort to improve the imaging of mantle discontinuities, especially those at 410 and 660 km depth which are highly relevant to geodynamics (I’ve been told…), we have put some effort into building up a different approach. Usually, traveltime tomography and one-step interpretation of body-wave traveltimes have been the default for producing images of the mantle transition zone. We proposed an iterative optimisation of a pre-existing model, which includes flat discontinuities, using traveltimes in a full-waveform inversion scheme (see figure 2). The goal was to see whether the new approach can retrieve the topography of the discontinuities. The method seems to perform very well and offers the potential for higher-resolution imaging. Are my models capable of resolving mineralogical transitions and thermal variations at the depths of the 410 and 660 km discontinuities?

The most desirable outcome would be a model that not only represents Earth parameters realistically but also provides error bars, which essentially quantify uncertainties. Providing error bars, however, requires extra computational work, and, like every pixel-obsessed seismologist, we would be curious to know the extent to which these uncertainties are useful to a numerical modeller! Our main question, then, remains: how can we build an interdisciplinary approach that can justify burning large amounts of computational power?
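As a sketch of what such error bars could look like in the simplest setting (a linearized problem with Gaussian noise, which is an assumption, not how a full-waveform inversion is actually solved), the posterior model covariance can be written down explicitly and its diagonal read off as one-sigma uncertainties. The toy matrix and noise levels below are invented; for realistically sized problems this direct formula is exactly the kind of extra computational work alluded to above.

```python
import numpy as np

def posterior_covariance(G, sigma_d, sigma_m):
    """Posterior covariance of a linear Gaussian inverse problem:
    C_post = (G^T C_d^-1 G + C_m^-1)^-1, with diagonal data and prior covariances."""
    n = G.shape[1]
    precision = G.T @ G / sigma_d**2 + np.eye(n) / sigma_m**2
    return np.linalg.inv(precision)

# Toy setup: 30 measurements, 8 model parameters (all numbers illustrative)
rng = np.random.default_rng(1)
G = rng.normal(size=(30, 8))
C_post = posterior_covariance(G, sigma_d=0.1, sigma_m=1.0)
error_bars = np.sqrt(np.diag(C_post))    # one-sigma uncertainty per parameter
print("error bars:", np.round(error_bars, 4))
```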

As (computational) seismologists we pose questions about our regional or global models: are velocity anomalies, intuitively coloured as blue and red blobs and taken to represent heat and mass transfer in the Earth, good enough, or is it essential that we determine their shapes and sizes in greater detail? Determining a range of values for the derived seismic parameters (instead of a single estimate) could allow geodynamicists to take into account different scenarios of complex thermal and compositional patterns. We hope that this short article gave some insight into the questions a seismologist faces each time they derive a tomographic model. The resolution of seismic models is always a point of vigorous discussion, but it could also be a great platform for interaction between seismologists and geodynamicists, so let’s do it!

For an overview of tomographic methodologies, the reader is referred to Liu, Q. & Gu, Y. J. (2012), Seismic imaging: From classical to adjoint tomography, Tectonophysics. https://doi.org/10.1016/j.tecto.2012.07.006

The Sassy Scientist – Analogue Modelling

Every week, The Sassy Scientist answers a question on geodynamics, related topics, academic life, the universe or anything in between with a healthy dose of sarcasm. Do you have a question for The Sassy Scientist? Submit your question here.

David asks:


What do you think about analogue modelling?


Dear David,

Analogue modelling. Well, what’s not to like? Who doesn’t want to spend weeks or months finding the material that mimics behaviour we suspect to be relevant for Earth-like processes? And then, after finally finding the perfect material and sculpting a subduction zone, seeing it all sink to the bottom of the tank because the ‘lithosphere’ wasn’t placed perfectly on top of the ‘asthenosphere’?

Then again, the realm of analogue modelling isn’t all that grim… Even though you cannot blindly run many models to investigate the full parameter space, this is also a benefit: analogue modelling requires you to make smart choices about the processes you seek to model. The results are then fairly close to the first-order response we consider appropriate for the Earth. With numerical models we can simply add complexity on top of complexity on top of complexity, which makes it fairly difficult to constrain exactly what is happening. Additionally, numerical models may produce exciting figures and results that seem to mimic what we interpret from our observations, but in the end they simply solve some equations numerically. In analogue models, on the other hand, you actually see nature at work!

To conclude: analogue modelling is definitely worth the pain and effort (see João’s story). Unfortunately, research positions are limited, because it simply isn’t as sexy as numerical modelling. There are limited facilities worldwide, whereas for numerical modelling every university can provide you with a computer. So: get into it before the state of funding for analogue modelling becomes as comforting as the Dry Valleys of Antarctica!

Yours truly,

The Sassy Scientist

PS: This post was written after sitting through a disappointing analogue modelling session at EGU

The past is the key

Lorenzo Colli

“The present is the key to the past” is an oft-used phrase in the context of understanding our planet’s complex evolution. But this perspective can also be flipped, reflected, and reframed. In this Geodynamics 101 post, Lorenzo Colli, Research Assistant Professor at the University of Houston, USA, showcases some of the recent advances in modelling mantle convection.

 

Mantle convection is the fundamental process that drives a large part of the geologic activity at the Earth’s surface. Indeed, mantle convection can be framed as a dynamical theory that complements and expands the kinematic theory of plate tectonics: on the one hand it aims to describe and quantify the forces that cause tectonic processes; on the other, it provides an explanation for features – such as hotspot volcanism, chains of seamounts, large igneous provinces and anomalous non-isostatic topography – that aren’t accounted for by plate tectonics.

Mantle convection is both very simple and very complicated. In its essence, it is simply thermal convection: hot (and lighter) material goes up, cold (and denser) material goes down. We can describe thermal convection using classical equations of fluid dynamics, which are based on well-founded physical principles: the continuity equation enforces conservation of mass; the Navier-Stokes equation deals with conservation of momentum; and the heat equation embodies conservation of energy. Moreover, given the extremely large viscosity of the Earth’s mantle and the low rates of deformation, inertia and turbulence are utterly negligible and the Navier-Stokes equation can be simplified accordingly. One incredible consequence is that the flow field only depends on an instantaneous force balance, not on its past states, and it is thus time reversible. And when I say incredible, I really mean it: it looks like a magic trick. Check it out yourself.
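For reference, one common simplified (incompressible, inertia-free) form of these three conservation laws is sketched below; notation and extra terms vary between studies, but the structure is the same, and the absence of any time derivative of velocity in the momentum equation is what makes the flow an instantaneous force balance.

```latex
% Conservation of mass (incompressible continuity equation)
\nabla \cdot \mathbf{u} = 0

% Conservation of momentum (Stokes equation: Navier-Stokes without inertia)
\nabla \cdot \left[ \eta \left( \nabla \mathbf{u} + \nabla \mathbf{u}^{\mathsf{T}} \right) \right]
  - \nabla p + \rho(T)\,\mathbf{g} = \mathbf{0}

% Conservation of energy (heat advected by the flow, diffused, and produced internally)
\frac{\partial T}{\partial t} + \mathbf{u} \cdot \nabla T = \kappa \nabla^{2} T + H
```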

With four parameters I can fit an elephant, and with five I can make him wiggle his trunk

This is as simple as it gets, in the sense that from here onward every additional aspect of mantle convection results in a more complex system: 3D variations in rheology and composition; phase transitions, melting and, more generally, the thermodynamics of mantle minerals; the feedbacks between deep Earth dynamics and surface processes. Each of these additional aspects results in a system that is harder and costlier to solve numerically, so much so that numerical models need to compromise, including some aspects but excluding others, or giving up dimensionality, domain size or the ability to advance in time. More importantly, most of these aspects are so-called subgrid-scale processes: they deal with the macroscopic effect of some microscopic process that cannot be modelled at the same scale as the macroscopic flow and is too costly to model at the appropriate scale. Consequently, it needs to be parametrized. To make matters worse, some of these microscopic processes are not understood sufficiently well to begin with: the parametrizations are not formally derived from first-principles physics but are long-range extrapolations of semi-empirical laws. The end result is that it is possible to generate more complex – and thus, in this regard, more Earth-like – models of mantle convection at the cost of an increase in tunable parameters. But which parameters give a truly better model? How can we test it?

Figure 1: The mantle convection model on the left runs in ten minutes on your laptop. It is not the Earth. The one on the right takes two days on a supercomputer. It is fancier, but it is still not the real Earth.

Meteorologists face similar issues with their models of atmospheric circulation. For example, processes related to turbulence, clouds and rainfall need to be parametrized. Early weather forecast models were… less than ideal. But meteorologists can compare their model predictions every day with what actually occurs, thus objectively and quantitatively assessing what works and what doesn’t. As a result, over the last 40 years weather predictions have improved steadily (Bauer et al., 2015). Current models are better at using the available information (what is technically called data assimilation; more on this later) and have parametrizations that better represent the physics of the underlying processes.

If time travel is possible, where are the geophysicists from the future?

We could do the same, in theory. We can initialize a mantle convection model with some best estimate for the present-day state of the Earth’s mantle and let it run forward into the future, with the explicit aim of forecasting its future evolution. But mantle convection evolves over millions of years instead of days, thus making future predictions impractical. Another option would be to initialize a mantle convection model in the distant past and run it forward, thus making predictions-in-the-past. But in this case we really don’t know the state of the mantle in the past. And as mantle convection is a chaotic process, even a small error in the initial condition quickly grows into a completely different model trajectory (Bello et al., 2014). One can mitigate this chaotic divergence by using data assimilation and imposing surface velocities as reconstructed by a kinematic model of past plate motions (Bunge et al., 1998), which indeed tends to bring the modelled evolution closer to the true one (Colli et al., 2015). But it would take hundreds of millions of years of error-free plate motions to eliminate the influence of the unknown initial condition.
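To get a feel for how fast such errors grow, here is a toy illustration (emphatically not a mantle model): the Lorenz equations, themselves a drastically truncated description of thermal convection, integrated from two initial states that differ by one part in a million. Parameters and time span are the textbook ones, chosen only to show the divergence.

```python
import numpy as np
from scipy.integrate import solve_ivp

def lorenz(t, s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Lorenz equations: a heavily truncated model of thermal convection."""
    x, y, z = s
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

t_eval = np.linspace(0.0, 40.0, 401)
a0 = [1.0, 1.0, 1.0]
b0 = [1.0 + 1e-6, 1.0, 1.0]                 # differs by one part in a million

sol_a = solve_ivp(lorenz, (0.0, 40.0), a0, t_eval=t_eval, rtol=1e-9, atol=1e-9)
sol_b = solve_ivp(lorenz, (0.0, 40.0), b0, t_eval=t_eval, rtol=1e-9, atol=1e-9)

separation = np.linalg.norm(sol_a.y - sol_b.y, axis=0)
for ti, di in zip(t_eval[::100], separation[::100]):
    print(f"t = {ti:5.1f}   separation = {di:.3e}")
# The tiny initial error grows roughly exponentially until the two
# trajectories are completely unrelated: chaotic divergence in action.
```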

As I mentioned before, the flow field is time reversible, so one can try to start from the present-day state and integrate the governing equations backward in time. But while the flow field is time reversible, the temperature field is not. Heat diffusion is physically irreversible and mathematically unstable when solved back in time. Plainly said, the temperature field blows up. Heat diffusion needs to be turned off [1], thus keeping only heat advection. This approach, aptly called backward advection (Steinberger and O’Connell, 1997), is limited to only a few tens of millions of years in the past (Conrad and Gurnis, 2003; Moucha and Forte, 2011): the errors induced by neglecting heat diffusion add up and the recovered “initial condition”, when integrated forward in time (or should I say, back to the future), doesn’t land back at the desired present-day state, following instead a divergent trajectory.
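The instability of back-in-time diffusion can be read off from a single Fourier mode of the heat (diffusion) equation: forward in time, short-wavelength anomalies decay; backward in time, they are amplified without bound, fastest at the shortest wavelengths.

```latex
% A single Fourier mode of wavenumber k under pure diffusion:
\frac{\partial T}{\partial t} = \kappa \nabla^{2} T
\quad \Longrightarrow \quad
T(\mathbf{x}, t) = \hat{T}_{k}\, e^{\,i\,\mathbf{k}\cdot\mathbf{x}}\, e^{-\kappa k^{2} t}

% Forward in time (t > 0): the mode decays smoothly.
% Backward in time (t < 0): the factor becomes e^{+\kappa k^{2} |t|},
% which blows up fastest for the largest wavenumbers (shortest wavelengths).
```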

Per aspera ad astra

As all the simple approaches turn out to be either unfeasible or unsatisfactory, we need to turn our attention to more sophisticated ones. One option is to be more clever about data assimilation, for example by using a Kalman filter (Bocher et al., 2016; 2018). This methodology allows the physics of the system, as embodied by the numerical model, to be combined with observational data, while at the same time taking into account their relative uncertainties. A different approach is to pose a formal inverse problem aimed at finding the “optimal” initial condition that evolves into the known (best-estimate) present-day state of the mantle. This inverse problem can be solved using the adjoint method (Bunge et al., 2003; Liu and Gurnis, 2008), a rather elegant mathematical technique that exploits the physics of the system to compute the sensitivity of the final condition to variations in the initial condition. Both methodologies are computationally very expensive. Like, many millions of CPU-hours expensive. But they allow for explicit predictions of the past history of mantle flow (Spasojevic & Gurnis, 2012; Colli et al., 2018), which can then be compared with evidence of past flow states as preserved by the geologic record, for example in the form of regional- and continental-scale unconformities (Friedrich et al., 2018) and planation surfaces (Guillocheau et al., 2018). The past history of the Earth thus holds the key to significantly advancing our understanding of mantle dynamics, by allowing us to test and improve our models of mantle convection.
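As a cartoon of the adjoint workflow sketched in Figure 2 (and not any published code), the outer loop might look like the following: run the forward model from a guessed initial condition, measure the mismatch with the present-day state, run the adjoint model backward to turn that mismatch into a gradient with respect to the initial condition, and take a step. The functions `run_forward` and `run_adjoint` are placeholders for a real mantle convection code and its adjoint; the linear "evolution operator" used in the toy usage is purely illustrative.

```python
import numpy as np

def optimize_initial_condition(m0, present_day_obs, run_forward, run_adjoint,
                               step=0.1, n_iter=20):
    """Sketch of adjoint-based recovery of an 'optimal' initial mantle state."""
    m = m0.copy()
    for it in range(n_iter):
        final_state = run_forward(m)                # forward in time
        residual = final_state - present_day_obs    # mismatch at present day
        misfit = 0.5 * np.sum(residual ** 2)
        gradient = run_adjoint(m, residual)         # adjoint run, backward in time
        m -= step * gradient                        # steepest-descent update
        print(f"iteration {it:2d}   misfit = {misfit:.3e}")
    return m

# Toy usage with a linear stand-in for the forward model (purely illustrative)
rng = np.random.default_rng(2)
A = np.eye(50) + 0.1 * rng.normal(size=(50, 50))    # fake "time evolution" operator
obs = A @ rng.normal(size=50)                       # fake present-day observations

m_recovered = optimize_initial_condition(
    m0=np.zeros(50),
    present_day_obs=obs,
    run_forward=lambda m: A @ m,
    run_adjoint=lambda m, r: A.T @ r,
)
```

In a real application each call to `run_forward` or `run_adjoint` is a full mantle convection simulation, which is where the many millions of CPU-hours go.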

Figure 2: A schematic illustration of a reconstruction of past mantle flow obtained via the adjoint method. Symbols represent model states at discrete times. They are connected by lines representing model evolution over time. The procedure starts from a first guess of the state of the mantle in the distant past (orange circle). When evolved in time (red triangles) it will not reproduce the present-day state of the real Earth (purple cross). The adjoint method tells you in which direction the initial condition needs to be shifted in order to move the modelled present-day state closer to the real Earth. By iteratively correcting the first guess an optimized evolution (green stars) can be obtained, which matches the present-day state of the Earth.

[1] Or even reversed in sign, to make the time-reversed heat equation unconditionally stable.