
Finding the forces in continental rifting

Luke Mondy

The Geodynamics 101 series serves to showcase the diversity of research topics and methods in the geodynamics community in an understandable manner. We welcome all researchers – PhD students to Professors – to introduce their area of expertise in a lighthearted, entertaining manner and touch upon some of the outstanding questions and problems related to their fields. For our latest ‘Geodynamics 101’ post, PhD candidate Luke Mondy from the EarthByte Group at the University of Sydney blogs about some impressively high-resolution numerical models of ‘rotational rifting,’ and the role of gravity. Luke also shares a bit about the journey behind this work, which recently appeared in Geology.


In geodynamic modelling, we’re always thinking about forces. It’s a balancing act of plate driving forces potentially interacting with the upwelling mantle, or maybe sediment loading, or thermal relaxation… the list goes on.

Figure 1: A summary of the forces interacting during continental rifting, from Brune, 2018.

But the thing that underpins all of these forces, fundamentally, is our favourite but oft forgotten force: gravity. Here, I’ll tell the story of investigating a numerical model of continental rifting and discovering – or rather, rediscovering – the importance of gravity as a fundamental force in driving Earth dynamics.

How it started – a side project!

A few years ago, my colleagues and I were granted access to not just one, but two, big supercomputers in Australia: Raijin and Magnus. Both were brand new and raring to go – but we needed something big to test them out on. At the time, 3D geodynamic models were typically limited to quite low resolution because they are so computationally demanding, but with this new power at our disposal, we decided to see how far we could push the computers to address a fundamentally 3D problem.

2D vs 3D

Historically, subduction and rifting have been ideal settings to model as they can be constrained to two dimensions while still retaining most of their characteristic properties.

Figure 2. A 2D subduction model. Despite being ‘only’ two dimensions, the fundamental and interesting aspects of the problem are still captured by the model. Figure from Rey et al., 2014.

However, as tremendously useful as these models have been, many interesting problems in geodynamics are fundamentally three dimensional. The obvious example is global mantle convection, but we are starting to see more and more papers addressing both rifting and subduction problems that require 3D contexts, for example: continental accretion (Moresi et al., 2014), metamorphic core complex formation (Rey et al., 2017), or oblique rifting (Brune et al., 2012).

Typically when we model a rift in 2D, the dimensionality implies that we are looking at orthogonal rifting – that the plates move away from each other perpendicular to the rift axis. Since 2D models cannot account for forces in the third dimension, they are only suitable for when the applied tectonic forces pull within the plane of the model – that is, when the 2D model lies along a small circle of an Euler pole.

Euler poles have another interesting geometric property – the velocity of extension between two plates changes as we move closer to or further away from the Euler pole: zero velocity at the pole itself, and fastest at the Euler ‘equator’, 90 degrees away from the pole (Lundin et al., 2014).

Figure 3. Left: From Lundin et al. (2014), the figure shows the geometric relationship of increasing rifting velocity as the distance from the pole increases. Right: the same relationship graphed out, showing the cosine curve (Kearey et al., 2009).
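
In symbols, for two plates rotating about an Euler pole with angular rate ω, the local full opening rate at an angular distance Δ from the pole follows the standard relation behind the figure (a sketch, not taken from the paper):

\[ v(\Delta) = \omega\, R_E \sin\Delta \]

so v vanishes at the pole (Δ = 0°) and peaks at the Euler equator (Δ = 90°); expressed in terms of ‘latitude’ measured from the Euler equator instead, the same curve is the cosine shown on the right of Figure 3.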

This leads to differing extension velocities along the length of the rift axis. Extension velocities are a huge control on the resulting geodynamics (e.g., Buck et al., 1999). Employing a series of 2D models along a rift axis (Brune et al., 2014) has been used to show how these dynamics change, but misses out on the three-dimensionality of the problem – how do these differing and diachronous dynamics interact with each other along the rift margin as it forms?

Rotational Rifting

We decided to attempt to model this sort of rifting, which we termed “rotational rifting”. Essentially, we linked up the 2D slices along the rift axis into one big 3D model – so that we have slow extension towards the Euler pole, and fast extension away from it.

To do this, we ended up using the code Underworld (at the time version 1.8 – but their 2.0 version is the best place to start!), and a framework developed inside the EarthByte group at the University of Sydney called the ‘Lithospheric Modelling Recipe’, or LMR.


Figure 4. Map view of the two experiments. Arrows show the velocity boundary conditions applied. Note they are perpendicular to the model domain – we thought long and hard about this choice, and explain it fully in the Data Repository.

Using the LMR, we set up two 3D experiments: both are 1000 km by 500 km along the surface, and 180 km deep. The ‘orthogonal’ experiment is modelled at the Euler equator – so the velocities along the walls are the same all the way along the rift axis. The ‘rotational’ experiment is very close to the Euler pole (where the rate of change of extension velocity is greatest), from 89 degrees to 79 degrees (90 degrees being the Euler pole), which gives an imposed velocity of 0.5 cm/yr at the slow end (89 degrees) and 5.0 cm/yr at the fast end (79 degrees).


Since we wanted to stress test the supercomputers, we ran these experiments at just under 2 km grid resolution (256 x 512 x 96). This meant each experiment ended up using about 2.5 billion particles to track the materials! The 2 km grid size is an important milestone – to properly resolve faulting, sub-2 km grid sizes are required (Gerya, 2009).
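
As a quick back-of-the-envelope check of those numbers, here is a rough sketch in Python (the mapping of grid dimensions onto the model sides, and the particles-per-cell value, are simply what the quoted figures imply, not quoted model settings):

# Rough sanity check of the quoted grid resolution and particle count
nx, ny, nz = 512, 256, 96                      # cells along 1000 km, 500 km, 180 km (mapping assumed)
spacing_km = (1000.0 / nx, 500.0 / ny, 180.0 / nz)
cells = nx * ny * nz
print("grid spacing (km):", tuple(round(s, 2) for s in spacing_km))  # (1.95, 1.95, 1.88)
print("total cells:", cells)                                         # 12,582,912
print("implied particles per cell:", round(2.5e9 / cells))           # ~200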

The results!

So we ran the experiments, and compared the results! To give a broad overview of what we found, here’s a nice animation:

Figure 5. Top: Animation showing the orthogonal experiment from a south-west perspective (with the Euler pole being the ‘north’ pole). The light grey layers show the upper crust, dark grey the lower crust. Half of the crust has been removed to show the lithospheric mantle topography. The blue to the white colours show the lithospheric mantle temperature, and from white to red shows the asthenospheric temperature. Bottom: As above but for the rotational experiment. Notice that the asthenospheric dome migrates along the rift towards the Euler pole.

What to do now?

Cool looking experiments, of course! The supercomputers had been able to handle the serious load we put on them (it took about 2 weeks per experiment, on ~800 CPUs), so that part of the project was a success. But what about the experiments themselves – did switching to 3D actually tell us anything useful?

What we expected…

The things we expected were there. The orthogonal experiment behaved identically to a 2D model. For the rotational experiment, we found the style of faulting changed and evolved along the rift axis, and seemed to match up nicely with the 2D work about differing extension rates. We were able to identify phases of rifting via strain patterns, which were similar to those described by Lavier and Manatschal (2006), and seemed to match the outputs of the series of 2D models along a rift axis.

Figure 6. Map view of strain-rate of the rotational experiment through time. The phases (1 through 4, representing different modes of deformation) migrate along the rift towards the Euler pole.

What we didn’t expect…

Almost on a whim, we decided to start looking into the tectonic regime. Using the visualization program Paraview, we calculated the eigenvectors of the deviatoric stress and assigned a tectonic regime (blue for extension, red for compression, green for strike-slip, and white for undetermined), following a similar scheme to the World Stress Map (Zoback, 1992). Apologies to colour blind folk!
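
For the curious, the bare bones of that classification fit in a few lines of NumPy. This is a generic sketch of the Anderson-style / World Stress Map idea rather than our actual Paraview pipeline, and the sign convention and plunge threshold here are assumptions:

import numpy as np

def stress_regime(stress, plunge_threshold_deg=60.0):
    """Classify the tectonic regime from a 3x3 (deviatoric) stress tensor.

    Whichever principal stress axis is closest to vertical names the regime.
    Assumes z is vertical and compression is negative; the 60-degree plunge
    cut-off mirrors the one quoted for Fig. 9, but is otherwise a choice.
    """
    vals, vecs = np.linalg.eigh(np.asarray(stress, dtype=float))  # eigenvalues ascending
    # index 0 = most compressive axis (sigma_1), index 2 = least compressive (sigma_3)
    plunges = np.degrees(np.arcsin(np.clip(np.abs(vecs[2, :]), 0.0, 1.0)))
    if plunges.max() < plunge_threshold_deg:
        return "undetermined"                    # white
    vertical = int(np.argmax(plunges))           # which principal axis is (sub)vertical
    return {0: "extension",                      # sigma_1 vertical -> normal faulting (blue)
            1: "strike-slip",                    # sigma_2 vertical -> strike-slip (green)
            2: "compression"}[vertical]          # sigma_3 vertical -> thrust faulting (red)

# Horizontal tension along x balanced by vertical compression: an extensional regime
print(stress_regime(np.diag([1.0, 0.0, -1.0])))  # -> extension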

Here’s what a selected section of the orthogonal experiment surface looks like through time:

Figure 7. The stress regimes at the surface of the orthogonal experiment (clipped to y = ~400 to ~600 km).

Not really that surprising – we found mostly extension everywhere, with a bit of compression when the central graben sinks down and gets squeezed. However, it was a little bit surprising to see the compression come back on the rift flanks.

But when we applied the same technique to the rotational experiment, we found this on the surface:

Figure 8. The stress regimes at the surface of the rotational experiment. The three numbers at the top represent the total extension at y = 0 km, 500 km, and 1000 km respectively.

Now all of a sudden we’re seeing strike-slip stress regimes in different areas of the experiment!

The figures above display the stress at the surface of the experiments – where one of the principal stress axes must be vertical – but our colouring technique does not limit us to just the surface. When looking at cross-sections, we noticed that the lithospheric mantle was also showing unexpected stress regimes!

Figure 9. Slices at y = 500 km across the rift axis (right in the middle). Coloured areas show where the plunge of one principal stress axis is >60 degrees. Both experiments have the same applied extension velocity at y = 500 km, and so total extension is equivalent between experiments.

In most of the lithosphere, the strain rate is still very small, not enough to notice much deformation (10^-16 to 10^-18 s^-1). But a few puzzling questions were raised: why do we see compressional tectonic regimes in the orthogonal experiment, and why do we also see strike-slip regimes in the rotational experiment?

Gravitational Potential Energy (GPE)

It quickly became apparent that these stress changes were related to the upwelling asthenosphere, as the switch between regimes was well timed to when the asthenosphere would approach the Moho – about 40 km depth. This gave us the hint that perhaps buoyancy forces were at play. We used Paraview again to calculate the gravitational potential energy at each point on the surface (taking into account all the temperature dependent densities, detailed topography, and so on), and produced these maps:

Figure 10. A time series showing the gravitational potential energy (GPE) at each point on the surface of the rotational experiment. Only half the surface is shown because it is symmetrical. The small triangle notch is where we determined the rift tip to be located (where 1/(beta factor) < 0.2).
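
Stripped of the Paraview bookkeeping, the quantity mapped above is just the depth integral of the lithostatic stress for each vertical column down to a common compensation depth. Here is a minimal sketch of that idea with made-up densities (only the differences in GPE between columns matter physically):

import numpy as np

def column_gpe(z, rho, g=9.81):
    """GPE per unit surface area (J/m^2, equivalently N/m) of one vertical column.

    z   : depths in metres, from the local surface down to a compensation
          depth shared by all columns (increasing downwards)
    rho : density at each depth (kg/m^3), including thermal effects
    """
    dz = np.diff(z)
    rho_mid = 0.5 * (rho[1:] + rho[:-1])
    # lithostatic stress sigma_zz(z): running integral of rho * g
    sigma_zz = np.concatenate(([0.0], np.cumsum(rho_mid * g * dz)))
    # GPE = integral of sigma_zz over the same depth range
    return np.sum(0.5 * (sigma_zz[1:] + sigma_zz[:-1]) * dz)

# Toy comparison: unthinned 40 km crust vs. a column where hot asthenosphere
# has risen to 20 km depth (illustrative densities only)
z = np.linspace(0.0, 180e3, 721)
flank = np.where(z < 40e3, 2800.0, 3300.0)
dome = np.where(z < 20e3, 2800.0, 3250.0)
print(f"GPE difference: {column_gpe(z, dome) - column_gpe(z, flank):.2e} N/m")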

What we saw confirmed our suspicions – the rise of the asthenospheric dome induces a gravitational force that radiates outwards. The juxtaposition of the hot, yet still quite heavy, asthenospheric material, next to practically unthinned crust on both the rift flanks and ahead of the rift tip, produces a significant force.

But why the switch to compression or strike-slip tectonic regimes in an otherwise extensional setting? In the case of the orthogonal model, the force (aka the difference in GPE) is perpendicular to the rift axis, since the dome rises synchronously along the axis. When this force overcomes the far-field tectonic force (essentially the force required to drive our experiment boundary conditions), the stress regime changes from extension to compression.
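
Since GPE is an energy per unit area, its lateral difference acts as a force per unit length of rift, so in loose shorthand (our reading of the balance, not a formal derivation) the flip happens roughly when

\[ \Delta \mathrm{GPE} = \mathrm{GPE}_{\mathrm{dome}} - \mathrm{GPE}_{\mathrm{flank}} > F_{\mathrm{far\,field}} \]

with both sides expressed in newtons per metre of rift length.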

However, in the rotational experiment, the dome is larger the further away from the Euler pole, and so instead the gravitational force radiates outwards from the dome. Now the stress in the lithospheric mantle has to deal with not only the force induced from the upwelling asthenosphere right next to it, but also from along the rift axis (have a look at the topography of the lithospheric mantle in Fig. 5). These combined forces end up rotating the principal stresses such that sigma_2 stands vertical and a strike-slip regime is generated.

We also see the gravitational force manifest in other ways. Looking at the along axis flow in the asthenosphere, the experiment initially predicts a suction force towards the rapidly opening end of the model (away from the Euler pole), similar to Koopmann et al. (2014). But once the dome is formed, we see a reversal of this flow, back towards the Euler pole, driven by gravitational collapse. This flow appears to apply a strong stress to the crust surrounding the dome, reaching upwards of 50 MPa in some places.

Figure 11. A: The direction of flow at the lithosphere-asthenosphere boundary in the centre of the rift. Early in the experiment, we see suction towards the fast end of the rift, while later in the experiment, we see a return flow. The dashed line shows the flow after the tectonic boundary conditions have been removed. B,C: cross-sections showing stress and velocity arrows from the experiment just after the tectonic boundary conditions have been removed.

How do we know it’s gravity?

To test this idea further, we ran some additional experiments. First, we let the rotational experiment run for about 3.6 million years, and then ‘stopped’ the tectonics (changed the side velocity boundary conditions to 0 cm/yr) – leaving gravity as the only driving force. We saw that the return flow towards the Euler pole was still present (though reduced). By running some more rotational experiments with either a doubled or halved Euler pole rotation rate, we saw that the initial suction magnitude correlates with the change in opening velocity, but the return flow to the Euler pole is almost identical, giving further evidence that this flow is gravity driven.

What about the real world?

We numerical modellers love to stay in the world of numbers – but alas, sometimes we must get our hands dirty and look at the real world – just to make sure our models actually tell us something useful!

Despite our slightly backwards methodology (model first, check nature second), it did give us an advantage: our experiments were producing predictions for us to go and test. We had our hypothesis – now to see if it could be validated.

So we went out and looked for examples of rifting near an Euler pole, and the two most notable we found were in the Woodlark Basin, Papua New Guinea, and the Galapagos Rise in the Pacific. Despite the ‘complications’ of the natural world (things like sediment loading, pre-existing weakness in the crust, etc. – things that get your hands dirty), we found a striking first order relationship between the earthquake focal mechanisms present in both areas, and what our experiments predicted:

Figure 12. Top: the Woodlark Basin, PNG. Bottom: the Galapagos Rise. Both show earthquake focal mechanisms, coloured the same way as our experiments: blue for extension, red for compression, and green for strike-slip.

Furthermore, much work has been done investigating the Hess Deep, a depression that sits ahead of the rift tip in the Galapagos. We found in our rotational experiment a similar ‘deep’ that moves ahead of the rift tip through time, giving us greater confidence in our experimental predictions.

Takeaways

There are a few things I’ve taken away from this experience. The first is that it’s important to remember the fundamentals. I’ve found that, generally, geodynamicists think first about the force balances at play in a particular setting, but gravity was staring me in the face for a while before I understood its critical role.

The second take-away was that exploratory modelling – playing around with experiments just for fun – is a great thing to do. Probably most of us do this anyway as part of our day-to-day activities, but putting aside some time to think about what sort of things to try out allowed us to find something really interesting. Furthermore, we then had a whole host of predictions we could go out and look for, rather than trying to tweak our experiment parameters to match something we had already found.

Finally, the 3D revolution we’re going through at the moment is exciting! Now that there are computers available to us that are able to run these enormous calculations, it gives us a chance to explore these fundamental problems in a new way and hopefully learn something about the world!

If you would like to check out our paper, you can see it here. We made all of our input files open-source (and the code Underworld is already open-source), so please check them out too!

References

Brune, S., Popov, A. A., & Sobolev, S. V. (2012). Modeling suggests that oblique extension facilitates rifting and continental break‐up. Journal of Geophysical Research: Solid Earth, 117(B8).

Brune, S., Heine, C., Pérez-Gussinyé, M., & Sobolev, S. V. (2014). Rift migration explains continental margin asymmetry and crustal hyper-extension. Nature Communications, 5, 4014.

Brune, S. (2018). Forces within continental and oceanic rifts: Numerical modeling elucidates the impact of asthenospheric flow on surface stress. Geology, 46(2), 191-192.

Buck, W. R., Lavier, L. L., & Poliakov, A. N. (1999). How to make a rift wide. Philosophical Transactions - Royal Society of London Series A, Mathematical Physical and Engineering Sciences, 671-689.

Gerya, T. (2009). Introduction to numerical geodynamic modelling. Cambridge University Press.

Kearey, P. (Ed.). (2009). The Encyclopedia of the solid earth sciences. John Wiley & Sons.

Lundin, E. R., Redfield, T. F., Péron-Pinvidic, G., & Pindell, J. (2014, January). Rifted continental margins: geometric influence on crustal architecture and melting. In Sedimentary Basins: Origin, Depositional Histories, and Petroleum Systems. 33rd Annual GCSSEPM Foundation Bob F. Perkins Conference. Gulf Coast Section SEPM (GCSSEPM), Houston, TX (pp. 18-53).

Koopmann, H., Brune, S., Franke, D., & Breuer, S. (2014). Linking rift propagation barriers to excess magmatism at volcanic rifted margins. Geology, 42(12), 1071-1074.

Lavier, L. L., & Manatschal, G. (2006). A mechanism to thin the continental lithosphere at magma-poor margins. Nature, 440(7082), 324.

Mondy, L. S., Rey, P. F., Duclaux, G., & Moresi, L. (2018). The role of asthenospheric flow during rift propagation and breakup. Geology.

Moresi, L., Betts, P. G., Miller, M. S., & Cayley, R. A. (2014). Dynamics of continental accretion. Nature, 508(7495), 245.

Rey, P. F., Coltice, N., & Flament, N. (2014). Spreading continents kick-started plate tectonics. Nature, 513(7518), 405.

Rey, P. F., Mondy, L., Duclaux, G., Teyssier, C., Whitney, D. L., Bocher, M., & Prigent, C. (2017). The origin of contractional structures in extensional gneiss domes. Geology, 45(3), 263-266.

Zoback, M. L. (1992). First‐and second‐order patterns of stress in the lithosphere: The World Stress Map Project. Journal of Geophysical Research: Solid Earth, 97(B8), 11703-11728.

Rheological Laws: Atoms on the Move

The Geodynamics 101 series serves to showcase the diversity of research topics and methods in the geodynamics community in an understandable manner. We welcome all researchers – PhD students to Professors – to introduce their area of expertise in a lighthearted, entertaining manner and touch upon some of the outstanding questions and problems related to their fields. For our first ‘101’ for 2018, we have an entry by postdoctoral researcher Elvira Mulyukova from Yale University about rheology and deformation occurring on atomic scales … it’s a fun and informative read indeed! Do you want to talk about your research? Contact us!

Elvira Mulyukova, Yale University

Most of us have an intuitive understanding that different materials resist being moved, or deformed, to different degrees. Splashing around in the mud is more energy-consuming (and fun, but never mind that) than in the water, and splashing around in a block of concrete is energy-intensive bordering on deadly. What are the physical reasons for these differences?

For Earth materials (rocks), the answer lies in the restless nature of their atoms: the little buggers constantly try to sneak out of their crystal lattice sites and relocate. Some are more successful at it than others, making those materials more easily deformable. A lattice site is really just other atoms surrounding the one that is trying to escape. You see, atoms are like a bunch of introverts: each is trying to escape from its neighbours, but doesn’t want to get near them. The ones that do escape have to overcome a temporary discomfort (or an increase in their potential energy, for those physically inclined) of getting close to their neighbours. This requires energy. The closer the neighbours – the more energy it takes to get past them. When you exert force on a material, you force some of the neighbours to be further away from our potential atomic fugitive, making it more likely for the atom to sneak in the direction of those neighbours. The fun part (well, fun for nerds like me) is that it doesn’t happen to just one atom, but to a whole bunch of them, wherever the stress field induced by the applied force is felt. A bunch of atoms escaping in some preferential direction is what we observe as material deformation. The more energy you need to supply to induce the mass migration of atoms – the stronger the material. But it’s really a question of how much energy the atom has to begin with, and how much energy is overall needed to barge through its detested neighbours. For example, when you crank up the temperature, atoms wiggle more energetically and don’t need as much energy supplied from external forcing in order to escape – thus the material gets weaker.

“A lattice site is really just other atoms surrounding the one that is trying to escape. You see, atoms are like a bunch of introverts: each is trying to escape from its neighbours, but doesn’t want to get near them.” cartoon by Elvira Mulyukova

One more thing. Where are the atoms escaping to? Well, there happen to be sanctuaries within the crystal lattice – namely, crystalline defects such as vacancies (aka point defects, where an atom is missing from the otherwise ordered lattice), dislocations (where a whole row of atoms is missing), grain boundaries (where one crystal lattice borders another, which is tilted relative to it) and other crystalline imperfections. These regions are sanctuaries because the lattice is more disordered there, which allows for larger distances in-between the neighbours. When occupying a regular lattice site – the atom is sort of trapped by the crystalline order. Think of the lattice as an oppressive regime, and the crystalline defects as liberal countries that are welcoming refugees. I don’t know, is this not the place for political metaphors? *…whistling and looking away…*

Ok, enough anthropomorphisms, let’s get to the physics. If this is the last sentence you’ll read in this blog entry, let it be this: rocks are made up of atoms that are arranged into crystal lattices (i.e., ordered rows and columns of atoms), which are further organized into crystal grains (adjacent crystals tilted relative to each other); applying force to a material encourages atoms to move in a preferential direction of the largest atomic spacing, as determined by the direction of the applied force; the ability of the lattice sites to keep their atoms in place (call it a potential energy barrier) determines how easily a material deforms. Ok, so it was more like three sentences, but now you know why we need to get to the atomic intricacies of the matter to understand a material’s macroscopic behaviour.

Alright, so we’re applying a force (or stress, which is simply force per area) to a material and watching it deform (a zen-inducing activity in its own right). We say that a material behaves like a fluid when its response to the applied stress (and not just any stress, but differential stress) is to acquire a strain rate (i.e., to progressively shorten or elongate in one direction or the other at some rate). On geological time scales, rocks behave like fluids, and their continuous deformation (mass migration of atoms within a crystal lattice) under stress is called creep.

The resistance to deformation is termed viscosity (let’s denote it µ), which basically tells you how much strain rate (ė) you get for a given applied differential stress (τ). Buckle up, here comes the math. For a given dimension (say x, and for the record – I’ll only be dealing with one dimension here to keep the math symbols simple, but bear in mind that µ, ė and τ are all tensors, so you’d normally either have a separate set of equations for each dimension, or some cleverly indexed symbols in a single set of equations), you have:
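
Something like this, in its simplest one-dimensional form (sketching the standard relation; the factor of two is the convention that makes the numbers below work out):

\[ \dot{e}_{xx} = \frac{\tau_{xx}}{2\mu} \qquad\Longleftrightarrow\qquad \mu = \frac{\tau_{xx}}{2\,\dot{e}_{xx}} \]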

So if I’m holding a chunk of peridotite with a viscosity of 10^20 Pa s (that’s units of Pascal-seconds, and that’s a typical upper mantle viscosity) and squeezing it in the horizontal direction with a stress of 10^8 Pa (typical tectonic stress), it’ll shorten at a rate of 5 · 10^-13 s^-1 (typical tectonic rates). A lower viscosity would give me a higher strain rate, or, equivalently, with a lower viscosity I could obtain the same strain rate by applying a smaller stress. If at this point you’re not thinking “oh, cool, so what determines the viscosity then?,” I failed massively at motivating the subject of this blog entry. So I’m just gonna go ahead and assume that you are thinking that. Right, so what controls the viscosity? We already mentioned temperature (let’s call it T), and this one is a beast of an effect. Viscosity depends on temperature exponentially, which is another way of saying that viscosity depends on temperature helluvalot. To throw more math at you, here is what this dependence looks like:
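
Roughly like this – keeping only the exponential part and sweeping prefactors under the rug (a sketch of the Arrhenius-type form implied by the numbers that follow):

\[ \mu \;\propto\; \exp\!\left(\frac{E}{R\,T}\right) \]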

where R = 8.3144598 J/K/mol (that’s Joules per Kelvin per mole) is the gas constant and E is the activation energy. Activation energy is the amount of energy that an atom needs to have in order to even start thinking about escaping from its lattice site, which of course depends on the potential energy barrier set up by its neighbours. Let’s say your activation energy is E = 530 · 10^3 J mol^-1. If I raised your temperature from 900 to 1000 K (that’s Kelvin, and those are typical mid-lithospheric temperatures), your viscosity would drop by a factor of ∼1000. That’s a three orders of magnitude drop.

Like I said, helluvalot. If instead you had a lower activation energy, say E = 300 · 10^3 J mol^-1, the same temperature experiment would bring your viscosity down by a factor of ∼50, which is less dramatic, but still significant. It’s like running through peanut butter versus running through chocolate syrup (running through peanut butter is a little harder… I clearly need to work on my intuition-enhancing examples). Notice, however, that while the temperature dependence is stronger for materials with higher activation energies, it is more energy-consuming to get the creep going in those materials in the first place, since atoms have to overcome higher energy barriers. There’s more to the story. Viscosity also depends on pressure (call it P), which has a say in both the energy barrier the atoms have to overcome in order to escape their neighbours, as well as how many lattice defects (called sanctuaries earlier) the atoms have available to escape to. The higher the pressure, the higher the energy barrier and the fewer lattice sanctuaries to resort to, thus the higher the viscosity. Throwing in the pressure effect, viscosity goes as:
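
That is, schematically (same caveat about prefactors):

\[ \mu \;\propto\; \exp\!\left(\frac{E + P\,V}{R\,T}\right) \]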

The exact dependence of viscosity on pressure is determined by V – the activation volume.

Alright, we’re finally getting to my favourite part – the atoms’ choice of sanctuary sites. If the atomic mass migration happens mainly via point defects, i.e., by atoms hopping from one single lattice vacancy to another, the deformation regime is called diffusion creep. As atoms hop away, vacancies accumulate in regions of compressive stress, and fewer vacancies remain in regions of tensional stress. Such redistribution of vacancies can come about by atoms migrating through the bulk of a crystal (i.e., the interior of a grain, which is really just a crystal that is tilted relative to its surrounding crystals), or atoms migrating along the boundary of a crystal (i.e., a grain boundary). In both cases, the rate at which atoms and vacancies get redistributed depends on grain size (let’s denote it r). The larger the grains – the more distance an atom has to cover to get from the part of the grain that is being compressed to the part that is under tension. More math is due. Here is what the viscosity of a material deforming by diffusion creep looks like:
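
Something of this shape (up to factor-of-two conventions), with the grain size on top and the soon-to-be-introduced compliance B underneath:

\[ \mu_{\mathrm{diff}} = \frac{d^{\,m}}{B} \]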

Exponent m depends on whether the atoms are barging through the bulk of a grain (m = 2), or along the grain boundaries (m = 3). What’s that new symbol B in the denominator, you ask? That’s creep compliance (in this case – diffusion creep compliance), and you two have already met, sort of. Creep compliance specifies how a given creep mechanism depends on pressure and temperature:
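
That is, presumably something of the form:

\[ B = B_0 \exp\!\left(-\frac{E_{\mathrm{diff}} + P\,V_{\mathrm{diff}}}{R\,T}\right) \]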

For diffusion creep of upper mantle rocks, I typically use m = 3, B_0 ∼ 13 µm^m MPa^-1 s^-1 (which is just a material-specific prefactor), E_diff = 300 · 10^3 J mol^-1 and V_diff = 5 cm^3 mol^-1 from Karato and Wu (1993). Sometimes I go bananas and set V_diff = 0 cm^3 mol^-1, blatantly ignoring the pressure dependence of viscosity, which is ok as long as I’m looking at relatively modest depth-ranges, like a few tens of kilometers.

At sufficiently high stresses, a whole row of atoms can become mobilized and move through the crystal, instead of the meagre one-by-one atomic hopping between the vacancies. This mode of deformation is called dislocation creep. Dislocations are really just a larger scale glitch in the structure of atoms (compared to vacancies). They are linear lattice defects, where a whole row of atoms can be out of order, displaced or missing. It requires more energy to displace a dislocation, because you are displacing more than one atom, but once it’s on the move, it accommodates strain much more efficiently than in the each-atom-for-itself diffusion creep regime. As the material creeps, dislocations get born (nucleated), get displaced and get dead (annihilated). Dislocations don’t care about grain size. What they do care about is stress. Stress determines the rate at which dislocations appear, move and disappear. I know you saw it coming, more math, here is the dislocation creep viscosity:
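
In the same spirit as before (and again modulo a factor-of-two convention):

\[ \mu_{\mathrm{disl}} = \frac{\tau^{\,1-n}}{A} \]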

Exponent n dictates the stress dependence of viscosity. Stress dependence of dislocation creep viscosity is a real pain, making the whole thing nonlinear and difficult to use in a geodynamical model. Not impossible, but rage-inducingly difficult. Say you’re trying to increase the strain rate by some amount, so you increase the stress, but then the viscosity drops, and suddenly you have a monster of a strain rate you never asked for. Ok, maybe it’s not quite this bad, but it’s not as good as if the viscosity just stayed constant. You wouldn’t be able to have strain localization, form tectonic plate boundaries and develop life on Earth then, but maaaan would you be cracking geodynamic problems like they were peanuts! I’m derailing. Just like all the other creeps, dislocation creep has its own compliance, A, that governs its dependence on pressure and temperature:
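
which, by analogy with B, presumably takes the form:

\[ A = A_0 \exp\!\left(-\frac{E_{\mathrm{disl}} + P\,V_{\mathrm{disl}}}{R\,T}\right) \]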

For dislocation creep of upper mantle rocks, I typically use n = 3, A_0 = 1.1 · 10^5 MPa^-n s^-1 (which is just a material-specific prefactor), E_disl = 530 · 10^3 J mol^-1 and V_disl = 20 cm^3 mol^-1, similar to Karato and Wu (1993). Just like for diffusion creep, I sometimes just set V_disl = 0 cm^3 mol^-1 to keep things simple.

We’re almost done. Allow me one last remark. A rock has an insane amount of atoms, crystal grains and defects, all subject to local and far-field conditions (stress, temperature, pressure, deformation history, etc). A typical rock is therefore heterogeneous on the atomic (nano), granular (micro) and outcrop (meter) scales. Thus, within one and the same rock, deformation will likely be accommodated by more than just one mechanism. With that in mind, and sticking to just two deformation mechanisms described above, we can mix it all together to get:
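
namely the harmonic average of the two viscosities (sketching the form implied by the strain-rate sum below):

\[ \mu_{\mathrm{eff}} = \left(\frac{1}{\mu_{\mathrm{diff}}} + \frac{1}{\mu_{\mathrm{disl}}}\right)^{-1} \]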

This is known as composite rheology, where we assumed that the strain rates accommodated by diffusion and dislocation creep can be simply summed up, like so:
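
that is, the total strain rate is presumably just:

\[ \dot{e}_{\mathrm{total}} = \dot{e}_{\mathrm{diff}} + \dot{e}_{\mathrm{disl}} \]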

Alright. If you got down to here, I salute you! Next time you’re squeezing a peridotite, or splashing in the mud, or running through peanut butter – give a shout out to those little atoms that enable you to do such madness. And if you want to get to the physics of it all, you can find some good introductory texts in Karato (2008); Turcotte and Schubert (2002).

References 

S. Karato. Deformation of earth materials: an introduction to the rheology of solid earth. Cambridge University Press, 2008.

S. Karato and P. Wu. Rheology of the upper mantle: a synthesis. Science, 260(5109):771–778, 1993. 

D.L. Turcotte and G. Schubert. Geodynamics. Cambridge University Press, 2002.

From hot to cold – 7 peculiar planets around the star TRAPPIST-1

Apart from Earth, there are a lot of Peculiar Planets out there! Every 8 weeks, give or take, we look at a planetary body or system worthy of our geodynamic attention. When the discovery of additional Earth-sized planets within the TRAPPIST-1 system was revealed last year, bringing the total to 7 planets, it captured the minds of audiences far and wide. This week, two of the authors from a 2017 Nature Astronomy study on the TRAPPIST-1 planets, Lena Noack from the Department of Earth Sciences at the Free University of Berlin and Kristina Kislyakova from the Department of Astrophysics at the University of Vienna, explain more about this fascinating system. 

Blog authors Lena Noack and Kristina Kislyakova

For Earth scientists, it often seems like a huge endeavour to talk about the geodynamics and other interior processes of the other planets in our Solar System, like Mars or Venus. But what about exoplanets? It’s very daring! We have almost no information about the thousands of planets that have been discovered so far in other parts of our galaxy. These planets orbit other stars, some of which are quite similar to our Sun, whereas others behave very differently. But how much do we actually know about planets around these stars?

Exoplanet hunting missions like Kepler have shown that the majority of exoplanets are actually small-mass planets – not huge gas giants like Jupiter – and are often smaller than Neptune, with some being even smaller than Earth. We have a pretty good idea of what some of these planets could look like, for example we know their mass, their radius, we might even have some spectral information on their atmospheres, we know how much energy they get from their star, and we might even know something about the star’s composition. This information hints at the composition of the planetary disk from which planets are made, and how much radioactive heating they may experience during their later evolution. Putting all these pieces together gives us several clues on how the planets may have evolved over time, and is comparable to the wealth of information we had of our neighbouring planets before the age of space exploration.

However, in contrast to our Solar System, we cannot (at least not with today’s technology) travel to these planets. The only way we can learn more about exoplanets is to combine geophysical, thermodynamical and astrophysical models – derived and tested for Earth and the Solar System – and apply them to exoplanet systems.


Artist’s impression of TRAPPIST-1e, ©NASA

One exoplanet system that is quite intriguing is the TRAPPIST-1 system, which has been observed by several different space and ground-based telescopes including TRAPPIST (short for TRAnsiting Planets and PlanetesImals Small Telescope, or otherwise known as a European monastery-brewed beer) and the Spitzer Space Telescope.

The system contains at least 7 small, densely-packed planets around an 8 Gyr old M dwarf. All planets have masses and radii close to Earth’s – from TRAPPIST-1d and -1h, which are both about ¾ the radius of Earth, to TRAPPIST-1g, which is 13% larger than Earth. For comparison, Venus, our sister planet, has a radius 5% smaller than Earth’s, and Mars, our small brother planet, is only about half the size of Earth. And the greatest news: TRAPPIST-1 is actually in our direct neighbourhood, only 39 light years away. This is literally around the corner! For comparison, our closest neighbouring planet outside the Solar System is Proxima Centauri b, at a distance of 4.2 light years. Its star belongs to a system of three stars, the most well-known of which is Alpha Centauri; together they form the closest star system to our own. Some day, it may actually be within our reach to travel to both the Centauri system and TRAPPIST-1. So we should learn as much about these planets as possible now.

What makes the TRAPPIST-1 planets truly peculiar are their tight orbits around the star – the closest planet orbits at a distance of about 0.011 AU, only around 1 percent of Earth’s orbital distance. Even the furthest planet discovered in the system so far – TRAPPIST-1h – has an orbit of only about 0.063 AU. In our Solar System, Mercury, the closest-in planet, orbits at a distance of 0.39 AU. Does this mean that the planets are boiling up due to their close orbits? Not necessarily, since TRAPPIST-1 is a very dim red M dwarf, which emits much less light than our Sun. If we were to place Earth around this red M dwarf, it would actually need to orbit at a distance of only about 0.022 AU to receive the same incident flux that it gets from the Sun. In fact, if we look at the range of distances from the star where (depending on the atmospheric greenhouse effect) liquid water could theoretically exist at the surface for a somewhat Earth-like atmosphere (that is, composed of gases such as CO2 and N2), TRAPPIST-1d, -1e, -1f and -1g could potentially host liquid water at the surface and would thus be habitable places where Earth-like life could, in principle, form. Of course, for that to occur several other factors have to be just right as well. This zone, where liquid water at the surface could exist, is called the Habitable Zone or Temperate Zone, and is indicated in green in the illustration of the TRAPPIST-1 system compared to the inner Solar System below.
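
That equivalent distance follows from the inverse-square law, taking TRAPPIST-1’s luminosity to be roughly 5 × 10^-4 times the Sun’s (an approximate literature value, quoted here only for the estimate):

\[ d_{\mathrm{eq}} \approx 1\,\mathrm{AU}\times\sqrt{L_{\ast}/L_{\odot}} \approx \sqrt{5\times10^{-4}}\ \mathrm{AU} \approx 0.022\ \mathrm{AU} \]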

The TRAPPIST-1 system compared to the inner Solar System, with the Habitable Zone indicated in green. © Caltech/NASA

So, should we already book our trip to TRAPPIST-1? Well, there are other factors that may endanger the possible habitability of these otherwise fascinating planets. First of all, TRAPPIST-1 is really different from the Sun. Although it is much dimmer and redder, it still emits almost the same amount of harsh X-ray and extreme ultraviolet radiation as our Sun, and in addition produces powerful flares. For the TRAPPIST-1 planets, which are so close to their star, this means that their atmospheres are exposed to much higher levels of short-wavelength radiation, which is known to lead to very strong atmospheric escape. A nitrogen-dominated atmosphere, like the one Earth has, would likely not be stable on the TRAPPIST-1 planets in the habitable zone due to exposure to this short-wavelength radiation for gigayears, so carbon dioxide Venus-like atmospheres are more probable. Besides that, the stellar wind of TRAPPIST-1 may be very dense at the planetary orbits, powering strong non-thermal escape from planetary atmospheres and leading to further erosion of the atmosphere.

Another interesting feature of M dwarfs, especially such low-mass ones as TRAPPIST-1, is their extremely slow evolution. On the one hand, this means very long main-sequence lifetimes, with stable radiation levels for many gigayears. Could this maybe allow very sophisticated life forms to evolve? On the other hand, when these stars are young, they go through a contraction phase before entering the main sequence, which is much longer than the contraction phase of G dwarfs such as the Sun. During this phase, the stars are much brighter and hotter than later in their history. For the TRAPPIST-1 planets this would mean they were grilled by hot temperatures for about a billion years! Can they still retain some water after such a violent past? Can life form under such conditions? We don’t really know. In any event, it seems that water retention and delivery might be a critical factor for the habitability of the TRAPPIST-1 planets.

Since the planets are so densely packed in the system, the masses of neighbouring planets as well as the mass of the star have a gravitational effect on each other – just as the Moon leads to high and low tides in Earth’s oceans. Only, the tidal forces acting on the TRAPPIST-1 planets would be much stronger, and could lead to immense energy being released in the interiors of the planets due to tidal dissipation. Furthermore, the star itself appears to have a strong magnetic field. An electrical current is produced when a conductive material is embedded in a changing magnetic field, which is used, for example, to melt iron in induction furnaces. Similarly, the mantles of rocky planets are conductive and can experience enhanced energy release deep in the upper mantle due to induction heating.

Both induction heating and tidal heating can have a negative effect on the potential habitability of a rocky planet, since strong heating in the interior can be reflected by equally strong volcanic activity at the surface. This would lead to a hellish surface to live on! The interior may even be partly molten, leading to subsurface oceans of magma, which actually may be the case for TRAPPIST-1b and -1c. Even TRAPPIST-1d may be affected by strong volcanic events due to both induction and tidal heating of the interior. TRAPPIST-1f, -1g and -1h might be too cold at the surface to have liquid water, and might rather resemble our water-rich icy moons orbiting around Saturn and Jupiter. Hence TRAPPIST-1e, which receives only a little less stellar flux compared to Earth, may be the most interesting planet to visit in the system.

But what would life look like on such a planet?

The tidal forces described above also lead to a different effect: the planets would always face the star with only one side (this is called tidal locking). Therefore, the planets would have a day side that always faces the star, and a night side immersed in eternal darkness, where no light from the star is ever received. Such a tidally-locked orbit is similar to the Moon–Earth system, as the Moon always shows us the same face – the “near-side” of the Moon. The other side, the “far-side” of the Moon, is only known to us thanks to lunar space missions. Can you imagine living in a place where it never gets dark? On the other hand, the luminosity of the star is very weak. Life on the TRAPPIST-1 planets might therefore actually look different than on Earth. To obtain the photons needed for photosynthesis (if this process were also to evolve on these planets), life might evolve to favour a large variety of pigments that would enable it to make use of the full range of visible and infrared light – in other words, plants on these planets would appear black to us.

TRAPPIST-1 planets certainly still harbour many mysteries. They are a very good example of how diverse the planets in the universe can be. If we set our imagination free… black trees under a red star in a sky that never sees a sunrise or sunset, powerful volcanoes filling the air with ash and shaking the ground.
Very different from our Earth, isn’t it?

Further reading:

Kislyakova, K., Noack, L. et al. Magma oceans and enhanced volcanism on TRAPPIST-1 planets due to induction heating. Nature Astronomy 1, 878-885 (2017).

Gillon, M. et al. Seven temperate terrestrial planets around the nearby ultracool dwarf star TRAPPIST-1. Nature 542, 456-460 (2017).

Kiang, N. et al. The Color of Plants on Other Worlds. Scientific American, April 2008, 48-55 (2008).

Barr, A.C. et al. Interior Structures and Tidal Heating in the TRAPPIST-1 Planets. Astronomy and Astrophysics, in press.

Luger, R. et al. A seven-planet resonant chain in TRAPPIST-1. Nature Astronomy 1, 0129 (2017).

Scalo, J. et al. M stars as targets for terrestrial exoplanet searches and biosignature detection. Astrobiology 7(1), 85-166 (2007).

Ramirez, R.M. and Kaltenegger, L. The habitable zones of pre-main-sequence stars. The Astrophysical Journal Letters 797(2), L25 (2014).

On the influence of grain size in numerical modelling

The Geodynamics 101 series serves to showcase the diversity of research topics and methods in the geodynamics community in an understandable manner. We welcome all researchers – PhD students to Professors – to introduce their area of expertise in a lighthearted, entertaining manner and touch upon some of the outstanding questions and problems related to their fields. This month Juliane Dannberg from Colorado State University, discusses the influence of grain size and why it is important to consider it in numerical models. Do you want to talk about your research? Contact us!

Juliane Dannberg

When I started my PhD on geodynamic modelling, I was not aware that the size of mineral grains was something I might need to consider in my simulations. To me, grain size was something sedimentologists used to describe rocks, and not something I had to deal with in my computations. In all the modelling papers I had read, if the mineral grain size was even mentioned, it was always assumed to be constant. However, it turns out that these tiny grains can have huge effects.

I first heard about the importance of grain size in a series of lectures given by Uli Faul when I participated in the CIDER summer program in 2014 (in case you’re interested, the lectures were recorded, and are available here and here). Primarily, I learned that for diffusion creep – the deformation mechanism people predominantly use in convection models – the viscosity does not only depend on temperature, but is also strongly controlled by the grain size, and that this grain size varies both in space and in time.

This made me think. If grain size in the mantle changes by several orders of magnitude, and the viscosity scales with the grain size to the power of 3, didn’t that mean that grain size variations could cause huge variations in viscosity that we do not account for in our models? Shouldn’t that have a major effect on mantle dynamics, and on the evolution of mantle plumes and subduction zones? How large are the errors we make by not including this effect? I was perplexed that there was such a major control on viscosity I had not thought about before, and wanted to look into this further myself.

Luckily, the multidisciplinary nature of CIDER meant that there were a number of people who could help me answer my questions. I teamed up with other participants¹ interested in the topic, and from them I learned a lot about deformation in the Earth’s mantle.

How does the mantle deform?
For most of the mantle, the two important deformation mechanisms are diffusion and dislocation creep. In diffusion creep, single defects (or vacancies) – where an atom is missing in the lattice of the crystal – move through this lattice and the crystal deforms. For this type of deformation, the strain rate is generally proportional to the stress, which means that the viscosity does not depend on the strain rate. Many global mantle convection models use this kind of rheology, because it is thought to be dominant in the lower mantle, and also because it means that the problems described using this rheology are usually linear, which makes them easier to solve numerically. Furthermore, this is also the deformation mechanism that depends on the grain size. Usually, it is assumed that the viscosity scales with the grain size to the power of 3:
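
The relation is typically written along these lines (a sketch using the symbols defined just below; prefactor conventions differ between papers):

\[ \eta_{\mathrm{diff}} \propto \frac{d^{\,m}}{A_{\mathrm{diff}}}\, \exp\!\left(\frac{E^{*}_{\mathrm{diff}} + P\,V^{*}_{\mathrm{diff}}}{R\,T}\right) \]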

where η_diff is the diffusion creep viscosity, d is the grain size, m = 3, T is the temperature, P is the pressure, and A_diff, E*_diff, V*_diff and R are constants.

However, if the stress is high (or the grain size is large, Figure 1), dislocation creep is the dominant deformation mechanism. In dislocation creep, linear defects – so-called dislocations – move through the crystal and cause deformation. In this regime, the viscosity depends on the strain rate, but not on grain size. Dislocation creep is generally assumed to be the dominant deformation mechanism in the upper mantle.

Figure 1: Deformation mechanisms in olivine

Why do grains grow or shrink?

In general, from an energy standpoint, larger crystals are more stable than smaller crystals, and so crystals tend to grow over time in a process called Ostwald ripening. The smaller the grains are, and the higher the temperature, the faster the grains grow. On the other hand, the propagation of dislocations through the grains causes so-called dynamic recrystallization, which reduces the grain size if the rock deforms due to dislocation creep. This means that there are always the competing mechanisms of grain growth and grain size reduction, and their balance depends on the dominance of either of the two deformation mechanisms described above – diffusion or dislocation creep:
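
Written out, the balance has roughly this form – a sketch of the kind of evolution law used in, e.g., Dannberg et al. (2017), with the exact prefactors depending on the formulation:

\[ \frac{\mathrm{d}d}{\mathrm{d}t} \;=\; \underbrace{\frac{k_g}{p_g\,d^{\,p_g-1}}\exp\!\left(-\frac{E_g + P\,V_g}{R\,T}\right)}_{\text{grain growth}} \;-\; \underbrace{\frac{\lambda}{c\,\gamma}\, d^{2}\,\left(\tau : \dot{e}_{\mathrm{disl}}\right)}_{\text{dynamic recrystallization}} \]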

The left-hand side of the equation describes the change of the grain size d over time, the first term on the right-hand side is grain growth (depending on grain size and temperature), and the second term describes grain size reduction (depending on the strain rate, the stress, and also on the grain size itself). The parameters p_g, k_g, E_g, V_g, R, λ, c and γ are all constants.

If the flow field does not change, grains will evolve towards an equilibrium grain size, balancing grain growth and grain size reduction. In addition, the grain size may change when minerals cross a phase transition. If the mineral composition does not change upon crossing a phase transition (a polymorphic phase transition such as olivine–wadsleyite), there is almost no effect on grain size. But if the composition of the mineral that is stable after crossing the transition is different from the one before, the mineral breaks down, and the grain size is reduced, probably to the micrometer-scale [Solomatov and Reese, 2008].

And what does that mean for the dynamics of the Earth?

As there is a complex interaction between grain size evolution, mantle rheology and the deformation in the mantle, it is not straightforward to predict how an evolving grain size changes mantle dynamics. But it turned out that there had been a number of modelling studies investigating this effect. And they indeed found that grain size evolution may substantially influence the onset and dynamics of convection [Hall and Parmentier, 2003], the shape of mantle plumes [Korenaga, 2005], mixing of chemical heterogeneities [Solomatov and Reese, 2008], the seismic structure of the mantle [Behn et al., 2009], and the convection regime and thermal history of terrestrial planets [Rozel, 2012].

The long way to a working model…

But even knowing all of these things, there was still a long way to go: implementing these mechanisms in a geodynamic modelling code, testing and debugging the implementation, and applying it to convection in the Earth. This was difficult for several reasons:

Firstly, large viscosity contrasts are already a problem for most solvers we use in our codes, and the strong dependence of viscosity on grain size means that viscosity varies by several orders of magnitude over a very small length scale in the model.

Secondly, considering an evolving grain size makes the problem we want to solve strongly nonlinear: Already in models with a diffusion–dislocation composite rheology and a constant grain size, the viscosity – which is needed to calculate the solution for the velocity – depends on the strain rate, making the momentum conservation equation nonlinear. But an evolving grain size introduces an additional nonlinearity: The viscosity now also depends (nonlinearly) on the grain size, whose evolution in turn depends on the velocity field. In terms of dynamics, this means that there is now another mechanism that can localize deformation. If the strain rate is large, the grain size is reduced due to dynamic recrystallization (as described above). A smaller grain size means a lower viscosity, which again enables a larger strain rate. Due to this feedback loop, velocities can become very high, up to several meters per year, which severely limits the time step size of a numerical model.

Finally, the equation (2) that describes grain size evolution is an ordinary differential equation in itself, and the time scales of grain growth and grain size reduction can be much smaller than changes in the flow field in the mantle. So, in order to model grain size evolution and mantle convection together, one has to come up with a way to separate these scales, and use a different (and probably much smaller) time step to compute how the grain size evolves. I remember that at one point, our models generated mineral grains the size of kilometers (whereas the grain sizes we expect in the mantle are on the order of millimeters), because we had not chosen the time step size properly. And on countless occasions, the code would just crash, because the problem was so nonlinear that a small change in just one parameter or a solution variable had such a large impact that material properties, velocities and pressures went outside of the range of what was physically reasonable.
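
A common way around this time scale mismatch is to sub-cycle the grain-size ODE inside each flow time step. Here is a minimal, hypothetical sketch of that idea in Python – the growth/reduction function and its numbers are stand-ins for illustration, not the actual implementation we used:

import numpy as np

def grain_size_rate(d, T, stress, strain_rate_disl):
    """Placeholder for d(d)/dt = growth - reduction (see the equation above).
    The parameter values here are purely illustrative."""
    growth = 1e-30 / d**2 * np.exp(-3.0e5 / (8.314 * T))   # growth slows as grains get bigger
    reduction = 1e-2 * d**2 * stress * strain_rate_disl     # dynamic recrystallization term
    return growth - reduction

def advance_grain_size(d, dt_flow, T, stress, strain_rate_disl, n_sub=1000):
    """Integrate the grain-size ODE with many small steps per flow time step."""
    dt = dt_flow / n_sub
    for _ in range(n_sub):
        d = max(d + dt * grain_size_rate(d, T, stress, strain_rate_disl), 1e-6)  # keep d positive
    return d

# One mantle-convection time step of ~50 kyr, with the grain size sub-cycled
year = 365.25 * 24 * 3600.0
d_new = advance_grain_size(d=1e-3, dt_flow=5e4 * year, T=1600.0,
                           stress=1e6, strain_rate_disl=1e-15)
print(f"grain size after one flow step: {d_new:.2e} m")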

However, after a lot of debugging, we could finally investigate how an evolving grain size would influence mantle dynamics. But see for yourself below. In an example from our models, plumes become much thinner when reaching the upper mantle, and cause much more vigorous small-scale convection when they interact with the lithosphere. Slabs bend rather than thicken, and accumulate as piles at the core-mantle boundary.

Figure 2: Comparison of plumes and slabs in models with and without grain size evolution. Modified from Dannberg et al., 2017

Of course, there are also many other areas where grain size evolution is important, and many recent studies are concerned with the influence of grain size on the Earth’s dynamic evolution. Dave Bercovici and his collaborators found that grain evolution and damage mechanisms may be a key factor for the onset of plate tectonics [e.g. Bercovici and Ricard, 2014, 2016]: Grain size reduction in shear zones could make them weak enough for subduction initiation. The evolution of grain size may also be a major factor for focusing of melt to mid-ocean ridges [Turner et al., 2017], as it influences how fast the solid matrix can dilate and compact to let melt flow in and out. And if the Large Low Shear Velocity Provinces at the core-mantle boundary are indeed piles of hot material that are stable on long time scales, mineral grains would have a long time to grow and may play a crucial role for pile stability [Schierjott et al., 2017].

So if you do not include grain size evolution in your geodynamic models – which in many cases is just not feasible to do – I hope that you now have a better feeling for how that may affect your model results.

¹ The other researchers in my CIDER group were Zach Eilon, Ulrich Faul, Rene Gassmöller, Raj Moulik and Bob Myhill. I learned a lot about grain size in the mantle in particular from Bob Myhill and Ulrich Faul; I developed the geodynamic models together with Rene Gassmöller, and Zach Eilon and Raj Moulik investigated how the evolving grain size predicted by our models would influence seismic observations.

References:

Solomatov, V. S., & Reese, C. C. (2008). Grain size variations in the Earth's mantle and the evolution of primordial chemical heterogeneities. Journal of Geophysical Research: Solid Earth, 113(B7).

Hall, C. E., & Parmentier, E. M. (2003). Influence of grain size evolution on convective instability. Geochemistry, Geophysics, Geosystems, 4(3).

Korenaga, J. (2005). Firm mantle plumes and the nature of the core–mantle boundary region. Earth and Planetary Science Letters, 232(1), 29-37.

Behn, M. D., Hirth, G., & Elsenbeck, J. R. (2009). Implications of grain size evolution on the seismic structure of the oceanic upper mantle. Earth and Planetary Science Letters, 282(1), 178-189.

Rozel, A. (2012). Impact of grain size on the convection of terrestrial planets. Geochemistry, Geophysics, Geosystems, 13(10).

Dannberg, J., Eilon, Z., Faul, U., Gassmöller, R., Moulik, P., & Myhill, R. (2017). The importance of grain size to mantle dynamics and seismological observations. Geochemistry, Geophysics, Geosystems.

Bercovici, D., & Ricard, Y. (2014). Plate tectonics, damage and inheritance. Nature, 508(7497), 513-516.

Bercovici, D., & Ricard, Y. (2016). Grain-damage hysteresis and plate tectonic states. Physics of the Earth and Planetary Interiors, 253, 31-47.

Turner, A. J., Katz, R. F., Behn, M. D., & Keller, T. (2017). Magmatic focusing to mid-ocean ridges: the role of grain size variability and non-Newtonian viscosity. arXiv preprint arXiv:1706.00609.

Schierjott, J., Rozel, A., & Tackley, P. (2017, April). Toward unraveling a secret of the lower mantle: Detecting and characterizing piles using a grain size-dependent, composite rheology. In EGU General Assembly Conference Abstracts (Vol. 19, p. 17433).