How the EGU works: Experiences as GD Division President

In a new regular feature, Paul Tackley, president of the EGU Geodynamics Division, writes about his role as division president and gives us an insider's view of how EGU works and how it is preparing for the future.

Paul Tackley. Professor at ETH Zürich and EGU geodynamics division president. Pictured here giving an important scientific talk, or maybe at karaoke. Your pick.

Stepping into the role of GD Division President has been a big learning experience about how the European Geosciences Union is run, how members are represented, and how they can participate. Here I convey some impressions, give a quick overview of how EGU functions and the role of division presidents, and mention a few other activities you may not be aware of.

Firstly, I was impressed by just how much of a bottom-up organisation the EGU is – how it is run by members for the benefit of members. EGU employs only 7 full-time staff – very few compared to the 140+ employed by the American Geophysical Union! Thus, most of the organisation is run by volunteers, including the big jobs of President, Vice-President, Treasurer and General Secretary, as well as the presidents of the 22 scientific divisions and the members of eight committees. Of course, so few staff are needed partly because Copernicus (the company), with its 54 employees, handles publishing all the journals and organising the General Assembly (GA).

Secondly, I now appreciate that EGU does a lot more than organising the General Assembly and publishing 18 open access journals. In particular, EGU is active in Education and Outreach, and supports various Topical Events, with each area coordinated by a committee. Additionally, a Diversity and Equality working group was recently set up. I encourage you to read more about these activities on EGU's website.

What must a division president do?  The main tasks are to organise the division’s scientific programme at the General Assembly, and to attend three EGU Council + Programme Committee meetings per year: short ones at the General Assembly, and longer (2-3 days) ones in October near Munich, and in January somewhere warm (such as Nice or Cascais). Practically, this involves sitting in a darkened room for 2-3 days with a lot of other people (there are many other members in addition to the division presidents, including early career scientists) listening to information of variable interest level and discussing and making decisions (voting) when necessary. The EGU Council discusses the full range of EGU activities, so meetings consist of a series of reports: from the president, the treasurer, the various committees, the ECS representative, etc., often with much time spent discussing and voting on new points and developments that arise. Programme Committee meetings are focussed on the General Assembly, both discussing general issues and accomplishing the specific tasks of finalising the list of sessions (October meeting) and the session schedule (January meeting). Throughout all these meetings, I have found the council members to be very collegial and constructive in trying to do what is best for improving EGU activities and making optimal arrangements for the General Assembly (although of course, opinions about what is best can vary). Additionally, Copernicus is continually improving their online tools to make scheduling easier.

The President Alberto Montanari, Programme Committee Chair Susanne Buiter and Copernicus Managing Director Martin Rasmussen, celebrating the EGU General Assembly.

I am happy that there are several other people actively taking care of various tasks in the GD Division. Division officers stimulate sessions in their respective areas of the GA programme and judge the Outstanding Early Career Scientist Award nominations, while judging of the OSPP (Outstanding Student Poster and PICO) awards is organised by an Early Career Scientist (now Maelis Arnould). Our Early Career Scientists are incredibly active, maintaining this blog and the Facebook page, and organising social events at the GA. Finally, the Medal committee decides the winner of the Augustus Love Medal.

Changes are ongoing at EGU! In a multi-year process, the finances are being moved from France to Germany, a complicated process as described by our Treasurer at the GA plenary session. The EGU office (where the 7 staff work) is currently moving from a confined space on the campus of Ludwig Maximilian University of Munich to much larger, modern premises, which will allow some expansion of the staff and provide a suitable space to greet visitors. In the longer term, it may be necessary to move the General Assembly away from Vienna due to the ever-increasing number of attendees!

To conclude, EGU is our organisation and we can contribute to the running of it and the decision-making process, so I encourage you to get involved and to make your views about possible future improvements or other issues known to your representative (i.e. me, or our Early Career Scientist representative Nicholas Schliffke). And if anyone wants to take over as the next GD Division President, (self-)nominations can be submitted starting in September with the vote coming in November!

Programme Committee of EGU, which includes its chair, all the division presidents, the executive board, key people from Copernicus and Programme Committee Officers including the ECS representative and OSPP coordinator.

The geodynamic processes behind the generation of the earliest continents

The earliest continents played a fundamental role in Earth's habitability. However, their generation is still not well understood, and studying it requires an integrated approach combining petrology and geodynamic modelling. In a new study, Piccolo and co-workers developed a method to handle the effects of chemical evolution on geodynamic processes. They show that the production of the earliest felsic crust triggers a self-feeding chain of events that leads to the generation of the first proto-continents.

The Archean Eon (4.0–2.5 Ga) saw the first act of the evolution of life and represents one of the first steps of Earth towards its present state. Only a few remnants of it have reached us, representing different space-time windows that are difficult to reconcile into a single interpretative framework. Many questions are still unsolved, and all the answers may be interconnected. One of them concerns the generation of the continental crust, which played a fundamental role in the development of the biosphere as we know it. Although the present-day sites of its production are mostly well understood, we are still trying to grasp the dominant processes that generated its constituents (e.g. granitoids) during the Archean. Here, the main "hot" topic is the felsic component of the continental crust (i.e. felsic crust/melts) and its generation during the Archean, following the recent results of Piccolo et al. (2019) [1].

Felsic melts cannot be directly produced from a partially molten mantle source; therefore, silica-rich magma generation must occur via a multistage process [2] that comprises at least two main steps: i) extraction of raw mafic materials; ii) differentiation via fractional crystallization or via partial melting of the hydrated basalts. Both processes come at a cost: the generation of felsic melts implies the production of large volumes of complementary dense mafic/ultramafic residuum (the solid fraction of a partially melted rock), which is not observed in the geological record. Such residual rocks have a different chemical composition from the parental magma, being more mafic and thus potentially containing denser minerals. During the Archean, the most widely accepted recipe for producing felsic crust is to bake hydrated meta-basalts at high pressure until they partially melt [3]–[5].

Available Archean geological records are not sufficient to provide a coherent picture. Therefore, it is natural to assist the interpretation of the available data with indirect means such as numerical and petrological modelling. Moreover, it is necessary to employ an integrated approach combining geodynamics and petrology to study the effect of chemical evolution on the overall dynamics of the lithosphere. In Piccolo et al. (2019), the authors accepted this challenge by coupling petrological phase diagrams with geodynamic simulations to handle a simple but petrologically robust chemical evolution of the mafic protolith. They produced a set of genetically linked phase diagrams for both mantle and mafic crustal compositions (exploiting recent advances in thermodynamic modelling of mafic/ultramafic systems; for further details see [5]–[8]) to simulate the chemical evolution as a function of the melt extracted. They then modified their finite element code (MVEP2) to change the compositional phase of each material as a function of the principal mineralogical transformations.
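
For readers curious about what such a coupling can look like in practice, here is a deliberately minimal Python sketch of the general idea: markers carrying a composition, a melt-fraction lookup, and a compositional switch once a depletion threshold is crossed. This is not the actual MVEP2 implementation; the solidi, thresholds and function names are invented for illustration only, and real work uses full thermodynamic phase diagrams (e.g. built from the activity-composition models of [5]–[8]) rather than a linear melting law.

```python
import numpy as np

# Conceptual sketch (NOT the MVEP2 code): each marker carries a composition,
# looks up its melt fraction, and switches compositional phase once a
# depletion threshold is crossed. All numbers are hypothetical.

SOLIDUS = {"fertile_mantle": 1300.0, "depleted_mantle": 1400.0, "hydrated_basalt": 700.0}
MELTING_INTERVAL = 300.0  # assumed solidus-to-liquidus width, degrees C

def melt_fraction(comp, T):
    """Crude linear melt fraction between solidus and liquidus (illustration only)."""
    return float(np.clip((T - SOLIDUS[comp]) / MELTING_INTERVAL, 0.0, 1.0))

def update_marker(marker, step_extraction=0.05, depletion_threshold=0.3):
    """Extract melt from a marker and switch its compositional phase if depleted."""
    phi = melt_fraction(marker["comp"], marker["T"])
    extracted = min(phi, step_extraction)        # melt removed this pseudo-time step
    marker["extracted"] += extracted             # bookkeeping of total extracted melt
    if marker["comp"] == "fertile_mantle" and marker["extracted"] > depletion_threshold:
        marker["comp"] = "depleted_mantle"       # principal mineralogical change
    return marker

marker = {"comp": "fertile_mantle", "T": 1450.0, "extracted": 0.0}
for _ in range(10):                              # a few pseudo-time steps
    marker = update_marker(marker)
print(marker)                                    # ends up as 'depleted_mantle'
```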

The baseline scenario comprises a lithosphere (composed of hydrated effusive basalts, intrusive rocks and a residual lithospheric mantle) underlain by a fertile asthenosphere featuring high temperature and radiogenic heat production consistent with Archean conditions [9], [10]. During the initial stage of the experiments, the asthenosphere self-heats above its solidus, generating mafic melts that are readily extracted and converted into hydrated basalts and dry intrusions. The latter are emplaced within the lower crust and heat the overlying hydrated units, which start to partially melt. This two-step process has two main consequences: the production of the first granitoids, which are emplaced at mid-crustal depths, and the production of a large amount of dense mafic residuum. Such rocks are not common in the geological record, as stated above. Therefore, the inevitable question is: what processes assist in their disposal? The solution to this conundrum lies in the density contrast between the underlying mantle and the complementary mafic residuum. If the density contrast is sufficiently high, any geometrical or thermal perturbation between them can trigger Rayleigh-Taylor drip instabilities. The foundering of such drips into the mantle forces the weak asthenosphere to upwell, generating more mafic melts and thus inducing more felsic crust production and further gravitational instabilities. After a few million years the mantle becomes quiescent, because its temperature drops significantly as it mixes with the dripped mafic material, while the crust becomes progressively more enriched in felsic components. Several parameters were tested, and the main chain of events was reproduced under a wide range of conditions, especially at lower mantle temperatures, suggesting that the generation of felsic crust is associated with mantle cooling events that may buffer the upper-mantle temperature.
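
To get a feel for why these drips develop over a few million years, one can use the standard order-of-magnitude growth time of a Rayleigh-Taylor instability, τ ≈ C μ / (Δρ g h). The short sketch below evaluates it for illustrative values (not taken from the paper); C is a dimensionless prefactor of order 1–10 that depends on geometry, rheology and boundary conditions.

```python
# Order-of-magnitude Rayleigh-Taylor drip timescale; all numbers are
# illustrative assumptions, not values from Piccolo et al. (2019).
g = 9.8              # gravity, m/s^2
drho = 100.0         # density contrast residuum vs. asthenosphere, kg/m^3
h = 10e3             # thickness of the dense residuum layer, m
mu = 1e20            # effective viscosity of the weaker layer, Pa s
C = 10.0             # geometry/rheology-dependent prefactor (order 1-10)

tau_s = C * mu / (drho * g * h)     # characteristic growth time in seconds
tau_myr = tau_s / 3.15e13           # ~3.15e13 seconds per Myr
print(f"drip timescale ~ {tau_myr:.1f} Myr")   # a few Myr for these values
```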

Melt coming from the asthenosphere percolates through the lithosphere, heating it and weakening it both plastically and thermally. Part of the mafic melt is emplaced in the lower crust, while the remainder is erupted. As soon as a critical amount of residuum is reached, Rayleigh-Taylor instabilities (RTIs) are triggered, and asthenosphere fills the space left by the delaminated lithosphere. This results in a feedback between mantle melting and new felsic crust production. From ref. [1].

The removal of the residuum implies the production of more mafic melts, and thus more felsic crust, which inevitably leads again to gravitational instabilities, yielding a self-feeding cycle. Moreover, any vertical tectonic setting fuelled by mantle magmatic processes is suitable for producing proto-continents thanks to these feedbacks. Such findings are consistent with many independent lines of research (e.g., [11], [12], [13]), and the novelty of this study lies in its first attempt to combine realistic petrological constraints with the chemical evolution induced by melt extraction.

References:

1. A. Piccolo, R. M. Palin, B. J. P. Kaus, and R. W. White, “Generation of Earth’s early continents from a relatively cool Archean mantle,” Geochemistry, Geophys. Geosystems, 2019.
2. R. L. Rudnick and S. Gao, “4.1 – Composition of the Continental Crust,” in Treatise on Geochemistry, 2014, pp. 1–51.
3. J.-F. Moyen and G. Stevens, “Experimental constraints on TTG petrogenesis: implications for Archean geodynamics,” Archean Geodyn. Environ., pp. 149–175, 2006.
4. R. P. Rapp, N. Shimizu, and M. D. Norman, “Growth of early continental crust by partial melting of eclogite,” Nature, vol. 425, pp. 605–609, Oct. 2003.
5. R. M. Palin, R. W. White, and E. C. R. Green, “Partial melting of metabasic rocks and the generation of tonalitic–trondhjemitic–granodioritic (TTG) crust in the Archaean: Constraints from phase equilibrium modelling,” Precambrian Res., vol. 287, pp. 73–90, 2016.
6. R. M. Palin, R. W. White, E. C. R. Green, J. F. A. Diener, R. Powell, and T. J. B. Holland, “High-grade metamorphism and partial melting of basic and intermediate rocks,” J. Metamorph. Geol., vol. 34, no. 9, pp. 871–892, 2016.
7. E. C. R. Green, R. W. White, J. F. A. Diener, R. Powell, T. J. B. Holland, and R. M. Palin, “Activity–composition relations for the calculation of partial melting equilibria in metabasic rocks,” J. Metamorph. Geol., vol. 34, no. 9, pp. 845–869, 2016.
8. R. W. White, R. M. Palin, and E. C. R. Green, “High-grade metamorphism and partial melting in Archean composite grey gneiss complexes,” J. Metamorph. Geol., vol. 35, no. 2, pp. 181–195, 2017.
9. C. T. Herzberg, K. C. Condie, and J. Korenaga, “Thermal history of the Earth and its petrological expression,” Earth Planet. Sci. Lett., vol. 292, no. 1–2, pp. 79–88, 2010.
10. J. Ganne and X. Feng, “Primary magmas and mantle temperatures through time,” Geochemistry, Geophys. Geosystems, vol. 18, no. 3, pp. 872–888, Feb. 2017.
11. J. H. Bédard, “A catalytic delamination-driven model for coupled genesis of Archaean crust and sub-continental lithospheric mantle,” Geochim. Cosmochim. Acta, vol. 70, no. 5, pp. 1188–1214, 2006.
12. T. E. Zegers and P. E. van Keken, “Middle Archean continent formation by crustal delamination,” Geology, vol. 29, no. 12, pp. 1083–1086, 2001.
13. E. Sizova, T. V. Gerya, K. Stüwe, and M. Brown, “Generation of felsic crust in the Archean: A geodynamic modeling perspective,” Precambrian Res., vol. 271, pp. 198–224, 2015.

An industrial placement as a geodynamicist

After years of trying to get a PhD, publishing papers, networking with professors, and trying to land that one, elusive, permanent job in science, it can be quite easy to forget that you actually do have career options outside of academia. To get a little taste of this, Nico Schliffke, PhD student in geodynamics at Durham University, tries out industry life for a few weeks!

When approaching the final stages of a PhD, many students reconsider whether they want to stay in academia or prefer to step over to industry or other non-academic jobs. This is surely not a simple decision to make, as it could strongly shape your future. In this blog post, I would like to report on my industrial placement experience during my PhD and share a few thoughts on the topic.

The taste of industry life came as an opportunity within the frame of my PhD project. Split into two terms, I spent four weeks at a medium-sized company developing optical imaging techniques (both software and equipment) to measure flow fields and deformation. The branch I worked in was “digital image correlation” (DIC), which measures strain on given surfaces purely by comparing successive images of an object (see figure below). This technique is used in both industry (crash tests, quality assessments, etc.) and academia (analogue experiments, wind tunnels, engineering, etc.), and has the substantial advantage of measuring physical quantities precisely, without attaching anything to the material or affecting the dynamic processes. DIC is not directly related to or used in my PhD (I do numerical modelling of subduction zones and continental collision), but surprisingly enough I was able to contribute more than expected – but more on that later.
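
To illustrate the core idea of DIC, here is a toy Python example (my own sketch, not the company's software) that recovers a known rigid shift of a synthetic speckle pattern from the peak of an FFT-based cross-correlation; real DIC packages add sub-pixel matching, windowing of many subsets and full strain-field computation.

```python
import numpy as np

# Toy digital image correlation: find the shift of a speckle pattern
# between two frames via FFT-based cross-correlation.
rng = np.random.default_rng(0)
frame0 = rng.random((128, 128))                 # synthetic speckle pattern
shift = (3, -5)                                 # imposed displacement in pixels
frame1 = np.roll(frame0, shift, axis=(0, 1))    # "deformed" (here rigidly shifted) frame

# Cross-correlation via FFT; the peak location gives the displacement
xcorr = np.fft.ifft2(np.fft.fft2(frame0) * np.conj(np.fft.fft2(frame1))).real
peak = np.unravel_index(np.argmax(xcorr), xcorr.shape)

# Map the peak position to a signed shift (account for FFT wrap-around)
dy = peak[0] if peak[0] <= frame0.shape[0] // 2 else peak[0] - frame0.shape[0]
dx = peak[1] if peak[1] <= frame0.shape[1] // 2 else peak[1] - frame0.shape[1]
print(f"recovered shift: ({-dy}, {-dx})")       # prints the imposed (3, -5)
```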

Basic principle of ‘digital image correlation’. A pattern on a digital image is traced through time on successive images to calculate displacements and strain rates.
Credit: LaVision

The specific project I worked on was inspired by the analogue tectonics lab at GFZ Potsdam, which uses DIC measuring systems to quantify the deformation of their sandbox experiments. Typical earthquake experiments like the one in the figure below span periods of a few minutes to several days, during which individual earthquakes occur within a couple of milliseconds. The experiment is continuously recorded by cameras to both monitor deformation visually and quantify it using the optical imaging technique developed by my host company. To resolve the deformation related to individual earthquakes, high imaging rates are required, which in turn produce a vast amount of data (up to 2 TB per experiment). However, only a small fraction (max. 5%) of the entire dataset is of interest, as there is hardly any deformation during interseismic periods. The project I was involved in tried to tackle the issue of unnecessarily cluttered hard discs: the recording frequency should be linked to a measurable characteristic within the experiment (e.g. displacement velocities in these specific experiments) and controlled by the DIC software.

Setup of the analogue experiment to model earthquakes in subduction zones (courtesy of Michael Rudolf). Cameras above the experiment measure deformation and strain rates by tracking patterns on the surface created by the contrast of black rubber and white sugar.

My general task during the internship was to develop this idea and the required software. We finally developed a ‘live-extensometer’ that calculates the displacement between two given points of an image during recording and links its value to the camera’s recording frequency. Restricting high imaging rates to the large (and fast) displacements of earthquakes should thus reduce the total amount of data acquired for a typical earthquake experiment by about 95%. However, we needed an actual experiment to verify this. So, I met up with the team at GFZ to test the developed feature.
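
Conceptually the control logic is simple, as the hypothetical Python sketch below shows: track the distance between two points from frame to frame and request the high recording frequency only when that distance changes fast. The real feature lives inside the commercial DIC software; the function names, thresholds and frame rates here are made up for illustration.

```python
# Generic sketch of the 'live-extensometer' control logic (illustrative only).
HIGH_RATE_HZ = 1000.0      # imaging rate during fast coseismic slip (assumed)
LOW_RATE_HZ = 1.0          # imaging rate during quiet interseismic periods (assumed)
VELOCITY_THRESHOLD = 0.05  # change in length per frame interval, mm (assumed)

def extensometer_length(point_a, point_b):
    """Distance between two tracked points (in mm) on the current image."""
    return ((point_a[0] - point_b[0]) ** 2 + (point_a[1] - point_b[1]) ** 2) ** 0.5

def choose_recording_rate(prev_length, curr_length):
    """Switch to the high frame rate only when the extensometer moves fast."""
    if abs(curr_length - prev_length) > VELOCITY_THRESHOLD:
        return HIGH_RATE_HZ
    return LOW_RATE_HZ

# Toy usage: a slow interseismic phase followed by a sudden coseismic jump
lengths = [100.00, 100.01, 100.01, 100.02, 100.60, 100.61]
for prev, curr in zip(lengths, lengths[1:]):
    print(curr, "->", choose_recording_rate(prev, curr), "Hz")
```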

The main experiment the GFZ team had in mind is sketched in the figure above: a conveyor belt modelling a subducting slab continuously builds up strain in the ‘orogenic wedge’, which is released by earthquakes leading to surface deformation. Cameras above the experiment monitor the surface while the software computes strain rates and displacements (see figure below). The newly developed feature of changing the recording frequency during the experiment depending on slip rates was included and worked surprisingly well. Yet freshly programmed software is seldom perfect: minor issues and bugs crept up during the experiments. My final contribution during the internship was to report these problems back to the company to be fixed.

Displacement measured by ‘digital image correlation’ during an earthquake lasting ~5 ms (courtesy of Mathias Rosenau).

My geodynamical background allowed me to contribute to various fields within the company and resulted in a range of individual tasks throughout the internship: coding experience helped with discussing ideal software implementations and with testing the latest software on small (physical) experiments. My knowledge of various deformation mechanisms and of geosciences in general, with its numerous subdisciplines and methods, provided a solid basis for finding further applications for the developed software within academia, but also in industry. Last but not least, pursuing my own large project (my PhD) strongly facilitated discussing possible future development steps.

The atmosphere at the company was in general very pleasant and similar to what I experienced at university: a relaxed way of interacting, paired with discussions on how to improve products or on new techniques that might be applicable to a problem. To stay competitive, the company needs to keep developing its products, which requires a large amount of research, development and innovative ideas. Meetings to discuss further improvements of certain products were thus scheduled on a (nearly) daily basis. On the one hand this adds pressure to get work done as quickly as possible, but working on a project as a team spanning numerous areas of expertise is also highly exciting.

This internship helped reveal the variety of jobs that geodynamicists can have in industry, besides the ‘classical’ companies linked to exploration, tunnel engineering or geological surveys. The skill set acquired in a geodynamics PhD (coding, modelling, combining numerics, physics, and geosciences) makes for a very flexible and adaptable employee, which is attractive to companies that are so specialised that there is (nearly) no classical education for their field at university level. Jobs at small to medium-sized companies are often harder to find, but it is just as difficult for the companies to find suitable candidates for their open positions. Hence, it may be worth searching in depth for a suitable job if you are considering stepping out of academia, and maybe even out of geoscience as well.

To PhD students who are hesitant about whether to stay in academia or move to industry, I would advise doing such a short internship with a company to get a taste of ‘the other side’. During a PhD, we get to know academic life thoroughly, but industry mostly remains alien. Besides giving a good impression of daily life at a company and of how you can contribute, an industry internship might also widen your perspective on which areas might be relevant to you, your methodology and your PhD topic. In total, this internship was definitely a valuable experience for me and will help when deciding: academia or industry?


Here are a few links for more information:
Host company
Digital Image Correlation
TecLab at GFZ Potsdam
Previous EGU blog post interviews of former geoscientists


Thoughts on geological modelling: an analogue perspective

In geodynamics we study the dynamics of the Earth (and other planets). We ground our studies in as much data as possible; however, we are constrained by the fact that pretty much all direct information we can collect from the interior of the Earth only shows its present-day state. The surface rock record gives us a glimpse into the past dynamics and evolution of our planet, but this record gets sparser as we go back in time. This is why it is common to use modelling in geodynamics to fill this knowledge gap. There are different types of modelling, and this week João Duarte writes about the importance of analogue modelling.

João Duarte. Researcher at Instituto Dom Luiz and Invited Professor at the Geology Department, Faculty of Sciences of the University of Lisbon. Adjunct Researcher at Monash University.

The first time I went to EGU, in 2004, I presented a poster with some new marine geology data and a few sets of analogue models. I was building accretionary wedges in sandboxes. At the time, I was in the third year of my bachelor’s degree and I was completely overwhelmed by the magnitude of the conference. It was incredible to see the faces of all those scientists whose articles I used to read. But one thing impressed me the most: Earth Sciences were consolidating as a modern, theory-based science. The efforts to develop an integrated dynamic theory of plate tectonics as part of mantle convection were obvious. The new emergent numerical models looked incredible, with all those colours, complex rheologies and stunning visualizations that allowed us to “see” stresses, temperature gradients and non-linear viscosities. I was amazed.

Analogue modelling was at a relative peak in 2004; however, some also anticipated that it would quickly disappear (and indeed several analogue labs have closed since). It was with this mindset that I later did the experiments for my PhD, which I finished in 2012 (Duarte et al., 2011). But I was fortunate. My supervisors, Filipe Rosas and Pedro Terrinha, took me to state-of-the-art labs, namely Toronto and Montpellier (led at the time by Sandy Cruden and Jacques Malavieille, respectively), and I started to develop a passion for this kind of model. When I moved to Monash for a post-doc position, in 2011, this turned out to be a great advantage. There, modellers such as Wouter Schellart, Louis Moresi, Fabio Capitanio, David Boutelier and Sandy Cruden (yes, I met Sandy again at Monash) were using analogue models to benchmark numerical models. Why? Because even though numerical models often produce spectacular results, they might not be physically consistent. And there is only one way to get rid of this problem: make sure that whatever numerical code we are using can reproduce simple experiments that we can run in a lab. The classical example is the sinking of a negatively buoyant sphere in a viscous medium.
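
For that classical benchmark there is an analytical answer to compare against: Stokes’ law gives the terminal velocity of the sphere, v = 2 Δρ g r² / (9 μ). The snippet below evaluates it for illustrative laboratory-scale values (my own numbers, not from the cited studies); a numerical code or a lab measurement should converge towards this value, up to corrections for finite container size and inertia.

```python
# Stokes sinking velocity of a sphere in a viscous fluid: the classical
# benchmark mentioned above. Numbers are illustrative lab-scale values.
g = 9.8            # gravity, m/s^2
r = 0.01           # sphere radius, m (1 cm)
drho = 300.0       # density contrast sphere minus fluid, kg/m^3
mu = 100.0         # fluid viscosity, Pa s (roughly a stiff syrup)

v_stokes = 2.0 * drho * g * r**2 / (9.0 * mu)   # terminal velocity, m/s
print(f"Stokes velocity: {v_stokes * 1000:.3f} mm/s")
```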

Sandbox analogue model of an accretionary wedge, part of the same experiment as shown in the header figure. Here, a sliced section, cut after wetting, is shown. University of Lisbon, 2009. Experiments published in Duarte et al. (2011).

That was what we were doing at Monash. I worked with Wouter Schellart on the development of subduction experiments with an overriding plate, which advanced step by step in both analogue and numerical schemes (see e.g. Duarte et al., 2013 and Chen et al., 2015, 2016 for the analogue models, and Schellart and Moresi, 2013 for numerical equivalents). The tricky bit was that we wanted self-consistent dynamic experiments in which we prescribed the forces (the negative buoyancy of the slab, the viscosity of the upper mantle, etc.) and let the kinematics (i.e. the velocities) be an emergent phenomenon. So, no lateral push or active kinematic boundary conditions were applied to the plates. This is because we now recognize that, in general, it is the slab pull at subduction zones that mostly drives the plates, and not the other way around. Therefore, if we want to investigate the fundamental physics and dynamics of subduction zones, we need to use self-consistent models (both analogue and numerical). In order to carry out these models, we had to develop a new rheology for the subduction interface, which is a complex problem in both the analogue and the numerical approaches (Duarte et al. 2013, 2014, 2015). But this is another very long story that would warrant a publication by itself.

Analogue models of subduction with an overriding plate and an interplate rheology. Monash University, 2012. Adapted from Duarte et al. (2013)

But what is analogue modelling all about? Basically, analogue models are scaled models that we can build in the laboratory using analogue materials (such as sand) which, at the scale of our models, have physical properties similar to those of natural materials (such as brittle rocks). But, as is widely known, under certain circumstances (at large time and space scales) rocks behave like fluids, and for that we use analogue fluids, such as silicone putties, glucose and honey. We can also use fluids to simulate the interaction between subduction zones and mantle plumes in a fluid reservoir (see the figures below and the links to videos of scaled experiments using three different fluids to study slab-plume interaction; Meriaux et al., 2015a, 2015b, 2016). These are generally called geodynamic analogue models.

End of a slab-plume experiment in the upper mantle (see below). The tank is partially filled with glucose. The slab (lying at the analogue 660 km discontinuity) is made of silicone mixed with iron powder. The plume is made of a water solution of glucose dyed with a red colorant. And that’s me on the left. Monash University, 2014.

I usually consider two main branches of analogue models. The first, which is the one mostly used by geologists, was started by Sir James Hall (1761–1832), who squeezed layers of clay to reproduce the patterns of folded rocks that he had observed in nature. This method was later improved by King Hubbert (1937), who laid the ground for the development of the field by developing a theory of scaling of analogue models applied to geological processes.
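
Hubbert’s scaling idea can be made concrete with a quick back-of-the-envelope calculation: the ratios of length, density and gravity between model and nature fix the stress ratio (σ* = ρ* g* L*), and for viscous materials the viscosity ratio then fixes the time ratio (t* = μ*/σ*). The numbers in the sketch below are typical of sand/silicone experiments but are illustrative assumptions only, not those of any specific published study.

```python
# Back-of-the-envelope model/nature scaling in the spirit of Hubbert (1937).
# Ratios are model value divided by nature value; numbers are illustrative.
L_star   = 1.0e-2 / 1.0e4      # 1 cm in the lab represents 10 km in nature
rho_star = 1400.0 / 3300.0     # silicone putty vs. lithospheric rock density
g_star   = 1.0                 # experiments run at normal gravity
mu_star  = 1.0e4 / 1.0e22      # silicone putty vs. ductile lithosphere viscosity

sigma_star = rho_star * g_star * L_star   # stress ratio, sigma* = rho* g* L*
t_star     = mu_star / sigma_star         # time ratio for viscous flow, t* = mu*/sigma*

one_hour_in_lab = 3600.0                            # seconds
equivalent_nature_time = one_hour_in_lab / t_star   # seconds in nature
print(f"stress ratio ~ {sigma_star:.1e}")
print(f"1 h in the lab ~ {equivalent_nature_time / 3.15e13:.0f} Myr in nature")
```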

The other branch is probably as old as humankind. It began when we started to manipulate objects and use them to understand basic empirical laws, such as the fact that objects always fall. When Galileo used small spheres on inclined surfaces to extract the physical laws that describe the movement of bodies, from rocks to planets, he was in a certain way using analogue models. He understood that many laws are scale invariant. Still today, these techniques are widely used by physicists and engineers to understand, for example, the aerodynamics of airplanes, the stability of bridges, the dynamics of rivers or the resistance of dams. They use scaled models that reproduce, at suitable laboratory scales, the objects and processes that they are investigating.

What we did at Monash was a mixture of both approaches, though we were less interested in exactly reproducing nature from a purely geometric and kinematic point of view and more interested in understanding the physics of the object we were investigating: subduction zones. Therefore, we had to guarantee that we were using the correct dynamical approach in order to be able to extract generic physical empirical laws, hoping that these laws would provide us with some insight into the dynamics of natural subduction zones. These empirical laws could readily be incorporated in numerical models, which would then help explore more efficiently the space of the controlling parameters of the system.

Slab-Plume interaction in the upper mantle. Experiments published in Meriaux et al. (2015a, 2015b).

I want to finish with a question that I believe concerns all of us: are there still advantages in using analogue models? Yes, I believe so! One of the most important advantages is that analogue models are always three-dimensional and high-resolution. Furthermore, they allow good tracking of the strain and of how it occurs in discontinuous media, for example when investigating the localization of deformation or the propagation of cracks. Numerical schemes still struggle with these problems. It is very difficult to have an efficient code that can deal simultaneously with very high resolution and large-scale three-dimensional problems, as is required to investigate the process of subduction. Nevertheless, numerical models are of great help when it comes to tracking stresses and modelling complex rheologies and temperature gradients. To sum up: nowadays, we recognize that certain problems can only be tackled using self-consistent dynamic models that model the whole system in three dimensions, capturing different scales. For this, the combination of analogue and numerical models is still one of the most powerful tools we have. An interesting example of a field in which a combined approach is being used is the fascinating investigation of the seismic cycle (for example, see here).

Links to videos:

VIDEO 1: https://www.youtube.com/watch?v=U1TXC2XPbFA&feature=youtu.be
(Subduction with an overriding plate and an interplate rheology. Duarte et al., 2013)

VIDEO 2: https://www.youtube.com/watch?v=n5P2TzS6h_0&feature=youtu.be
(Slab-plume interaction at mantle scale. Side-view of the experiment on the top, and top-view of the experiment on the bottom. Meriaux et al., 2016)
References:

Chen, Z., Schellart, W.P., Strak, V., Duarte, J.C., 2016. Does subduction-induced mantle flow drive backarc extension? Earth and Planetary Science Letters 441, 200-210. https://doi.org/10.1016/j.epsl.2016.02.027

Chen, Z., Schellart, W.P., Duarte, J.C., 2015. Overriding plate deformation and variability of forearc deformation during subduction: Insight from geodynamic models and application to the Calabria subduction zone. Geochemistry, Geophysics, Geosystems 16, 3697–3715. DOI: 10.1002/2015GC005958

Duarte, J.C., Schellart, W.P., Cruden, A.R., 2015. How weak is the subduction zone interface? Geophysical Research Letters 41, 1-10. DOI: 10.1002/2014GL062876

Duarte, J.C., Schellart, W.P., Cruden, A.R., 2014. Rheology of petrolatum – paraffin oil mixtures: applications to analogue modelling of geological processes. Journal of Structural Geology 63, 1-11. https://doi.org/10.1016/j.jsg.2014.02.004

Duarte, J.C., Schellart, W.P., Cruden, A.R., 2013. Three-dimensional dynamic laboratory models of subduction with an overriding plate and variable interplate rheology. Geophysical Journal International 195, 47-66. https://doi.org/10.1093/gji/ggt257

Duarte, J.C., Rosas, F.M., Terrinha, P., Gutscher, M.-A., Malavieille, J., Silva, S., Matias, L., 2011. Thrust–wrench interference tectonics in the Gulf of Cadiz (Africa–Iberia plate boundary in the North-East Atlantic): Insights from analog models. Marine Geology 289, 135–149. https://doi.org/10.1016/j.margeo.2011.09.014

Hubbert, M.K., 1937. Theory of scale models as applied to the study of geologic structures. GSA Bulletin 48, 1459-1520. https://doi.org/10.1130/GSAB-48-1459

Meriaux, C., Meriaux, A-S., Schellart, W.P., Duarte, J.C., Duarte, S.S., Chen, Z., 2016. Mantle plumes in the vicinity of subduction zones. Earth and Planetary Science Letters 454, 166-177. https://doi.org/10.1016/j.epsl.2016.09.001

Mériaux, C.A., Duarte, J.C., Schellart, W.P., Mériaux, A-S., 2015. A two-way interaction between the Hainan plume and the Manila subduction zone. Geophysical Research Letters 42, 5796–5802. DOI: 10.1002/2015GL064313

Meriaux, C.A., Duarte, J.C., Duarte, S., Chen, Z., Rosas, F.M., Mata, J., Schellart, W.P., and Terrinha, P. 2015. Capture of the Canary mantle plume material by the Gibraltar arc mantle wedge during slab rollback. Geophysical Journal International 201, 1717-1721. https://doi.org/10.1093/gji/ggv120

Schellart, W.P., Moresi, L., 2013. A new driving mechanism for backarc extension and backarc shortening through slab sinking induced toroidal and poloidal mantle flow: Results from dynamic subduction models with an overriding plate. Journal of Geophysical Research: Solid Earth 118, 3221-3248. https://doi.org/10.1002/jgrb.50173