The Sassy Scientist – Publishing Lulls

Every week, The Sassy Scientist answers a question on geodynamics, related topics, academic life, the universe or anything in between with a healthy dose of sarcasm. Do you have a question for The Sassy Scientist? Submit your question here or leave a comment below.

After an overwhelmingly frustrating waiting period, caused first by an editor who went AWOL (leaving an unresponsive email account behind) and then by my interlude on the earthquake cycle, Candide furiously asks:


My paper is taking forever to be published. Do you know any way to speed up the process?


Dear Candide,

Let’s suppose I’m an editor for some journal and your manuscript has landed on my desk. After finally finding some reviewers who agreed to take on the job, I still have another twenty papers to shepherd through the system. Whilst waiting for the reviews to come back, I’ve got journal administrators on my back because my rejection rate is way too high; I’ve been turning down a lot of papers, mostly for a lack of … let’s call it “improvement of scientific understanding”. Do you think you’re the only one who isn’t particularly thrilled by the peer review process? Join the club. Disgruntled by the duration of the process? Write a better paper. Irritated by the limited response? Yes, you’re the only one sending me an email. The only one. You’re unique in the world.

I got a little side-tracked there. Let’s regroup. Sending a myriad of emails to editors and journal administrators will not always result in a positive outcome for you. Unfortunately, there simply isn’t a proper way to speed up the process, other than submitting an absolute pearl of a paper in the first place. As I am sure you thought you did. Even though job security for early career scientists is … let’s say not great, and productivity is a major factor in hiring decisions, there are no widespread, nor outspoken, special conditions for papers submitted by early career scientists. Since it is fair to assume that the priority of shuffling these papers through the system is fairly low (other scientists who depend on publication lists to obtain grants and such also want their papers published asap), it is not unreasonable to expect that official journal guidelines will not change on this principle. Yours is just another paper sliding somewhere into the never-ending pile. You’re then left dependent on the editor-at-large, and exercising patience is the only thing left to do. Whilst conferences, and especially workshops, have recognised the need for additional focus on (mostly) PhD students, we’re left hanging by the journals. Despite the background promise that this sour taste of a lack of sympathy for early career scientists will one day be washed away by the sweet taste of expeditious peer review and knowledgeable policies, a pragmatic solution remains elusive. I wonder whether the EGU journals cannot take the lead on improving this…

Yours truly,

The Sassy Scientist

PS: This post was written whilst waiting on my own editor to respond… Thanks Iris. Bosses, right?

Enigmas at depth

Dr. Marcel Thielmann

The Geodynamics 101 series serves to showcase the diversity of research topics and methods in the geodynamics community in an understandable manner. In this week’s Geodynamics 101 post, Marcel Thielmann, Senior Researcher at the University of Bayreuth, discusses the possible mechanisms by which ductile deformation at great depth can cause deep earthquakes.

Earthquakes are one of the expressions of plate tectonics that everybody seems to be familiar with. When I started studying geophysics, people used to ask me what exactly I was studying. As soon as I mentioned earthquakes, I usually got a knowing nod and no further questions were asked (the same goes for volcanoes, but that’s a topic for another day).


Figure 1: Global hypocentre distribution of earthquakes with a magnitude Mw>5 in the ISC catalogue for the interval 1960-2013. The x-axis has been truncated for better visibility.


Most earthquakes occur at the boundaries of tectonic plates, where rock breaks due to the forces originating from the plates’ relative movement. In 1928, Kiyoo Wadati discovered earthquakes occurring at depths greater than 60 km, which had previously been thought impossible. Today, we know that these earthquakes are not that extraordinary: about one out of four earthquakes observed on Earth occurs at depths larger than 60 km. At such depths, the pressure inside the Earth reaches values of about 2 GPa and more. Laboratory experiments have shown that at these pressures, rocks do not deform by breaking, but rather by ductile creep, like putty. This kind of deformation should not produce any earthquakes. So, 90 years after their discovery, the question still remains: what causes deep earthquakes?
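
As a rough sanity check on those numbers, the pressure of the overlying rock column can be estimated with the lithostatic relation P = ρgz. Here is a minimal Python sketch (assuming, purely for illustration, a constant average overburden density; in the real Earth, density increases with depth):

```python
# Back-of-envelope lithostatic pressure, P = rho * g * z.
# The constant average density is an assumption for illustration.

RHO = 3300.0   # average overburden density, kg/m^3 (assumed)
G   = 9.81     # gravitational acceleration, m/s^2

def lithostatic_pressure_gpa(depth_km):
    """Pressure of the overlying rock column, in GPa, at a depth given in km."""
    return RHO * G * depth_km * 1e3 / 1e9

for depth in (60, 100, 300, 660):
    print(f"{depth:4d} km -> {lithostatic_pressure_gpa(depth):5.1f} GPa")
# 60 km -> ~1.9 GPa; near the 660 km discontinuity, where the deepest
# earthquakes occur, the estimate exceeds 20 GPa.
```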

How do rocks fail at these high pressures?


Figure 2: Schematic view of the three proposed ductile failure mechanisms.

As rocks are transported to greater depths, the minerals that make them up can undergo phase transformations. Due to these transformations, two things may happen: (1) Water previously stored in the minerals is released. This release may trigger earthquakes, as the freed water acts against the pressure of the surrounding rock in a mechanism called dehydration embrittlement (Green and Houston, 1995; Frohlich, 2006). (2) The phase transition produces a fine-grained rock that is easier to deform. If enough of this weak material is produced, rock failure occurs in a process called transformational faulting (Green and Houston, 1995; Ferrand et al., 2017). Besides these two mechanisms, a third one called thermal runaway has been thrown into the ring (Hobbs et al., 1986; Ogawa, 1987). This mechanism is a result of shear heating, the generation of heat inside a deforming rock. If heat is generated faster than it can be transported away, the temperature inside the rock will keep increasing and ultimately destabilize it, thus causing an earthquake.
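
To make the shear-heating feedback concrete, here is a deliberately minimal, zero-dimensional sketch in Python. All parameter values are illustrative assumptions (real treatments, e.g. Thielmann, 2018, solve the full thermo-mechanical problem); the point is only the qualitative behaviour: warmer rock creeps faster, faster creep releases more heat, and conduction either keeps up or loses.

```python
import math

# Toy 0-D energy balance for a shear zone deforming at constant stress.
# All numbers below are illustrative assumptions, not measured values.
E_ACT  = 500e3    # activation energy, J/mol (olivine-like, assumed)
R      = 8.314    # gas constant, J/(mol K)
RHO_CP = 3.3e6    # volumetric heat capacity rho*c_p, J/(m^3 K)
KAPPA  = 1e-6     # thermal diffusivity, m^2/s
H      = 100.0    # shear-zone half-width, m (assumed)
T0     = 900.0    # ambient temperature, K
TAU_REF  = 300e6  # reference stress, Pa (assumed)
EDOT_REF = 1e-12  # strain rate at (TAU_REF, T0), 1/s (assumed)

def strain_rate(tau, T):
    """Linear-viscous Arrhenius creep law, anchored at the reference state."""
    return EDOT_REF * (tau / TAU_REF) * math.exp((E_ACT / R) * (1.0 / T0 - 1.0 / T))

def shear_zone(tau, years=20_000):
    """March the heating/cooling balance forward in 1-year steps."""
    T, dt = T0, 3.15e7                                 # K, seconds per step
    for year in range(years):
        heating = tau * strain_rate(tau, T) / RHO_CP   # K/s from shear heating
        cooling = KAPPA * (T - T0) / H**2              # K/s lost to conduction
        T += (heating - cooling) * dt
        if T > T0 + 500.0:                             # crude runaway criterion
            return f"tau = {tau/1e6:4.0f} MPa: thermal runaway after ~{year} years"
    return f"tau = {tau/1e6:4.0f} MPa: stable, T levels off near {T:.0f} K"

print(shear_zone(300e6))    # moderate stress: conduction wins, no earthquake
print(shear_zone(1000e6))   # very high stress: the feedback runs away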

The Wind River earthquake

While most of the observed deep earthquakes occur in subduction zones, where one tectonic plate descends beneath another, some occur far from them. One such earthquake hit the Wind River range in Wyoming in 2013 with a magnitude of Mw 4.7 (Frohlich et al., 2015; Craig and Heyburn, 2015). This earthquake is enigmatic not only due to its depth of 75 km (making it the second deepest earthquake recorded in such a stable continental region), but also because the Wind River area is considered to be “seismically quiet”. The earthquake’s location is far from any plate boundary, the closest tectonic feature being the Yellowstone supervolcano more than 200 km away. Ever since it occurred, the cause of this earthquake has been a matter of debate, with some scientists preferring a purely brittle origin (Craig and Heyburn, 2015), while others argue for a ductile mechanism (Prieto et al., 2017).

Dehydration embrittlement seems to be an unlikely candidate, since the earthquake is located far away from any subduction zone. How could fluids get down to those depths if not by subduction? Transformational faulting also seems to be unlikely, since this would require a phase transition to take place. The Wind River earthquake occurred in the continental mantle lithosphere, where we would not expect any major phase transitions. Thermal runaway may be a candidate, but studies have shown that very high stresses are required to make this mechanism work, stresses that are very hard to achieve in the Earth.

However, there may be a way out: grain size assisted thermal runaway. Oh no, yet another one, you might think. But fear not: this mechanism is essentially the same as the “classical” thermal runaway, just with the effect of small grains included. The consequences of this effect, however, are by no means small, as it significantly reduces the stresses required for thermal runaway. Numerical models of this process at the conditions of the Wind River earthquake indicate that it may indeed be a viable mechanism for generating this earthquake (Thielmann, 2018). However, these models also show that rock deformation has to be sufficiently fast (about 100 times faster than what is commonly assumed) in order to allow for earthquake generation.


Figure 3: Location and mantle structure of the 2013 Wind River earthquake. Inset: Location within the north-western US. Black points represent earthquakes larger than Mw 4.5 from the NEIC catalogue. The red circle indicates the location of the Wind River earthquake. The red box denotes the region of the main figure. Main Figure: Seismic velocity structure and hypocentre location. Tomographic data is taken from Shen et al. (2013). Colours denote seismic velocities, with blue colours indicating faster and red colours slower velocities. Fast seismic velocities are commonly associated with colder and denser material. The red spheres denote the location of the hypo- and epicentre. The grey isosurface at 4.4 km/s delineates the dense body extending to larger depths.

So we have now shifted the question from “How could fluids get down to those depths if not by subduction?” to “How can rocks deform that fast at those depths?” Here, seismology may come to the rescue: tomographic models of the north-western United States show that the Wind River earthquake lies directly at the transition between two regions with strongly varying seismic wave speeds (Shen et al., 2013; Wang et al., 2016). Fast wave speeds are commonly seen as an indicator of cold material, while slow wave speeds indicate warm material. 3D seismic tomographies such as the one from Shen et al. (2013) show that the 2013 Wind River earthquake occurred in a region where the continental lithosphere may be detaching in the form of a drip (Wang et al., 2016). In such tectonic environments, deformation rates may reach the values needed to initiate grain size assisted thermal runaway (Lorinczi and Houseman, 2009).

Does this now answer all the questions we have on the Wind River earthquake and deep earthquakes in general? Certainly not. The example given above was just a single instance where the combined information from seismology, laboratory experiments and numerical modelling may help us find an answer. We still have to keep in mind G.E.P. Box’s famous expression: “Essentially, all models are wrong, but some are useful”. It is certain that deep earthquakes contain a wealth of information that remains to be unlocked. The following quote by Heidi Houston (2015) points the way:

Integration of seismological, laboratory, and modelling effort is needed to bridge the stubborn gap between source properties, which are extracted under strong assumptions and possess substantial intrinsic variability, and physical mechanisms of rupture generation, which are as yet neither well understood nor well constrained. (H. Houston)


References

Craig, T. J., and R. Heyburn (2015), An enigmatic earthquake in the continental mantle lithosphere of stable North America, Earth Plan. Sc. Lett., 425, 12–23, doi:10.1016/j.epsl.2015.05.048.

Ferrand, T., N. Hilairet, S. Incel, D. Deldicque, L. Labrousse, J. Gasc, J. Renner, Y. Wang, H. W. Green II, and A. Schubnel (2017), Dehydration-driven stress transfer triggers intermediate-depth earthquakes, Nat. Commun., 8, 15247, doi:10.1038/ncomms15247.

Frohlich, C. (2006), Deep Earthquakes, Cambridge University Press.

Frohlich, C., W. Gan, and R. B. Herrmann (2015), Two Deep Earthquakes in Wyoming, Seismological Research Letters, 86(3), 810–818, doi:10.1785/0220140197.

Green, H. W., and H. Houston (1995), The Mechanics of Deep Earthquakes, Annu. Rev. Earth. Planet. Sci., 23, 169–213.

Hobbs, B. E., A. Ord, and C. Teyssier (1986), Earthquakes in the Ductile Regime, Pure Appl. Geophys., 124(1-2), 309–336.

Houston, H. (2015), 4.13 - Deep Earthquakes, in Treatise on Geophysics (Second Edition), edited by G. Schubert, pp. 329–354, Elsevier, Oxford.

Lorinczi, P., and G. A. Houseman (2009), Lithospheric gravitational instability beneath the Southeast Carpathians, Tectonophysics, 474, 322–336, doi:10.1016/j.tecto.2008.05.024.

Ogawa, M. (1987), Shear instability in a viscoelastic material as the cause of deep focus earthquakes, J. Geophys. Res., 92, 13,801–13,810.

Prieto, G. A., B. Froment, C. Yu, P. Poli, and R. Abercrombie (2017), Earthquake rupture below the brittle-ductile transition in continental lithospheric mantle, Sci. Adv., 3(3), e1602642, doi:10.1126/sciadv.1602642.

Shen, W., M. H. Ritzwoller, and V. Schulte-Pelkum (2013), A 3-D model of the crust and uppermost mantle beneath the Central and Western US by joint inversion of receiver functions and surface wave dispersion, J. Geophys. Res., 118(1), 262–276, doi:10.1029/2012JB009602.

Thielmann, M. (2018), Grain size assisted thermal runaway as a nucleation mechanism for continental mantle earthquakes: Impact of complex rheologies, Tectonophysics, 746, 611–623, doi:10.1016/j.tecto.2017.08.038.

Wang, X., D. Zhao, and J. Li (2016), The 2013 Wyoming upper mantle earthquakes: Tomography and tectonic implications, J. Geophys. Res., 121(9), 6797–6808, doi:10.1002/2016JB013118.

The Sassy Scientist – Incompetency Check

Every week, The Sassy Scientist answers a question on geodynamics, related topics, academic life, the universe or anything in between with a healthy dose of sarcasm. Do you have a question for The Sassy Scientist? Submit your question here or leave a comment below.

After reading up on the many aspects of the earthquake cycle presented over the past weeks, often through fundamental observations and theories, Oleg found himself in a state of self-reflection and asked:


I’m afraid I’ve forgotten all my maths and physics skills during my PhD in geodynamics. Should I be worried?


Dear Oleg,

I would be. How did you get through those years? I gather that you must be using some numerical model, since analogue modelling, seismology and geodesy (yes … even geology) require at least some basic maths and/or physics skills. Thank God just about anyone can use numerical models. You must be working with one of those standard black-box codes that also magically produces statistical measures of how well your models perform. It’s a good thing that the point of doing a PhD is just to blindly produce papers by employing whatever methodology you can operate. Or is it about underpinning observations/theories/concepts through the fundamentals (i.e., maths and physics)? Have you been able to make a decent interpretation of what your model means? Or is that something your supervisor or co-authors have done for you all along? If it’s the former, I wouldn’t worry too much. If it’s the latter, that sounds exceptionally irresponsible of your supervisor and promotor.

It’s probably not as bad as you think right this moment. Everybody (i.e., every starting scientist) has a tendency to underestimate their skills, and you’ve probably retained more from the undergrad courses you slept through than you give yourself credit for. If you’re truly in despair, just take some time to revisit the books you used in your undergrad years: brush up your basic linear algebra and vector analysis. The world’s a tensor; deal with it. To make it worthwhile, you might want to look up isostasy, Euler poles, continuum mechanics and statistics. Grasping these concepts provides you with the fundamentals you need to study geodynamics, and you’ll find that you have applied some, or all, of them at some point in your research. An abundance of researchers nowadays perform all kinds of fancy statistical tricks, employ fancy numerical codes and have a fancy way of writing it all down. The one thing they lack is actual, fundamental understanding. Don’t be a lackey of the (intentionally) ignorant, an acolyte of the small-minded, a minion of the depraved. Raise your voice above the murmur of mediocrity (which sadly is the base level for most publications nowadays). Just dissect your previous work, outline its shortcomings in maths and physics, and scrutinise the textbooks; I’m sure you’ll get there. Remember that no one knows everything. As long as you are honest with yourself about what you do and don’t know or understand, you can have fair discussions about your research and you will learn. In the end, that’s all that research is: learning, and learning what you do not know (yet). Actually, the pinnacle of science is overcoming our ignorance.

Yours truly,

The Sassy Scientist

PS: This post was written after being shaken and stirred by the impossible possibility that is forgetting maths and physics during a PhD in geodynamics.

Is the scientific community ready for open access publishing?

How much we pay, as both scientists and the public, for publishing and accessing papers is a hot topic right across the academic community – and rightly so. Publishing houses, and their fees, are big, big business. To which journal we should submit our work is a decision we face regularly. But what are the Green, Golden and Hybrid roads? How do pre- and post-prints fit into the journey? In this week’s GD blog post, Martina Ulvrova (Marie Skłodowska Curie Research Fellow, ETH Zürich) shares some of the background, considerations and discussions surrounding open access publishing. Which road will you take?

Imagine that you have just uncovered some exciting results and you want to publish them. But where? Which journal should you consider? What are the best peer-reviewed options right now to disseminate your research, not only to your peers but also to a larger audience? How much are you, as a publishing scientist, willing to pay for this, and how much should the community in turn be expected to pay to read your results?

First, let’s take a look at how publishing traditionally works. The standard publishing model could be considered a genius business model. Why ‘genius’? Well, scientists send papers to journals without being rewarded. Forget about any honorarium for your new discovery. In addition, most journals in the arena charge you a publication fee once your paper is accepted. On top of that, members of the scientific community review the submitted manuscripts and assess the novelty of the research, also without any honorarium.

The remaining piece for the publishing houses is to decide to whom they should sell the publications. Here again, the scientific community enters a game that is controlled by the publishers. In today’s scientific and publishing climate, an individual scientist’s career heavily depends on the number of publications that they produce. And in turn, to undertake and publish a study, you need to have access to existing publications across numerous journals. Scientists are thus a perfect market for publications that were produced more or less for free, on a voluntary basis. Universities and research institutions pay subscriptions to journals, and the subscription fees are anything but low. One of the biggest publishing groups, Elsevier, reported a profit of over 1 billion euros in 2017, paying millions of euros in dividends to its shareholders. Something seems wrong here, doesn’t it? Indeed, the scientific community at large is starting to wake up and demand changes. The big question is how to change the existing embedded system, and from where this change should come. A further critical unknown is: are we, as a scientific community, ready for that change?

One of the problematic sides of the whole publishing wheel is that universities pay loads of money to access published papers. This has been controversial for a long time. Indeed, it does not seem right that researchers (and the public) have to buy what they contributed to producing, especially since researcher salaries often come from public money. Research that is the product of public money should instead be open to everyone, since we all pay taxes. Based on such arguments, major funding agencies including the European Commission and the European Research Council (ERC) launched Plan S in 2018. Plan S is an initiative with the aim of making publications open access. This means that all publications that are the fruit of public resources should be accessible to anyone without a paywall. This free and open access path will be obligatory for EU-funded projects starting as early as 2021. That sounds like a great step that is already happening – but what does it mean exactly to publish open access (OA)? And how can you publish OA?

There are two main roads that lead researchers to an OA publication. The first is the so-called Golden Road: the researchers submit their paper to an open access journal. The paper goes through peer review and, once it is accepted, all rights stay with the authors (or their institution). Often, there are Article Processing Charges (APCs) on the Golden Road. The published paper is immediately accessible to everyone under a CC license, and reuse is possible. For the geodynamics community, an interesting OA journal is Solid Earth, which was established in 2010 by the European Geosciences Union and is published by Copernicus Publications. Slowly but surely it is gaining popularity, which in turn indicates that the scientific community needs, and is willing to turn to, OA. The APCs for Solid Earth are around 80 EUR per journal page. Nature Publishing Group started its OA journal, Nature Communications, in 2010, with an APC of €4,290 per manuscript. Similarly, the AAAS, which is behind Science, launched the OA journal Science Advances; in this case, be prepared to pay an APC of USD 4,500, excluding taxes.
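
Per-page and flat-fee APCs are hard to compare at a glance, so here is a quick back-of-envelope comparison in Python. The fee levels are the ones quoted above; the page counts and the EUR/USD rate are assumptions for illustration:

```python
# Comparing the APC models quoted above (all amounts approximate).
# Page counts and the EUR/USD rate are assumptions for illustration.

SOLID_EARTH_PER_PAGE = 80          # EUR per journal page
NATURE_COMMS_FLAT    = 4290        # EUR per manuscript
SCI_ADV_FLAT_EUR     = 4500 * 0.9  # USD 4,500 at an assumed 0.9 EUR/USD

for pages in (8, 15, 25):
    solid_earth = pages * SOLID_EARTH_PER_PAGE
    print(f"{pages:2d} pages: Solid Earth ~{solid_earth} EUR, "
          f"Nature Communications {NATURE_COMMS_FLAT} EUR, "
          f"Science Advances ~{SCI_ADV_FLAT_EUR:.0f} EUR")
# Even a long (~25-page) Solid Earth paper costs ~2,000 EUR, roughly half
# the flat APC of the big publishers' OA journals.
```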

Another path a scientist can take is the so-called Green Road. In this case, a manuscript is submitted to a subscription-based journal. It goes through peer review and, once it is accepted, all rights are transferred to the publisher. (Side note: there may still be publication fees, but they are usually much lower than the APCs of OA publishing.) The publication is then accessible to the audience only via a paywall. This is problematic especially for small institutions that can only afford subscriptions to a limited number of journals, as well as for universities in developing countries that generally operate on lower budgets. However, some publishers, including Elsevier, allow self-archiving of the post-print, meaning that you may publish the post-print on your webpage or blog; some journals also allow its publication in a public repository. By post-print I mean the version of the manuscript that went all the way through peer review (including the changes suggested by reviewers), but without any copy-editing or formatting by the journal. This is the Green Road. In addition, some subscription-based journals, including for example Earth and Planetary Science Letters (EPSL) by Elsevier, offer a paid open access option. These are the so-called hybrid journals. At EPSL, authors can choose to publish OA for USD 3,200, excluding taxes.

Whatever road you choose towards a polished peer-reviewed publication, you can always put a pre-print of your manuscript online, i.e. a version of your paper that precedes any peer review and typesetting. The best way is to use one of the existing repositories. arXiv has served the STEM community since 1991 and boasts a submission rate of more than (an incredible!) 10,000 papers per month. For the geoscience community, there is EarthArXiv, devoted to collecting and hosting papers on Earth- and planetary-related topics; it just celebrated its second birthday in October of this year. It ensures rapid communication of your findings and accelerates the research process, and it accepts both pre- and post-prints.

Why should you care about OA? As I said above, one of the most important aspects of OA publishing is that it is free of charge for readers. This means that you can attract a larger audience: not only is your science read by more people, it may also receive more citations. This increases both your own and your institution’s visibility, and leads to plenty of flow-on effects. It is especially important when you are an early career researcher; you want your research to be read and shared as quickly, easily and widely as possible. It is also very convenient to access published manuscripts and data without any paywall. A big motivation to publish OA might also be that you do not agree with the current business model of the big publishing houses and want to take an active stand for change. If you believe that the strategy of the publishing houses is outdated and that libraries pay too much, you might want to consider an OA road.

Reasons for not publishing OA, from the 2015 Author Insights Survey run by Nature Publishing Group.

Of course, there are still some cons to OA publishing, including the costs for the authors, which can be exorbitant: up to more than 4,000 EUR. Unfortunately, here again, the most disadvantaged institutions are small universities and universities located in countries with less money at hand.

Another big issue is how to guarantee the quality of OA publications. As shown in the 2015 Nature Publishing Group survey on researchers’ attitudes towards OA publishing, a journal’s reputation (specifically its impact factor) is one of the most important factors influencing the choice of where to submit. The most common reason for not publishing in OA journals is concern about the perceived quality of OA publications. However, this is improving, and many OA journals are starting to gain a good reputation. This is something that we, as a community, can largely influence: if we send high-quality papers to OA journals and cite OA publications, the OA journals will become higher rated and more attractive. Moreover, the publishing houses behind the most prestigious journals have finally started to adapt and have created OA journals of their own, e.g. Nature Communications and Science Advances.

OA publishing has been debated for more than a decade, so where do we stand at the moment? According to data published by the European Commission, the percentage of OA publications increased from 31% in 2009 to only 36.2% in 2018 (side note: in Earth and related environmental sciences, around 34% of publications were published OA over this period). The United Kingdom, Switzerland and Croatia are among the countries with the highest proportion of OA publications (slightly above 50%). Meanwhile, 33% of funding agencies worldwide have no OA policy, 35% encourage OA publishing, and 31% require it. These numbers indicate that the system is changing, but it is changing (too) slowly. Current publishing and reading practices are deeply rooted, and more action is needed.

The motivation to change the publishing system and make OA common practice should come from the top, i.e. be directed by funder policy, but also (and more importantly) from the bottom, i.e. from the scientific community. Although the system is complex, the urge to replace the outdated model is there. European agencies have launched the pioneering initiative Plan S to make OA publishing the norm as soon as 2021 (in Switzerland, all scholarly publications funded by public money should be OA by 2024). Although no one knows exactly how we will get there, or which rules we should set, it is worth following the OA road. Let’s explore what works best for the scientific community; that will ultimately result in a sustainable, flourishing and fair publishing environment. Until then, there is always Sci-Hub to get around the existing paywalls if needed. Obviously, a legal road is the preferred one, and hopefully it is only a matter of time before we get on that road. The sooner the better.

This is a huge topic of discussion and comments are welcome below. Further thoughts on Plan S in more detail: how will it evolve, and how is it perceived by the community? Parasite journals? How to assess scientific quality based on something other than the number of publications (e.g. DORA)? Or get in touch if you would like to write an entry for us on the GD Blog!