Is the scientific community ready for open access publishing?


How much we pay, as both scientists and the public, for publishing and accessing papers is a hot topic right across the academic community – and rightly so. Publishing houses, and their fees, are big, big business. Which journal to submit our work to is a decision we face regularly. But what are the Green, Golden or Hybrid roads? How do pre- and post-prints fit into the journey? In this week’s GD blog post, Martina Ulvrova (Marie Skłodowska Curie Research Fellow, ETH Zürich) shares some of the background, considerations, and discussions surrounding open access publishing. Which road will you take?

Imagine that you just uncovered some exciting results and you want to publish them. But where? Which journal should you consider? What are the best peer-reviewed options right now, not only to disseminate your research to your peers, but also to share science with a larger audience? How much are you, as a publishing scientist, willing to pay for this, and how much should the community in turn be expected to pay to read your results?

First, let’s take a look at how publishing traditionally works. The standard publishing model could be considered a genius business model. Why ‘genius’? Well, scientists send papers to journals without being rewarded. Forget about any honorarium for your new discovery. In addition, most journals in the arena charge you a publication fee once your paper is accepted. On the other side, members of the scientific community review the submitted manuscripts and assess the novelty of the research. This, too, is done without any honorarium.

The remaining piece for the publishing houses is to decide to whom they should sell the publications. Here again, the scientific community enters a game that is controlled by the publishers. In today’s scientific and publishing climate, an individual scientist’s career heavily depends on the number of publications that they produce. And in turn, to undertake and publish a study, you need access to existing publications across numerous journals. Scientists are therefore a perfect market for the publications that were produced more-or-less for free and on a voluntary basis. Universities and research institutions pay subscriptions to journals. And the subscription fees are anything but low. One of the biggest publishing groups, Elsevier, reported a profit of over 1 billion euros in 2017, paying millions of euros in dividends to its shareholders. Something seems wrong here, doesn’t it? Indeed, the scientific community at large is starting to wake up and demand changes. The big question is how to change the existing, deeply embedded system, and where should this change come from? In addition to these questions, a critical unknown is: are we, as a scientific community, ready for that change?

One of the problematic sides of the whole publishing wheel is that universities pay loads of money to access published papers. This has been controversial for a long time. Indeed, it does not seem right that researchers (and the public) have to buy what they contributed to producing, especially since researcher salaries often come from public money. Research that is a product of public money should instead be open to everyone, since we all pay taxes. Based on similar arguments, major funding agencies including the European Commission and the European Research Council (ERC) launched Plan S in 2018. Plan S is an initiative that aims to make publications open access. This means that all publications that are the fruit of public resources should be accessible to anyone, without a paywall. This free and open access path will be obligatory for EU funded projects starting as early as 2021. That sounds like a great step that is already happening – but what does it mean exactly to publish open access (OA)? And how can you publish OA?

There are two main roads that researchers can take that lead them to an OA publication. The first is the so-called Golden Road. On this path, the researchers submit their paper to an open access journal. The paper goes through peer review and, once it is accepted, all rights stay with the authors (or their institution). Often, there are Article Processing Charges (APC) on the golden road. A published paper is accessible to everyone immediately, under a CC license, and reuse is possible. For the geodynamics community, an interesting OA journal is Solid Earth, which was established in 2010 by the European Geosciences Union and is published by Copernicus Publications. Slowly but surely it is gaining popularity, which in turn indicates that the scientific community needs, and is willing to turn to, OA. The APC for Solid Earth is around EUR 80 per journal page. Nature Publishing Group started its OA journal, Nature Communications, in 2010, with an APC of EUR 4,290 per manuscript. Similarly, AAAS, the publisher behind Science, launched the OA journal Science Advances. In this case, be prepared to pay an APC of USD 4,500, excluding taxes.

Another path that a scientist can take is called the Green Road. In this case, a manuscript is submitted to a subscription-based journal. It goes through peer review and, once it is accepted, all rights are transferred to the publisher. (Side note: there still might be publication fees, but they are usually much lower than the APCs of OA publishing.) The publication is then accessible to the audience only behind a paywall. This is problematic especially for small institutions that can only afford to pay subscriptions to a limited number of journals. It is also problematic for universities in developing countries that in general operate on lower budgets. However, some publishers, including Elsevier, allow self-archiving of the post-print. By self-archiving I mean that you publish the post-print on your webpage or blog. Some journals also allow the publication of the post-print in a public repository. By post-print I mean the version of the manuscript that went all the way through peer review (including the changes suggested by reviewers), but without any copy-editing or formatting by the journal. This is the Green Road. In addition, some subscription-based journals, including, for example, Earth and Planetary Science Letters (EPSL) by Elsevier, offer a paid open access option. These are so-called hybrid journals. At EPSL, authors can choose to publish OA for USD 3,200, excluding taxes.

Whatever road you choose that leads you to a polished peer-reviewed publication, you can always put a pre-print of your manuscript online, i.e. a version of your paper that precedes any peer review and typesetting. The best way is to use one of the existing repositories that are out there. arXiv has existed for the STEM community since 1991, and boasts a submission rate that has reached more than (an incredible!) 10,000 papers per month. For the geoscience community, we have EarthArXiv, devoted to collecting and hosting papers on Earth and planetary science topics. It just celebrated its second birthday in October of this year. It ensures rapid communication of your findings and accelerates the research process. And it accepts both pre- and post-prints.

Why should you care about OA? As I said above, one of the most important aspects of OA publishing is that it is free of charge for readers. This means that you can attract a larger audience; not only is your science read by more people, it may also potentially receive more citations. This increases both your own and your institution’s visibility, and leads to lots of flow-on effects. This is especially important when you are an early career researcher; you want your research to be read and shared as quickly, easily, and widely as possible. It is also very convenient to access published manuscripts and data without any paywall. A big motivation to publish OA might also be that you do not agree with the current business model of big publishing houses and want to actively push for change. If you believe that the strategy of publishing houses is outdated and that libraries pay too much, you might want to consider an OA road.

Reasons for not publishing OA? From the Author Insights Survey in 2015, run by Nature Publishing Group

Of course, there are still some cons to OA publishing, including that the costs for authors can be exorbitant, reaching more than EUR 4,000. Unfortunately, here again, the most disadvantaged institutions are small universities and universities located in countries with less money at hand.

Another big issue is how to guarantee the quality of OA publications. Indeed, as shown in a 2015 Nature Publishing Group survey on researchers’ attitudes towards OA publishing, a journal’s reputation (specifically its impact factor) is one of the most important factors influencing the choice of where to submit. The most common reason for not publishing in OA journals is that researchers are concerned about perceptions of the quality of OA publications. However, this is improving and many OA journals are starting to gain a good reputation. This is something that we, as a community, can largely influence. If we send high quality papers to OA journals and cite OA publications, the OA journals will become higher rated and more attractive. Moreover, the publishing houses of the most prestigious journals have finally started to adapt and have created OA journals, e.g. Nature Communications and Science Advances.

OA publishing has been debated for more than a decade, so where do we stand at the moment? According to data published by the European Commission, the percentage of OA publications increased from 31% in 2009 to only 36.2% in 2018 (side note: in Earth and related environmental sciences, around 34% of publications were published OA over this period). The United Kingdom, Switzerland and Croatia are among the countries with the highest proportion of OA publications (slightly above 50%). Meanwhile, 33% of funding agencies worldwide have no OA policy, 35% encourage OA publishing, and 31% require OA. These numbers indicate that the system is changing, but it is changing (too) slowly. Current publishing and reading practices are deeply rooted and more action is needed.

The motivation to change the publishing system and make OA common practice should come from the top, i.e. directed by funder policy, but also (and more importantly) from the bottom, i.e. from the scientific community. Although the system is complex, the urge to replace the outdated model is there. European agencies have launched the pioneering initiative Plan S to make OA publishing the norm as soon as 2021 (in Switzerland, all scholarly publications funded by public money should be OA by 2024). Although no one knows how we will get there, and which rules we should set, it is worth following the OA road. Let’s explore what works best for the scientific community; that will ultimately result in a sustainable, flourishing, and fair publishing environment. Until then, there is always SciHub to get around the existing paywall if needed. Obviously, a legal road is the preferred one, and hopefully it is only a matter of time until we get on that road. And the sooner the better.

This is a huge topic of discussion and comments are welcome below. Further thoughts on Plan S in more detail, how it evolves, and how it is perceived by the community? Predatory journals? How to assess scientific quality based on something other than the number of publications (e.g. DORA)? Or get in touch if you would like to write an entry for us on the GD Blog!

 

Dancing on a volcano – the unspoken scientific endeavour


Doing science is not a walk in the park. In fact, it might be closer to dancing on a volcano. Dan Bower, CSH and Ambizione Fellow at the University of Bern, Switzerland, takes full advantage of the creative freedom of a blog post to reiterate that scientific progress is not a straightforward endeavour.

We all learn early in our education about the scientific method—the scientific approach to discern a new truth of nature by establishing a hypothesis that is then rigorously tested. Clearly this approach has been influential in establishing our wealth of knowledge to date, but it typically does not accurately represent the day-to-day reality of being a practitioner of science. This is because the scientific method implies a linear trajectory from proposing a hypothesis to sequential testing of the hypothesis until we naturally arrive at a conclusion that is a new result, hence providing a contribution to the knowledge database of humanity. It suggests we step through each stage of the method and necessarily arrive at a useful result, but unfortunately, this masks the reality of the daily lives of scientists.

In fact, as scientists our daily life often involves scrambling around on the side of a cantankerous volcano a few minutes before sunset; we have some understanding of where we came from and how we ended up here, but working at the edge of human knowledge is a challenging and unforgiving place. We took wrong steps en route—some of which might actually turn out to be right steps in retrospect, but in relation to a completely different topic or problem than the one we are working on. Rather than dancing elegantly through the different steps of the scientific method, we are instead struggling to see the ground below in the ever-darkening light, often dangling a foot into the unknown to see if we can gain some traction. Depending on your personality type and upcoming deadline schedule, this unknown can be the most invigorating or most stressful place to be in the uncharted landscape of modern science.

We glance at an incomplete map of the terrain to see if the discoveries of our scientific forefathers can cast new light on our scientific objective. We began the excursion optimistically with the goal of reaching the volcano’s crater, but after revising our project description and goals several times, we are now content with the view half-way up the mountain. It’s taken us longer than we thought to reach this point—but with an upcoming conference in a few weeks—we must set up camp, collect some data, and glean a new insight that no other soul on the rest of the planet has previously managed—either now or in the previous several centuries of modern science. The thought of that makes us a little nervous, not least because we now realise that the tools we brought with us are not up to the task following several breakages. There are new tools, but they have only just been delivered to base camp and will not be available for the rest of our project—we also need to obtain permission from an ex-collaborator (now turned competitor) to use them. We instead think of creative solutions to deal with this “challenge” (word of the expedition leader, I’d personally use stronger phrasing). We now iterate relentlessly between models and data until we conjure a new discovery. Well, we are not sure if it is strictly a discovery, but no one else seems to report it in any papers (that we read). We now debate if this is because our “discovery” is mind-blowingly obvious.

A helicopter flies overhead and drops us a few supplies for the remainder of our mountain excursion—including a new paper just published last week on the topic of our research. We panic—is this exactly the same as what we are doing, or a little bit different? Do our results agree? For that matter, do we want the results to agree? We again revise the goals of the project to utilise the one extra data point we have acquired to maximise the impact of our work and thereby justify our study as “a useful contribution to the literature” (again, words of the expedition leader). The title of our paper changes for the thousandth time, and even I am no longer sure I understand what the study is about. As a parting gift, the helicopter pilot informs us that we are not on the volcano we thought we were on—apparently that is a few hundred kilometers in a different direction. How did we end up here again? Not to worry, we can tweak a couple of parameters and then apply our insights to the actual volcano we are standing on—assuming it is actually a volcano—has anyone checked? We now push an excursion to the other volcano to future work, which, in reality, means that we hope someone else will do it (but not before we write up our study). In the end-of-year summary, we report complete success of the project to the funding agency, and request follow-up funding a month later.

Have you ever danced on a volcano? Tweet us your story on convoluted science projects @EGU_GD under the hashtag #DancingOnAVolcano



The featured image of this post is provided by Floor de Goede, a Dutch comic artist who penned the graphic novel ‘Dansen op de vulkaan’ (Dancing on the volcano). He has also illustrated many children’s books and draws the semi-autobiographical daily comic ‘Do you know Flo?‘. You can follow Floor de Goede on Instagram at @flodego for daily comics (also in English!).


Writing the Methods Section


An important part of science is to share your results in the form of papers. Perhaps even more important is to make those results understandable and reproducible in the Methods section. This week, Adina E. Pusok, Postdoctoral Researcher at the Department of Earth Sciences, University of Oxford, shares some very helpful tips for writing the Methods in a concise, efficient, and complete way. Writing up the methods should be no trip to fantasy land!

Adina Pusok. Postdoctoral Researcher in the Department of Earth Sciences, University of Oxford, UK.

For my occasional contribution to the Geodynamics blog, I return with (what I think is) another essential topic from The Starter Pack for Early Career Geodynamicists (see end of blog post): how to write the methods section in your thesis, report or publication. Or, using the original title: “Writing up the methods should be no trip to fantasy land”. Don’t get me wrong, I love the fantasy genre, but out of an entire scientific manuscript that pushes the boundaries of knowledge (with additional implications and/or speculations), the methods section should be plain and simple, objective and logically described – “just as it is”.

The motivation for this post came some months ago when I was reviewing two articles within a short time of each other, and I felt that some of my comments were repeating themselves: incomplete methods sections and assumptions left to be inferred by the reader, which ultimately made assessment of the results more difficult. But I also think it is not OK to write harsh reviews back (for these reasons), since, again, there is little formal training for Early Career Scientists (ECS) on how to write scientific papers. Moreover, even when there is such formal training on academic writing, it is often generalized for all scientific disciplines, ignoring some important field-specific elements. For example, a medical trial methods section will look different from an astrophysics methods section, and within Earth Sciences, the methods section for a laboratory experiment on deformation of olivine will contain different things compared to a systematic study of numerical simulations of subduction dynamics.

A common approach by most students (especially first-timers) is to dump everything on paper and then hope it represents a complete collection of methods. However, with the increasing complexity of studies, this collection of methods has neither head nor tail, and is prone to errors. Such pitfalls can make the manuscript cumbersome to read or even call the validity of the research into question. Generally, journals do have guidelines on how the methods should be formatted and how many words they may contain, but not necessarily on what they should contain, because that varies from field to field. I believe there should be a more systematic approach to it. So in this post, I aim to describe some aspects of the Methods section, and then propose a structure that (mostly) fits general geodynamics studies.

1. The scientific Methods section

The Methods section is considered one of the most important parts of any scientific manuscript (Kallet, 2004). A good Methods section allows other scientists to verify results and conclusions, understand whether the design of the experiment is relevant for the scientific question (validity), and to build on the work presented (reproducibility) by assessing alternative methods that might produce differing results.

Thus, the Methods section has one major goal: to allow others to verify the experiment layout and reproduce the results.

It is also the first section to be written in a manuscript because it sets the stage for the results and conclusions presented. So, what exactly do you need to include when writing your Methods section? The title by T.M. Annesley (2010) puts it perfectly into words: “Who, what, when, where, how, and why: The ingredients in the recipe for a successful methods section”.

  • Who performed the experiment?
  • What was done to answer the research question?
  • When and where was the experiment undertaken?
  • How was the experiment done, and how were the results analyzed?
  • Why were specific procedures chosen?

Across sciences, the Methods section should contain detailed information on the research design, participants, equipment, materials, variables, and actions taken by the participants. However, what that detailed information consists of, depends on each field.

2. The Methods section for numerical modeling in Geodynamics

I propose below a structure for the Methods section intended for numerical simulation studies in Geodynamics. I want to mention that this structure is meant as a suggestion, especially for ECS, and can be adapted for every individual and study. Geodynamics studies may have different aspects: a data component (collection, post-processing), a theoretical (mathematical and physical) framework, a numerical (computational) framework and an analog component (laboratory experiments). The majority of studies have one or two of these components, while few will have all of them. In this post, I will focus primarily on studies that use numerical simulations to address a question about the solid Earth, thus having primarily a theoretical and numerical component.

Before I start, I think a great Methods section is like a cake recipe in which your baked cake looks just like the one in the photo. All the ingredients and the baking steps need to be explained precisely and clearly in order to be reproduced. We should aim at writing the Methods with this in mind: if someone were ‘to bake’ (reproduce) my study, could they succeed based on the instructions I provided? There are many ways to write your Methods; my way is to break it into logical sections, going from theoretical elements to numerical ones.

Proposed structure:

  1. Brief outline – A general paragraph describing the study design and the main steps taken to approach the scientific question posed in the Introduction.
  2. Theoretical framework – Any numerical simulation is based on some mathematical and physical concepts, so it’s logical to start here, from the most important to the least important.
    • 2.1 Governing equations – Section describing the conservation of mass, momentum and energy (a minimal example is sketched after this list).
    • 2.2 Constitutive equations – Section describing all the other elements entering the conservation equations above such as: rheology (deformation mechanisms), equation of state, phase transformations, etc. Each of these topics can be explained separately in subsections. For example,
      • 2.2.1 Rheology
        • 2.2.1.1 Viscous deformation
        • 2.2.1.2 Plastic deformation
        • 2.2.1.3 Elastic deformation
      • 2.2.2 Phase transformations
      • 2.2.3 Water migration in the models
    • Figures and tables:
      • Table of parameters – for quick definition of parameters used in equations.
  3. Computational framework – Section explaining how the theory (Section 2) is solved on the computer.
    • 3.1 Numerical methods – code details, discretization methods, programming language, solvers, software libraries used, etc. If you are using a community code, these details should already be provided in previous publications, which you can cite.
    • 3.2 Model setup – Section describing the layout of the current experiment.
      • 3.2.1 Details: model geometry, resolution (numerical and physical), parameters, initial and boundary conditions, details on rheological parameters (constitutive equations), etc.
      • 3.2.2 Motivate the choice of parameters – why is it relevant for addressing the scientific questions?
    • Figures and tables:
      • Table of parameter values, rheological flow laws used.
      • Table with all model details (to reduce text).
      • Figure illustrating the model geometry, initial and boundary conditions.
    • *NOTE: If you are testing/implementing a new feature in the code, you should allocate a new section for it. Also, spend more effort explaining it in detail. Do not expect many people to know about it.
  4. Study design – Section describing the layout of the study.
    • 4.1 What is being tested/varied? How many simulations were performed (model and parameter space)? Why perform those simulations/vary those parameters?
    • 4.2 Code and Data availability – code availability, input files or other data necessary to reproduce the simulation results (e.g., installation guides). Many journals today only accept for publication studies in which data and code availability is declared in a standard form (e.g., AGU journals). Some other questions to answer here: where were the simulations performed? How many cores? Can the data be reproduced on a laptop/desktop, or is access to a cluster needed?
    • Figures and tables:
      • Simulations table – indicating all simulations that were run and which parameters were varied. When the number of simulations is high (e.g., Monte Carlo sampling), you should still indicate which parameters were varied and the total number of simulations.
  5. Analysis of numerical data – details on visualization/post-processing techniques, and a description of how the data will be presented in the results section. This is a step that is generally ignored, but be open about it: “visualization was performed in paraview/matlab, and post-processing scripts were developed in python/matlab/unicorn language by the author”. If your post-processing methods are more complex, give more details on that too (e.g., statistical methods used for data analysis).
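
To make Section 2.1 concrete, here is a minimal sketch of the governing equations that many incompressible, Boussinesq-type mantle convection studies would list; the exact formulation, symbols and non-dimensionalisation are choices of each individual study, so treat this purely as an illustration:

\nabla \cdot \mathbf{u} = 0

-\nabla p + \nabla \cdot \left[ \eta \left( \nabla \mathbf{u} + \nabla \mathbf{u}^{\mathsf{T}} \right) \right] + \rho \mathbf{g} = 0, \qquad \rho = \rho_0 \left[ 1 - \alpha \left( T - T_0 \right) \right]

\frac{\partial T}{\partial t} + \mathbf{u} \cdot \nabla T = \kappa \nabla^{2} T + \frac{H}{\rho_0 c_p}

with velocity \mathbf{u}, pressure p, viscosity \eta, temperature T, reference density \rho_0, thermal expansivity \alpha, thermal diffusivity \kappa, internal heating rate H and gravitational acceleration \mathbf{g}. A compressible or multi-phase study would list a correspondingly extended set of equations, and every symbol used here should reappear in the table of parameters.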

 

Before you think you’ve finished the Methods section, go over your assumptions, and make sure you’ve explained them clearly! Geodynamics is a field in which you take a complex system (Earth or another planetary body) and simplify it to a level at which we can extract some understanding about it. In doing so, we rely on a physically consistent set of assumptions. It is important to bear in mind that this set of assumptions may not always be obvious to the audience. If your reviewers have questions about your methods and interpretation of results (that you think are obvious), it means that something was not clearly explained. Be pre-emptive and state your assumptions. As long as they are explicit and consistent, the reviewers and readers will find fewer flaws in your study. Why that choice of parameters? Why did you do it that way?

3. A few other things…

It’s good practice to write a complete Methods section for every manuscript, such as one following the structure above. However, some journals will ask for a short version (1-2 paragraphs) to be included in the manuscript and for the complete Methods section to be provided as a separate resource (e.g., Supplementary Data, Supporting Information, a repository) such that it is made available to the community. For other journals, the difficulty will be to find a balance between completeness (sufficient detail to allow replication and validity verification) and conciseness (follow the journal guidelines regarding word count limits).

To master the writing of the Methods section, it is important to look at other examples with similar scope and aims (especially the ones you understood clearly and completely). It is also a good idea to keep notes and actually start writing up your equations, model setup, and parameters as the study progresses (such as the mandatory lab notebook).

Finally, some tips on the style of writing of the Methods section:

  • be clear, direct, and precise.
  • be complete, yet concise, to make life easy for the reader.
  • write in the past tense.
  • but use the present tense to describe how the data is presented in the paper.
  • may use both active/passive voice.
  • may use jargon more liberally.
  • cite references for commonly used methods.
  • have a structure and split into smaller sections according to topic.
  • material in each section should be organized by topic from most to least important.
  • use figures, tables and flow diagrams where possible to simplify the explanation of methods.

The Starter Pack for Early Career Geodynamicists

In the interest of not letting the dust accumulate, the growing collection of useful Geodynamics ECS posts (from/for the community):

References:

Kallet R.H. (2004) How to write the methods section of a research paper, Respir Care. 49(10):1229-32. https://www.ncbi.nlm.nih.gov/pubmed/15447808

Annesley, T.M. (2010) Who, what, when, where, how, and why: the ingredients in the recipe for a successful Methods section, Clin Chem. 56(6):897-901, doi: 10.1373/clinchem.2010.146589, https://www.ncbi.nlm.nih.gov/pubmed/20378765

On the resolution of seismic tomography models and the connection to geodynamic modelling (Is blue/red the new cold/hot?) (How many pixels in an Earth??)

What do the blobs mean?

Seismologists work hard to provide the best snapshots of the Earth’s mantle. Yet tomographic models based on different approaches or using different data sets sometimes obtain quite different details. It is hard for a non-specialist to know whether small-scale anomalies can be trusted, and why. This week Maria Koroni and Daniel Bowden, both postdocs in the Seismology and Wave Physics group at ETH Zürich, tell us how these beautiful images of the Earth are obtained in practice.

Daniel Bowden and Maria Koroni enjoying coffee in Zürich

Seismology is a science that aims at providing tomographic images of the Earth’s interior, similar to X-ray images of the human body. These images can be used as snapshots of the current state of flow patterns inside the mantle. The main way we communicate, from tomographer to geodynamicist, is through the publication of some tomographic image. We seismologists, however, make countless choices, approximations and assumptions, are limited by poor data coverage, and ultimately never fit our data perfectly. These things are often overlooked, or taken for granted and poorly communicated. Inevitably, this undermines the rigour and usefulness of subsequent interpretations in terms of heat or material properties. This post will give an overview of what can worry a seismologist/tomographer. Our goal is not to teach seismic tomography, but to plant a seed that will make geodynamicists push seismologists for better accuracy, robustness, and communicated uncertainty!

A typical day in a seismologist’s life starts with downloading some data for a specific application. Then we cry while looking at waveforms that make no sense (compared to the clean and physically meaningful synthetics calculated the day before). After a sip, or two, or two thousand sips of freshly brewed coffee, and some pre-processing steps to clean up the mess that is real data, the seismologist sets up a measurement of the misfit between synthetic and observed waveforms. Do we try to fit the entire seismogram, just its travel time, or its amplitude? The choice we make in defining this misfit can non-linearly affect our outcome, and there’s no clear way to quantify that uncertainty.
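
To make that choice concrete (purely as an illustration, not the measure used in any particular study), two common options are a least-squares waveform misfit and a cross-correlation traveltime misfit,

\chi_{\mathrm{wf}}(\mathbf{m}) = \frac{1}{2} \sum_{i} \int \left[ s_i(t; \mathbf{m}) - d_i(t) \right]^{2} \mathrm{d}t \qquad \text{or} \qquad \chi_{\mathrm{tt}}(\mathbf{m}) = \frac{1}{2} \sum_{i} \left[ \Delta T_i(\mathbf{m}) \right]^{2},

where s_i(t; \mathbf{m}) are the synthetics for Earth model \mathbf{m}, d_i(t) the observed waveforms, and \Delta T_i the time shift that maximises the cross-correlation between the two. The same data set can favour noticeably different models depending on which of these (or the many other) definitions we pick.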

After obtaining the misfit measurements, the seismologist starts thinking about best inversion practices in order to derive some model parameters. There are two more factors to consider now: how to mathematically find a solution that fits our data, and how to choose a single (inevitably subjective) solution from the many solutions of the problem… The number of (quasi-)arbitrary choices can increase dramatically in the course of the poor seismologist’s day!
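
One common way (though certainly not the only one) to make both of those choices explicit is a regularised least-squares formulation,

\mathbf{m}^{\ast} = \underset{\mathbf{m}}{\arg\min} \; \left\| \mathbf{d} - \mathbf{G}\,\mathbf{m} \right\|^{2} + \lambda^{2} \left\| \mathbf{L}\,\mathbf{m} \right\|^{2},

where \mathbf{d} is the data vector, \mathbf{G} the (linearised) forward operator, \mathbf{L} a damping or smoothing operator and \lambda the regularisation weight. Different, equally defensible choices of \mathbf{L} and \lambda yield visibly different models from the same data, and that subjectivity rarely makes it into the published figure.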

The goal is to image seismic anomalies; to present a velocity model that is somehow different from the assumed background. After that, the seismologist can go home, relax and write a paper about what the model shows in geological terms. Or… more questions arise and doubts come flooding in. Are the choices I made sensible? Should I make a calculation of the errors associated with my model? Thermodynamics gives us the basic equations to translate seismic into thermal anomalies in the Earth, but how can we improve the estimated velocity model for a more realistic interpretation?
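
Schematically, that translation is often written as a linearised sensitivity relation (the symbols here are only illustrative),

\frac{\delta V_S}{V_S} \approx \frac{\partial \ln V_S}{\partial T}\, \delta T + \frac{\partial \ln V_S}{\partial X}\, \delta X,

where \delta T is a temperature anomaly, \delta X a compositional parameter and the partial derivatives come from mineral physics. Those derivatives carry their own uncertainties, which stack on top of the tomographic ones in any geodynamic interpretation.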


Figure 1: A tomographic velocity model, offshore southern California. What do the blobs mean? This figure is modified from the full paper at https://doi.org/10.1002/2016JB012919

Figure 1 is one such example of a velocity model, constructed through seismic tomography (specifically from ambient-noise surface waves). The paper reviews the tectonic history of the crust and upper mantle in this offshore region. We are proud of this model, and sincerely hope it can be of use to those studying tectonics or dynamics. We are also painfully aware of the assumptions that we had to make, however. This picture could look drastically different if we had used a different amount of regularization (smoothing), had made different prior assumptions about where layers may be, had been more or less restrictive in cleaning our raw data observations, or made any number of other changes. We were careful in all these regards, and ran test after test over the course of several months to ensure the process was up to high standards, but for the most part… you just have to take our word for it.

There are a number of features we interpret here: thinning of the crust, upwelling asthenosphere, the formation of volcanic seamounts, etc. But it wouldn’t shock me if some other study came out in the coming years that told an entirely different story; indeed, it is part of our process as scientists to continue to challenge and test hypotheses. But what if this model is used as an input to something else as yet unconstrained? In this model, could the Lithosphere-Asthenosphere Boundary (LAB) shown here be 10 km higher or deeper, and why does it disappear at 200 km along the profile? Couldn’t that impact geodynamicists’ work dramatically? Our field is a collaborative effort, but if we as seismologists can’t properly quantify the uncertainties in our pretty, colourful models, what kind of effect might we be having on the field of geodynamics?

Another example comes from global scale models. Taking a look at figures 6 and 7 in Meier et al. 2009, ”Global variations of temperature and water content in the mantle transition zone from higher mode surface waves” (DOI: 10.1016/j.epsl.2009.03.004), you can observe global discontinuity models and you are invited to notice their differences. Some major features keep appearing in all of them, which is encouraging since it shows that we may indeed be looking at some real properties of the mantle. However, even similar methodologies have often not converged to the same tomographic images. The sources of discrepancies are the usual plagues of seismic tomography, some of which are mentioned above.


Figure 2: Global models of the 410 km discontinuity derived after 5 iterations using traveltime data. We verified that the method retrieves target models almost perfectly. Data can be well modelled in terms of discontinuity structure; but how easily can they be interpreted in terms of thermal and/or compositional variations?

In an effort to improve the imaging of mantle discontinuities, especially those at 410 and 660 km depths which are highly relevant to geodynamics (I’ve been told…), we have put some effort into building up a different approach. Usually, traveltime tomography and one-step interpretation of body wave traveltimes have been the default for producing images of the mantle transition zone. We proposed an iterative optimisation of a pre-existing model that includes flat discontinuities, using traveltimes in a full-waveform inversion scheme (see Figure 2). The goal was to see whether we can retrieve the topography of the discontinuities using the new approach. This method seems to perform very well and offers the potential for higher resolution imaging. Are my models capable of resolving mineralogical transitions and thermal variations at the depths of 410 and 660 km?

The most desired outcome would be not only a model that represents Earth parameters realistically but also one that provides error bars, which essentially quantify uncertainties. Providing error bars, however, requires extra computational work, and, as pixel-obsessed seismologists, we would be curious to know the extent to which these uncertainties are useful to a numerical modeller! Our main question, then, remains: how can we build an interdisciplinary approach that can justify the large amounts of burnt computational power?

As (computational) seismologists we pose questions for our regional or global models: are velocity anomalies, intuitively coloured as blue and red blobs and representative of heat and mass transfer in the Earth, good enough, or is it essential that we determine their shapes and sizes in greater detail? Determining a range of values for the derived seismic parameters (instead of a single estimate) could allow geodynamicists to take into account different scenarios of complex thermal and compositional patterns. We hope that this short article gave some insight into the questions a seismologist faces each time they derive a tomographic model. The resolution of seismic models is always a point of vigorous discussion, but it could also be a great platform for interaction between seismologists and geodynamicists, so let’s do it!

For an overview of tomographic methodologies the reader is referred to Q. Liu & Y. J. Gu, Seismic imaging: From classical to adjoint tomography, 2012, Tectonophysics. https://doi.org/10.1016/j.tecto.2012.07.006