Geodynamics

Wit & Wisdom

On the resolution of seismic tomography models and the connection to geodynamic modelling (Is blue/red the new cold/hot?) (How many pixels in an Earth??)

What do the blobs mean?

Seismologists work hard to provide the best snapshots of the Earth’s mantle. Yet tomographic models based on different approaches or using different data sets sometimes show quite different details. It is hard for a non-specialist to know whether small-scale anomalies can be trusted, and why. This week Maria Koroni and Daniel Bowden, both postdocs in the Seismology and Wave Physics group at ETH Zürich, tell us how these beautiful images of the Earth are obtained in practice.

Daniel Bowden and Maria Koroni enjoying coffee in Zürich

Seismology is a science that aims to provide tomographic images of the Earth’s interior, similar to X-ray images of the human body. These images can be used as snapshots of the current state of flow patterns inside the mantle. The main way we communicate, from tomographer to geodynamicist, is through the publication of some tomographic image. We seismologists, however, make countless choices, approximations and assumptions; we are limited by poor data coverage, and we ultimately never fit our data perfectly. These issues are often overlooked, taken for granted, or poorly communicated. Inevitably, this undermines the rigour and usefulness of subsequent interpretations in terms of heat or material properties. This post will give an overview of what can worry a seismologist/tomographer. Our goal is not to teach seismic tomography, but to plant a seed that will make geodynamicists push seismologists for better accuracy, robustness, and communicated uncertainty!

A typical day in a seismologist’s life starts with downloading some data for a specific application. Then we cry while looking at waveforms that make no sense (compared to the clean and physically meaningful synthetics calculated the day before). After a sip, or two, or two thousand sips of freshly brewed coffee, and some pre-processing steps to clean up the mess that is real data, the seismologist sets up a measurement of the misfit between synthetic and observed waveforms. Do we try to fit the entire seismogram, just its travel time, or its amplitude? The choice we make in defining this misfit can non-linearly affect our outcome, and there is no clear way to quantify that uncertainty.
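To make the misfit choice concrete, here is a minimal sketch (in Python, using hypothetical toy waveforms, not any real tomography code) of two common options: a full-waveform L2 misfit, which penalizes any difference between traces, and a cross-correlation traveltime measurement, which only cares about how much the synthetic must be shifted to align with the data.

```python
import numpy as np

def l2_waveform_misfit(obs, syn):
    """Full-waveform L2 misfit: sensitive to both amplitude and phase differences."""
    return 0.5 * np.sum((obs - syn) ** 2)

def cc_traveltime_shift(obs, syn, dt):
    """Traveltime misfit via cross-correlation: the lag (in seconds)
    that best aligns the synthetic with the observation."""
    cc = np.correlate(obs, syn, mode="full")
    lag = np.argmax(cc) - (len(syn) - 1)
    return lag * dt

# Toy example: the "observed" pulse is the synthetic delayed by 0.5 s.
dt = 0.01
t = np.arange(0, 10, dt)
syn = np.exp(-((t - 5.0) ** 2) / 0.1)
obs = np.exp(-((t - 5.5) ** 2) / 0.1)

print(cc_traveltime_shift(obs, syn, dt))  # recovers the 0.5 s delay
print(l2_waveform_misfit(obs, syn))       # nonzero, even for a pure time shift
```

Note that a pure time shift yields a clean traveltime measurement but also a large L2 misfit: the two measurements answer different questions, and an inversion driven by one can converge to a different model than one driven by the other.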

After obtaining the misfit measurements, the seismologist starts thinking about best inversion practices in order to derive some model parameters. There are two more factors to consider now: how to mathematically find a solution that fits our data, and how to select one (necessarily subjective) solution from the many that fit the problem equally well… The number of (quasi-)arbitrary choices can increase dramatically in the course of the poor seismologist’s day!
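The role of those choices can be sketched with a toy linear inverse problem. The matrix, data, and damping values below are entirely made up; the point is only that with fewer data than model parameters, a regularization knob (here Tikhonov damping, one common choice among many) trades data fit against model simplicity, and every setting of that knob produces a different "image".

```python
import numpy as np

# Toy linear tomography: d = G m, with fewer data than model cells,
# so the problem is underdetermined and needs regularization.
rng = np.random.default_rng(0)
n_data, n_model = 20, 50
G = rng.standard_normal((n_data, n_model))           # hypothetical sensitivity kernels
m_true = np.zeros(n_model)
m_true[20:25] = 1.0                                  # a single "blob"
d = G @ m_true + 0.01 * rng.standard_normal(n_data)  # noisy traveltime data

def damped_least_squares(G, d, eps):
    """Tikhonov (damped) solution: m = (G^T G + eps^2 I)^-1 G^T d."""
    n = G.shape[1]
    return np.linalg.solve(G.T @ G + eps**2 * np.eye(n), G.T @ d)

# Each damping value gives a different trade-off between fitting the
# data and keeping the model small -- and a different picture.
for eps in (0.01, 1.0, 100.0):
    m = damped_least_squares(G, d, eps)
    print(eps, np.linalg.norm(G @ m - d), np.linalg.norm(m))
```

As the damping increases, the data misfit grows while the model norm shrinks; a published tomographic image corresponds to one point on that trade-off curve, chosen by the tomographer.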

The goal is to image seismic anomalies; to present a velocity model that is somehow different from the assumed background. After that, the seismologist can go home, relax and write a paper about what the model shows in geological terms. Or… more questions arise and doubts come flooding in. Are the choices I made sensible? Should I calculate the errors associated with my model? Thermodynamics gives us the basic equations to translate seismic anomalies into thermal anomalies in the Earth, but how can we improve the estimated velocity model for a more realistic interpretation?
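As a caricature of that last translation step, a seismic-to-thermal conversion is, at first order, a division by a sensitivity coefficient. The value below is purely illustrative (real sensitivities vary with depth, composition, and anelasticity), but it shows how directly the interpreted temperature inherits any error in the velocity model.

```python
# Hypothetical first-order conversion: a fractional S-velocity anomaly
# dlnVs maps to a temperature anomaly via a sensitivity dlnVs/dT.
# The value here (-1e-4 per K, i.e. about -1% per 100 K) is purely
# illustrative, and in reality is neither constant nor well known.
dlnVs_dT = -1.0e-4  # 1/K, assumed constant for this sketch

def velocity_to_temperature_anomaly(dlnVs):
    """Translate a fractional velocity anomaly into a temperature anomaly (K)."""
    return dlnVs / dlnVs_dT

print(velocity_to_temperature_anomaly(-0.01))  # a -1% anomaly -> +100 K
```

With this linear caricature, a 10% error in the imaged velocity anomaly becomes a 10% error in the inferred temperature anomaly, before even considering the uncertainty in the sensitivity itself.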


Figure 1: A tomographic velocity model, offshore southern California. What do the blobs mean? This figure is modified from the full paper at https://doi.org/10.1002/2016JB012919

Figure 1 is one such example of a velocity model, constructed through seismic tomography (specifically from ambient-noise surface waves). The paper reviews the tectonic history of the crust and upper mantle in this offshore region. We are proud of this model, and sincerely hope it can be of use to those studying tectonics or dynamics. We are also painfully aware of the assumptions that we had to make, however. This picture could look drastically different if we had used a different amount of regularization (smoothing), had made different prior assumptions about where layers may be, had been more or less restrictive in cleaning our raw data observations, or made any number of other changes. We were careful in all these regards, and ran test after test over the course of several months to ensure the process was up to high standards, but for the most part… you just have to take our word for it.

There are a number of features we interpret here: thinning of the crust, upwelling asthenosphere, the formation of volcanic seamounts, etc. But it wouldn’t shock me if some other study came out in the coming years that told an entirely different story; indeed, continuing to challenge and test hypotheses is part of our process as scientists. But what if this model is used as an input to something else as-of-yet unconstrained? In this model, could the lithosphere-asthenosphere boundary (LAB) shown here be 10 km shallower or deeper, and why does it disappear at 200 km along the profile? Couldn’t that impact geodynamicists’ work dramatically? Our field is a collaborative effort, but if we as seismologists can’t properly quantify the uncertainties in our pretty, colourful models, what kind of effect might we be having on the field of geodynamics?

Another example comes from global-scale models. Taking a look at figures 6 and 7 in Meier et al. 2009, ”Global variations of temperature and water content in the mantle transition zone from higher mode surface waves” (DOI:10.1016/j.epsl.2009.03.004), you can observe global discontinuity models, and you are invited to notice their differences. Some major features keep appearing in all of them, which is encouraging, since it shows that we may indeed be looking at some real properties of the mantle. However, even similar methodologies have often not converged to the same tomographic images. The sources of discrepancy are the usual plagues of seismic tomography, some of which were mentioned above.


Figure 2: Global models of the 410 km discontinuity derived after 5 iterations using traveltime data. We verified that the method retrieves target models almost perfectly. Data can be well modelled in terms of discontinuity structure; but how easily can they be interpreted in terms of thermal and/or compositional variations?

In an effort to improve imaging of mantle discontinuities, especially those at 410 and 660 km depths, which are highly relevant to geodynamics (I’ve been told…), we have put some effort into building up a different approach. Usually, traveltime tomography and one-step interpretation of body-wave traveltimes have been the default for producing images of the mantle transition zone. We proposed an iterative optimisation of a pre-existing model that includes flat discontinuities, using traveltimes in a full-waveform inversion scheme (see figure 2). The goal was to see whether we can recover the topography of the discontinuities with the new approach. This method seems to perform very well and offers the potential for higher-resolution imaging. Are my models capable of resolving mineralogical transitions and thermal variations at the depths of the 410 and 660 km discontinuities?

The most desired outcome would be not only a model that represents Earth parameters realistically, but also one that provides error bars, which essentially quantify uncertainties. Providing error bars, however, requires extra computational work, and like every pixel-obsessed seismologist, we are curious to know the extent to which uncertainties are useful to a numerical modeller! Our main question, then, remains: how can we build an interdisciplinary approach that can justify large amounts of burnt computational power?
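For a linearized problem with Gaussian errors, those error bars have a standard closed form: the posterior model covariance. The sketch below uses made-up kernels and noise levels, but it shows that the extra work is essentially one more matrix inverse, and that its diagonal is exactly the per-parameter uncertainty a modeller could propagate.

```python
import numpy as np

# For a linear(ized) problem with Gaussian noise, the posterior model
# covariance provides the "error bars":
#   C_m = (G^T C_d^-1 G + C_p^-1)^-1
rng = np.random.default_rng(1)
n_data, n_model = 30, 10
G = rng.standard_normal((n_data, n_model))  # hypothetical sensitivity kernels
sigma_d, sigma_p = 0.1, 1.0                 # assumed data and prior std-devs

Cd_inv = np.eye(n_data) / sigma_d**2        # data covariance (inverse)
Cp_inv = np.eye(n_model) / sigma_p**2       # prior covariance (inverse)
Cm = np.linalg.inv(G.T @ Cd_inv @ G + Cp_inv)

# One-sigma error bar on each model parameter:
error_bars = np.sqrt(np.diag(Cm))
print(error_bars)
```

For realistic model sizes the explicit inverse is of course infeasible, which is precisely why uncertainty quantification "burns computational power"; this toy merely shows what quantity that effort is buying.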

As (computational) seismologists we pose questions for our regional or global models: are velocity anomalies, intuitively coloured as blue and red blobs and taken to represent heat and mass transfer in the Earth, good enough, or is it essential that we determine their shapes and sizes in greater detail? Determining a range of values for the derived seismic parameters (instead of a single estimate) could allow geodynamicists to take into account different scenarios of complex thermal and compositional patterns. We hope that this short article gave some insight into the questions a seismologist faces each time they derive a tomographic model. The resolution of seismic models is always a point of vigorous discussion, but it could also be a great platform for interaction between seismologists and geodynamicists, so let’s do it!

For an overview of tomographic methodologies the reader is referred to Q. Liu & Y. J. Gu, Seismic imaging: From classical to adjoint tomography, 2012, Tectonophysics. https://doi.org/10.1016/j.tecto.2012.07.006

Demystifying the Peer-Review Process


Adina Pusok

An important and inevitable aspect of being in academia is receiving a request to peer-review a paper. And much like the papers we write and submit, retaining structure and clarity for the review itself is important. This week Adina E. Pusok, Postdoctoral Researcher at Scripps Institution of Oceanography, UCSD, and our outgoing GD ECR representative, shares some detailed and helpful tips for writing a concise, efficient, and informative review. Do check out her very helpful Peer-review checklist PDF for download!

 

It is somewhat surprising that the peer-review process, a fundamental part of science (which prides itself on technical and objective methods), is usually left up to the individual reviewing scientist. Everyone agrees that there is little formal training in peer-reviewing, and it often takes years until scientists become thorough and efficient reviewers (Zimmerman et al., 2011). Personally, I prefer to invest some of my time learning from other people’s experiences and best practices. For example, before my first review, I spent approximately two weeks researching material on how to deliver good reviews. But what are good reviews? And, most importantly, how does one write good reviews efficiently? In this blog post, I will attempt to synthesize some of the material I’ve come across, and share my personal guidelines that help me get through the peer-review process.

1. What is the peer-review process?

The peer-review process is in some ways like a legal trial. A judge (the editor) will take an informed and educated decision about the case (to publish/not to publish the manuscript in current form) based on the recommendations and arguments brought forward by lawyers (reviewers).

In reality, the journey of a manuscript (McPeek et al., 2009) through the peer-review process is as follows:

  1. The manuscript is submitted to a scientific journal.
  2. An editor reads the abstract/paper and decides whether it is suitable for potential publication at the journal.
  3. If approved, it is assigned to an associate editor who will handle the actual review process.
  4. The associate editor then compiles a list of potential reviewers, often partially based on recommendations from authors.
  5. Those reviewers are asked whether they would be willing to review the paper in a timely fashion.
  6. The reviewers then read the paper, consider the methods, data, analyses, and arguments, and write reviews containing their opinions about the paper, and whether the paper should be published in that journal.
  7. The associate editor reads the reviews, usually two or more, and may write a third review, and makes a recommendation to the editor.
  8. The editor then writes the authors a letter about the disposition of the paper.
  9. Depending on the outcome, the manuscript will have successfully exited this journey (accepted for publication), or will have to restart the process again (by resubmitting a modified version to the same or a new journal).

 

Basically, each manuscript submission depends on the work of volunteers (editors, reviewers) (McPeek et al., 2009). Indeed, peer review, which is a cornerstone of advancing science, is primarily a volunteer exercise (also referred to as “community work”). But this process is such a valuable mechanism for improving the quality and accuracy of scientific papers that some people think the system would collapse without it, as there would be little control over what gets published.

What’s the purpose of it?

The goals of peer-review are clear: to ensure the accuracy and improve the quality of published literature through constructive criticism. Hames (2008) points out that every peer-review process should aim to:

  • Prevent the publication of bad work.
  • Verify that the research was conducted correctly, and there are no flaws in the design or methodology.
  • Ensure that the work is reported correctly and unambiguously, with acknowledgement to the existing literature.
  • Ensure that the results have been interpreted correctly and all possible interpretations were considered.
  • Ensure that the results are neither too preliminary nor too speculative.
  • Provide editors with evidence to make judgments as to whether articles meet the criteria for their particular publications.
  • Generally improve the quality and readability of a publication.
2. Who gets to do it?

The editor decides who should complete the review based on recommendations from the manuscript authors (nowadays, it’s a submission requirement for most journals), relevant literature cited in manuscript, or their own professional networks. The reviewers are generally considered experts in a given field (extensive peer-review and/or publishing experience), qualified (typically awarded with a Ph.D.) and able to perform reasonably impartial reviews. For example, because I am an early career scientist, I have been asked to perform reviews mostly on topics relating to my Ph.D. work (both the methods and the science).

3. What do you get out of it?

While the peer-review process seems more of an obligation for the greater good of science and society, it has its own perks (albeit not many), and one can benefit from them:

  • Whichever way you look at it, the peer-review process will improve your scientific work. On one hand, authors receive valuable feedback from experts in their community. On the other hand, reviewers get to reflect on what constitutes high-quality science and incorporate lessons learned from reviewing into their own work.
  • Following the above point, reviewing does make you a better writer. You learn many lessons from reading manuscripts ranging from excellent to bad, and in particular, you learn how to write and how not to. It is very frustrating when you struggle with someone else’s unclear explanations or weak arguments. In my experience, that definitely makes you promise yourself to avoid those errors.
  • Reviewing trains your critical thinking, impartial judgment, and diplomatic skills. A review is not useful if it is not civil, or if it contains personal or destructive criticism. In general, you learn to argue your points clearly and to be diplomatic about the strengths and weaknesses of a paper.
  • You get to see the latest work before it is even published. Do make sure, though, that you respect the integrity of the review process and do not communicate any aspect of the paper to other people. In any case, this helps you stay on top of your field or expand/learn new science.
  • It does look good on your CV! Everyone agrees that community service (such as reviewing) is a positive for those aspiring for an academic career.
  • Most editors are senior scientists, and by entering the reviewers’ network, you become known. Better yet, you become known as an expert in a certain field. Fortunately (or unfortunately), since academia relies heavily on prestige and reputation, that will pay off in the longer term.
  • Some journals will provide credit for your review, by acknowledging all reviewers once a year, or by awarding exceptional community service at meetings. While this is great for some, many scientists feel it is insufficient for the amount of work involved in reviews. Therefore, in recent years, it has become possible to keep a general record of your reviews using ORCID and/or Publons (note that only the journal and the year of each review are made public). This allows scientists to have something of a review index (complementing the usual publication index – e.g., Google Scholar).
4. How to undertake a peer-review?

Before I even started reviewing, I was somewhat familiar with what reviews looked like – I had already received them for my own submitted research papers. One of the first things I noticed is that each of the reviews had a different style: annotated PDFs or text files with line numbers, brief and not so useful/detailed reviews, short or long reviews, etc. Since scientists receive little peer-review training, they are also likely to develop their own review style with time and experience. In principle, the style should not matter as long as the review is thorough, clear, and constructive.

There are many ways to complete a review, but just like writing an article, if you have a plan and structure, the entire process becomes easier and even more enjoyable. Moreover, establishing some good practices will ensure a robust review done in a timely manner. Plus, if you are like me (writing is not my favourite activity), you want to find ways to get the job done, as fast as possible and return to more exciting tasks. I find that checklists come in useful, whether it is about academic writing, presentations or reviewing.

Therefore, what I will attempt in this blog is to create a Peer-Review Checklist that anyone can download as a PDF to help them navigate the review process. It is a checklist I initially created for my own use, and I hope these tips are in turn relevant for first-time/early reviewers who are still in search of their style. The checklist may also be useful to more experienced reviewers as a refresher. I would like to note that this is a suggested checklist and workflow, and depending on the journal or field, some elements may be different or missing. People should adapt the checklist to suit their needs, personal style, and the journal’s guidelines. I am also happy to receive suggestions (comments below/email), in order to improve the checklist over time.

Before I present the checklist in more detail, I want to highlight some resources that can help anyone improve their reviews:

  • Talk to colleagues, advisors, and friends about the reviewing process.
  • Pay attention to the reviews of your own papers. I actually modeled mine after one of the most constructive reviews I received.
  • Check journal guidelines. Many journals have extensive and good advice available online.
  • Published material; the most useful that I found were Nicholas and Gordon (2011), Stiller-Reeve et al. (2018a,b), and McPeek et al. (2009).
Peer-Review Workflow and Checklist

This checklist has been compiled from the advice of various articles and guides, and personal preferences. The aim is to give early reviewers a quick workflow of questions and tasks (that you can mark as completed) for any manuscript review. By following all the points, anyone can produce a constructive and thorough review in a timely manner.

Step 1: Pre-Read – Received an invitation to review

✓ Read abstract.

✓ Appropriate expertise.
Does my area of expertise and experience qualify me to critically evaluate the manuscript? Sometimes the manuscript will fit exactly with your expertise, whereas other times it will only brush your field. In one instance, I accepted a review that implemented a technical novelty in a method I was familiar with. I decided it was still largely within my expertise, but I took the opportunity to learn something new. I made sure I went over the background studies until I was comfortable asking questions and clarifying points. If you feel less confident, and your expertise allows you to comment meaningfully only on key sections of the paper, you can offer to review these areas and let the editor know you cannot comment on other aspects outside your expertise.

✓ Conflict of interest.
Can I provide a fair and unbiased review of this work? Am I able to evaluate the manuscript with an open mind, without being either negatively/positively predisposed? Check the journal’s guidelines for more specific guidance on avoiding conflicts of interest.

✓ Time and deadline.
Do I have time to write a complete review? Most journals suggest a timeline of a couple of weeks from the moment the invitation was accepted (usually 2-4 weeks). While this may seem sufficient time to return the review, most scientists have a large workload, and end up allowing only a few days for the review. Moreover, it can take more than 8 hours to provide a thoughtful, thorough, and well-referenced review (can depend on the paper type of course, so also pay attention to that). If you are unable to meet the deadline, contact the journal so that the editors can determine the appropriate course of action (some extensions can be granted at the discretion of the editors).

✓ Check journal guidelines and adjust your workflow.
It is better to do this early on in the review.

✓ Respond as soon as possible: Accept/Decline.
Explain to the editor the reason for declining, and offer, if possible, suggestions for other reviewers.

Step 2: First Read – Gaining an overview

✓ Set up the structure of review
Prepare a document (I prefer to have a simple text file at hand) containing the following structure of the review:

R0. Review details
R1. Introduction (3 paragraphs)
R2. Major issues (numbered items)
R3. Minor issues (indicate line, figure, table numbers)
R4. Other suggestions (regarding supplementary material, etc.)
Notes (not included in final review)

✓ Read the entire paper. Take notes as you go
The first reading is to get an overall impression of the paper: motivation, approach, overview of results, and conclusions. Take some notes as you go. I usually like to print a copy and make annotations as I go along. However, don’t worry too much about corrections, spelling, punctuation, or references. It’s supposed to be a ‘casual’ reading (kidding). It might be a challenge, but at this point do your best to understand the paper. Some papers read (and are written) better than others, and it would be a shame to miss an interesting study just because of language barriers. And it is perfectly normal (apparently!) to struggle reading a scientific article with “ultra-congested and aggressively bland” text. This article might help and amuse you: the 10 Stages of Reading a Scientific Paper.

✓ Go through all figures and tables
Do they complement the approach, results section, and conclusions?

✓ Readability
Sometimes it cannot be helped but to ask: is the English/writing so bad that you can’t understand the arguments? If the manuscript needs copyediting by a proficient English speaker before you can evaluate it on its scientific merits, it is legitimate to make such a suggestion to the editor at this stage. You can point out that you cannot give the paper a fair review in its current form, and suggest that the paper be withdrawn from review until the English is improved.

✓ Identify goals, method, findings, and relevance
The questions below might help:

  • What is the main question addressed by the research?
  • Is this question interesting and important to the field of study? How, specifically, will the paper contribute to the science?
  • Do the Abstract and Introduction clearly identify the need for this research, and its relevance?
  • Does the Method target the main question(s) appropriately?
  • Are the Results presented clearly and logically, and are they justified by the data provided?
  • Are the figures clear and fully described?
  • Do the Conclusions justifiably respond to the main questions posed by the author(s) in the Introduction?
  • Is the paper within the scope of the journal?
  • Is the paper potentially publishable based on its contribution to the field?

✓  Write introductory paragraphs (Section R1 – first 2 paragraphs) [“The study investigates/uses/finds/contributes”]
Answering the above questions will help you start the written review. In general, the most helpful review for everyone is one that first provides an overall summary of the main contributions of the paper and its appropriateness for the journal, and then suggests what major items should be addressed in revision. This summary can also help you work out what the paper is really about, if you weren’t sure until now. Or you might end up writing back: “It was difficult to understand the precise point(s) the authors were trying to make.”

The first paragraph should state the main question addressed by the research, and summarize the goals, approaches, and conclusions of the paper. Try writing one sentence for each of these points. The second paragraph of the review should provide a conceptual overview of the contribution of the paper to the journal. Some people suggest trying to also include here the positive aspects in which the paper succeeds, since there is enough space for negative aspects for the remainder of the review. The authors will have a sense of what they have done well, and will not be too discouraged.

✓  Evaluate whether the manuscript is publishable/or not (Section R1 – 3rd paragraph)

[“I recommend the manuscript not/to be published in Journal X with minor/major modifications, and I provide below the reason for my decision and some comments that are necessary to address….”]

You have three decision options: the manuscript is/has

  1. publishable in principle -> Continue review to Step 3: Second Read.
  2. major flaws, but addressable -> Return manuscript to authors for corrections, but document and substantiate the flaws, indicate willingness to provide full review if authors address them (continuing to Step 3: Second Read may still be helpful to reply to editor/authors).
  3. fatally flawed/unsuitable -> Reject, but document and substantiate why. You consider the manuscript is flawed in a way that cannot be fixed and/or is unsuitable for publication in the target journal (high impact journals like Nature or Science reject most submissions solely based on the suitability of study to the journal).

Some manuscripts can have flaws that cannot be overlooked or improved easily. Examples of such fatal flaws might include drawing a conclusion that is contravened by the author’s own statistical evidence, the use of a discredited method, or ignoring a process that is known to have a strong influence on the system under study. Whatever the decision, remember to carefully explain your reasoning and provide clear evidence for it (including citations from other scientific publications).

Assuming there is no fatal flaw, you can continue to a second reading. Personally, I like to let the paper sit for a couple of days after the First Read and let my mind digest the information. It may be surprising, but allowing some time for your brain to synthesize the major aspects (strengths and weaknesses) of the paper helps you focus primarily on those aspects in the next stage.

Step 3: Second Read – The science (major/minor points)

✓  Take detailed Notes (end of review file) indicating section, line, figure, and table numbers

Read the manuscript in detail from start to finish. Pay attention to assumptions, methods, underlying theoretical frameworks, and the conclusions drawn and how well they are supported. Refer to figures and tables when they are referenced in the text, making sure that the text and the graphics support rather than repeat each other. Use your careful study of the figures at the end of the first reading to avoid too much disruption to the flow of your assessment.

I have found it useful to dump all my comments into the review file (brainstorming as I re-read the manuscript), including specific comments, thoughts, or issues I want to return to. Indicate the line or figure numbers for all comments. There is a reason why most journals ask authors to add line numbers to their submissions: to be specific about various comments and suggestions. For example, “line 189 contradicts the statement in line 20”, “paragraph 45-52 is unclear and convoluted, and should be rephrased”, “Figure 2a needs X,Y labels”, etc. Most of the review will then have the important details written by the time you are done with the second reading. Another tip: it helps to classify your comments as major or minor flaws. Major flaws will need considerable time to explain or correct.

Note: some journals allow reviews as annotated PDFs. I found these are not that helpful, because as a manuscript author I had to transcribe many of those comments again into the reply to reviewers. Plus, a single read of the paper might not give enough insight into its strengths/weaknesses.

A sub-checklist:

  • Check every section individually (my preferred order): Introduction, Methods, Results, Discussion, Conclusions, Abstract, Other (e.g., Key points, Appendices). Make notes also on structure and flow of arguments.
  • Check method (i.e., equations, the experimental setup, data collection, details needed for reproducing results, and if that is not possible, is it stated why?).
  • Check all figures and tables, so that you understand all units, axes, and symbols. Do the figures reflect the main text?
  • Check References/referencing is done correctly.
  • Check any supplementary material.
  • Remind yourself of the journal’s guidelines. Most importantly, does the manuscript comply with the journal’s data policy and best practices?

✓  Identify major and minor points (Sections R2 and R3)

Now it’s time to organize all those notes and comments. I usually sort them into two categories: major and minor issues (Sections R2 and R3 of the review). In general, the minor issues (e.g., line 21 – missing reference to the referred study, line 32 – sentence not clear, line 56 – typos) do not need further work at this stage.

Major points, on the other hand, require some work. First, organize major points clearly and logically, using separate numbered paragraphs or bullets to make each point clearly stand out. Make use of your numbered notes to provide evidence. It is the reviewer’s obligation to point out the weaknesses in the underlying science. If the methods are suspect, or if the authors over-interpret the data, or if they overlook important implications of their work, or more analyses are needed to support the conclusions, you should point that out as major points.

Is it possible to have too many major points? Could it be because they are not that major (an overestimation of their importance), or are they really major and cannot be overlooked? This might make you re-evaluate the review (major/minor, largely flawed). In general, I have neither been given nor found more than 10 major points in a manuscript, but exceptions can happen. Very importantly, try to advise the authors with concrete, actionable ways to address the problems.

✓ Add Other Points (Section R4 – Optional)

If there is anything else to add to the review, neither fitting the category of major nor minor points, such as suggestions for future work, add them at the end of review. If you have no further comments, it’s fine to leave this section empty.

Step 4: Final Read – The writing and formulation

Briefly read through the paper a third time, looking for organizational issues, and finalize the review.

✓ Check organization and flow of arguments

While you have probably already noted down many such issues (because if the manuscript is poorly written, the arguments will often not make sense either), it’s still a good idea to quickly go over the writing and presentation (section headings, details of language and grammar). Suggest ways to make the story more cohesive and easily reasoned.

Was the paper hard to read because the paragraphs did not flow together? Did the authors use excessive and confusing acronyms or jargon? In such cases, I actually include improving the structure of the manuscript as a major point. However, do not feel obligated to catch every typo, missing reference, and awkward phrase – your scientific assessment of the paper is more important.

✓ Read and polish your own review
Read the review carefully, preferably aloud, imagining you are the editor or the authors of the study. What is the tone of your arguments? How would you feel receiving it back as the author? Would you find the review helpful and constructive? Is it fair? This will draw your attention to how your criticisms might sound to the ears of the authors. Make sure to keep the tone civil and include both positive and negative comments.

✓ Upload your review using the link provided
I usually copy and paste my review (Sections R1–R4) into the boxes provided by the journal, or upload the polished review file.

✓ Answer specific questions regarding the manuscript and its presentation
You will probably also be asked to answer specific questions or to rate the manuscript on various attributes (via drop-down selections).

✓ Remarks to the editors
Any issues that the editors should be aware of can be indicated separately in Remarks to the editors, which remain confidential.

✓ Submit review to editor
You are done! You will probably hear back from the editor about their decision to accept or reject the manuscript. It is important to understand that the editors make the final decision; your role in the whole process is only advisory. However, you may be asked to review another version of the manuscript, to assess whether it has been modified sufficiently in response to the reviewers’ comments.

5. Etiquette of reviewing

I hope by now you have a clear idea of what constitutes the peer-review process and how to perform a review. Again, there are many ways to undertake a review, but maybe you will find the checklist useful. It certainly requires considerable time and effort, but the checklist allows me to be confident that I gave the submitted work my best consideration. By ticking off tasks, I can also perform the review efficiently, without worrying that I have forgotten something.

However, I am aware that we tend to be over-critical of other people’s work (and the workflow proposed here is quite lengthy). I have heard more than once the comment that “young scientists provide very lengthy and harsh reviews”. That holds a grain of truth: from a desire to be thorough, we might ask for extra work in revisions that goes beyond the scope of the manuscript or the available resources (time, material, etc.). We need to be aware of that, but at the same time invite authors to discuss potential avenues for the work.

What I will discuss next is the etiquette of reviewing: what to do and not to do, the fairness of reviews, providing and receiving criticism, and the Golden Rule of reviewing. As you will see below, these topics are interconnected.

5.1 What to do/not to do when peer-reviewing

Top 3 To do:

– the review does not have to be long, but make sure it is thorough and fair, as you would want others’ reviews of your work to be.

– be critical, argumentative, and straightforward: explain the problem, why it’s a problem, and suggest a solution.

– finish before the agreed upon deadline.

Top 3 Don’t do:

– use a sarcastic, dismissive, or otherwise hostile tone. The review should be constructive, not offensive.

– be biased or let personal prejudices influence your assessment of the manuscript (e.g., because of poor English or excessive self-citations). In such cases, it’s better to decline the review and explain the potential conflict of interest.

– write an overly short review (even if it is a great study). The authors might be happy to hear that, but the editor will not find it useful.

5.2 Working towards a transparent and fair peer-review process 

Scientific peer-review is regularly criticized as being ineffective, broken, or unfair. However, journals are generally committed to taking active steps to ensure the fairness of the process. For example, most journals have clear ethical guidelines (e.g., AGU, EGU), and all participants in the review process are expected to uphold them.

While most of the review interaction happens privately between the authors, editors, and reviewers, some journals (e.g., EGU journals, Nature Communications) have taken a step towards making the peer review process more transparent, such that manuscript authors are given the option to publish the peer review history of their paper. This is great for making the process more open and fair! However, making things public can, in some cases, create unethical practices. For example, to ensure the impartiality and confidentiality of the peer review process, you should not discuss your review of the paper with anyone before or after publication. Also, revealing yourself as a reviewer to the authors after the review might create the wrong impression, as if you were asking for favourable treatment in the future.

This last aspect raises the questions: “Should we reveal ourselves as reviewers or not?” and “Anonymous or signed review?” (see this perspective and another one). A senior editor and the author of Geoverbiage, Judy Totman Parrish, says that whether you sign your reviews is a personal decision, and that she has always signed her reviews to ensure transparency and the free exchange of ideas. She also says that she has never experienced any backlash for any of the reviews she wrote.

I write my reviews anonymously, with exactly the same goal as above: to work towards a fair peer-review process. I find that there are many instances in which biases can form during the peer-review process (e.g., based on gender, age, or nationality, but also on experience, affiliation, or even prestige/prominent names) (see this article). My personal take is that reviews should be double-anonymous (authors do not know who the reviewers are, and reviewers do not know who the authors are), and that the review history should be made public. I think this would reduce biases (probably not completely, as some authors/research groups can still be identified by the work submitted), while a transparent review history could ensure the fairness and civility of the review process. Also, with the rise of review statistics (ORCID and/or Publons), one can still be acknowledged for the work performed without having to sign one’s name. This might not be possible in some fields (e.g., medicine or other fields where ethical guidelines are stricter), but in geophysics and geodynamics it shouldn’t be a problem.

5.3 Providing and taking criticism

This section might seem out of place in this blog post, but imagine for a moment that you are an author, and you have just put a lot of work into writing your best paper so far. Your co-authors have read and re-read the paper, generating multiple improved versions with their comments. You finally submit the manuscript for review and, after what seems like a very long time, you get the reviews back. Would you take the delivered message(s) as intended, or be hurt by them? With time you learn not to take things personally, but it is hard not to feel at least the tiniest bit affected by major criticism of the study you have worked so hard on.

The peer-review process puts you at the other end of writing papers. Therefore, I think scientists need to try their best to provide and receive constructive criticism, and to identify and avoid destructive criticism (usually, criticism directed at a person rather than at the work). What helps is to ask yourself: “Is it a fair point? Could I use it to make a better version of the manuscript?”. If the answer is ‘yes’, then take the comment and use it to improve your work.

5.4 The Golden Rule

I would like to finish with the Golden Rule of reviewing. In their interesting read, McPeek et al. (2009) suggest that reviewers perform reviews with this in mind: “Review for others as you would have others review for you”.

I think as a more general rule, we can use some ancient wisdom: “Don’t do to others what you wouldn’t want done to yourself!”. It goes for reviewing and many aspects of life.

References:

Hames, I., (2008), Peer review and manuscript management in scientific journals: guidelines for good practice. John Wiley & Sons, https://onlinelibrary.wiley.com/doi/book/10.1002/9780470750803

McPeek, M.A., DeAngelis, D.L., Shaw, R.G., Moore, A.J., Rausher, M.D., Strong, D.R., Ellison, A.M., Barrett, L., Rieseberg, L., Breed, M.D., Sullivan, J., Osenberg, C.W., Holyoak, M., and Elgar, M.A., (2009), The Golden Rule of Reviewing, The American Naturalist, Vol. 173, No. 5, 155-158, https://www.journals.uchicago.edu/doi/10.1086/598847

Nicholas, K.A., and Gordon, W. (2011), A quick guide to Writing a solid peer review, EOS, Vol. 92, No. 28,  https://sites.agu.org/publications/files/2013/01/PeerReview_Guide.pdf

Stiller-Reeve et al. (2018), A peer review process guide, https://www.scisnack.com/wp-content/uploads/2018/10/A-Peer-Review-Process-Guide.pdf

Stiller-Reeve et al. (2018), How to write a thorough peer review, Nature, doi: 10.1038/d41586-018-06991-0, https://www.nature.com/articles/d41586-018-06991-0

Zimmerman, N., R. Salguero-Gomez, and J. Ramos (2011), The next generation of peer reviewing, Front. Ecol. Environ., 9(4), 199, doi:10.1890/1540-9295-9.4.199, https://esajournals.onlinelibrary.wiley.com/doi/pdf/10.1890/1540-9295-9.4.199

Presentation skills – 2. Speech


Presenting: some people love it, some people hate it. I firmly place myself in the first category and, apparently, this presentation joy translates into being a good – and confident – speaker. Over the years, quite a few people have asked me for my secrets to presenting (which – immediate full disclosure – I do not have) and this is the result: a running series on the EGU GD Blog that covers my own personal tips and experience, in the hope that it will help someone (you?) become a better and – more importantly – more confident speaker. Last time, we discussed your presentation voice. In this second instalment, I discuss everything related to how you speak.

1. Get rid of ‘uh’

Counting the number of times a speaker says ‘uh’ during a presentation is a fun game, but ideally you would like your audience to focus on the non-uh segments of your talk. Therefore, getting rid of ‘uh’ (or any other filler word for that matter) is important. I have two main tips to get rid of ‘uh’:

Write down your speech and practise (but don’t hold on to it religiously)

Practise. Practise. And practise it again. Maybe a few more times. Almost… no: practise it again.
I am being serious here. If you know exactly what you want to say, you won’t hesitate and fill that moment with a prolonged uuuuuhhh. The added benefit of writing down your presentation and practising it religiously is that it will help you with timing your presentation as well. I also find it helpful to read through it (instead of practising it out loud) when I am in a situation that doesn’t allow me to go into full presentation mode (on the plane to AGU, for example). However, make sure to practise your presentation out loud even though you wrote it all down: thinking speed (or reading in your head) and talking speed are not the same!

If you write down your presentation and know exactly what you want to say, you have to take care to avoid another (new) ‘uh’ pitfall: now that you know exactly what you want to say and how to say it most efficiently, you may start saying ‘uh’ when you can’t remember the exact wording. Let it go. Writing down your speech helps you find the right vocabulary, but if you can’t recall the exact sentences, just go with something else. You will have a well-thought-out speech anyway. Just go with the flow and try not to say ‘uh’.

The second main tip for getting rid of ‘uh’ is to

Realise that it is okay to stay silent for a while

If you forget the word you wanted to say and you need some time to think, you can take a break. You can stay silent. You don’t need to fill up the silence with ‘uh’. In fact, a break often seems more natural. Realise that you forgot something, don’t panic, take a breath, take a break (don’t eat a KitKat at this point in your presentation), and then continue when you know what to say again. Even if you don’t forget the exact words or phrasings, taking a breath and pausing in your narrative can be helpful for your audience to take a breath as well. It will seem as if your presentation is relaxed: you are not rushing through 50 slides in 12 minutes. You are prepared, you are in control, you can even take a break to take a breath.

2. Speed

A lot of (conference) presentations will have a fixed time. At the big conferences, like EGU and AGU, you get 12 minutes and not a second more or less. Well, of course you can talk longer than 12 minutes, but this will result in less (if any) time for questions.

I don’t think the conveners will kill you, but don’t pin me down on it

And on top of that, everyone (well, me at the very least) will be annoyed at you for not sticking to the time.

So: sticking to your time limit is important!

But how can you actually do this? Well, there are a few important factors:
1. Preparation: know exactly what you want to say (we will cover this more in a later instalment of this series)
2. The speed at which you speak.

We will be discussing the latter point in this blog entry. Like many other people, I know I can stick to the rule of “one slide per minute”, but I always give myself a little buffer by counting the title slide as a slide as well. So, my 12-minute presentation would have 12 slides in total (including the title slide). This actually spreads my 12 minutes over 11 scientific slides, so I can talk a little bit longer about each slide. It also gives me peace of mind to know that I have a bit of extra time. However, the speed at which you talk might be completely different. Therefore, the most important rule about timing your presentations is:

Knowing how fast you (will) speak

I always practise my short presentations a lot. If a presentation is 30 minutes or longer, I like to freewheel with the one-slide-per-minute rule, but shorter presentations require a lot of practice. I always time every practice attempt and make a point of finishing each attempt (even if the first part goes badly); otherwise, you run the risk of rehearsing the first part of your presentation very well and kind of forgetting about the second part. When I time my presentation during practice, I always speak too long: for a 12-minute presentation, I usually end up at the 13.5-minute mark. However, I know that when I speak in front of an audience, I (subconsciously?) speed up my speech, so when I time 13.5 minutes, I know that my actual presentation will be a perfect 12 minutes.

The only way to figure out how you change or start to behave in front of an audience is by simply giving a lot of presentations. Try to do that and figure out whether you increase or decrease the speed of your speech during your talk. Take note and remember it for the next time you time your presentation. In the end, presenting with skill and confidence is all about knowing yourself.

3. Articulation and accent

There are as many accents to be heard at a conference as there are scientists talking. Everyone has their own accent, articulation, (presentation) voice, etc. This means that

You should not feel self-conscious about your accent

Some accents are stronger than others and may be more difficult to follow. Native speakers are by no means necessarily better speakers and, depending on whom you ask, their accent might not be better than anyone else’s.
Of course, your accent might become an issue if people can’t understand you. You can consider the following things to make yourself understandable to a big audience:
1. Articulate well.
2. Adapt the speed at which you talk.

Some languages are apparently faster than others: French is quite fast, for example, whereas (British) English is a slower language. You have to take this into account when switching languages. If you match the pace of the language you are speaking, your accent will be less noticeable, because you avoid any ingrained rhythm patterns that are language-specific. Your accent might still shine through in your pronunciation of the words, but it will not shine through in the rhythm of your speech.
In addition, you can consider asking a native speaker for help if you are unsure how to pronounce certain words. Listening to or watching English/American/Australian TV series, films, or YouTube videos will also help with your pronunciation.

And that, ladies and gentlemen, is about everything I have to say on the matter of speech. You should now have full control over your presentation voice and all the actual words you are going to say. Next time, we go one step further and discuss your posture during the presentation and your movements.

It’s just coding … – Scientific software development in geodynamics

The Spaghetti code challenge. Source: Wikimedia Commons, Plamen petkov 92, CC-BY-SA 4.0

As big software packages become commonplace in geodynamics, which skills should a geodynamicist aim to have in software development? Which techniques should be considered a minimum standard for our software? This week Rene Gassmöller, project scientist at UC Davis, Computational Infrastructure for Geodynamics, shares his insights on the best practices that make scientific software better, and on how we can translate these into our field. Enjoy the read!

Rene Gassmöller

Nowadays we often equate geodynamics with computational geodynamics. While there are still interesting analytical studies to be made, and important data to be gathered, it is increasingly common that PhD students in geodynamics are expected to work exclusively on data interpretation, computational models, and in particular the accompanying development of geodynamic software packages. But as it turns out, letting an unprepared PhD student (or unprepared postdoc or faculty member for that matter) work on a big software package is a near guarantee for the project to develop into a sizeable bowl of spaghetti code (see figure above for a representative illustration).

Note that I intentionally write about ‘software packages’ instead of ‘code’, as many of these packages — think of GPlates (Müller et al., 2018), ObsPy (Krischer et al., 2015), FEniCS (Alnaes et al., 2015), or the project I am working on, ASPECT (Heister et al., 2017) — have necessarily left the stage of a quickly written ‘code’ for a single purpose, and developed into multi-purpose tools with a complex internal structure. With this growing complexity, the activity of scientific ‘coding’ evolved into ‘developing software’. However, when students enter the field of geophysics, they are rarely prepared for this challenge. Hannay et al. (2009) report that while researchers typically spend 30% or more of their time developing software, 90% of them are primarily self-taught, and only a few of them received formal training in writing software, including tests and documentation. Nobody told them: programming and engineering software are two very different things. Many undergraduate and graduate geoscience curricula today include classes on the basics of programming (e.g. in Python, R, or Matlab), and also discuss numerical and computational methods. While these concepts are crucial for solving scientific problems, they are not sufficient for managing the complexity of growing scientific software. Writing a 50-line script is a very different task from contributing to an inherited and poorly documented PhD project of 1,000 lines, which again is very different from managing a multi-developer project of 100,000 lines of source code. A recurring theme is that these differences are only discovered once the damage has already been done. Hannay et al. (2009) note:

Codes often start out small and only grow large with time as the software proves its usefulness in scientific investigations. The demand for proper software engineering is therefore seldom visible until it is “too late”.

But what are these ‘proper software engineering techniques’?

Best practices vs. Best techniques in practice

In a previous blog post, Krister Karlsen already discussed the value of version control systems for the reproducibility of computational research. Needless to say, these systems (originally also termed source code control systems, e.g. Rochkind, 1975) are just as valuable for scientific software development as they are for the reproducibility of results. However, they are not sufficient for developing reliable scientific software. Wilson et al. (2014) summarize a list of 8 best practices that make scientific software better:

  1. Write programs for people, not computers.
    • A program should not require its readers to hold more than a handful of facts in memory at once.
    • Make names consistent, distinctive, and meaningful.
    • Make code style and formatting consistent.
  2. Let the computer do the work.
    • Make the computer repeat tasks.
    • Save recent commands in a file for re-use.
    • Use a build tool to automate workflows.
  3. Make incremental changes.
    • Work in small steps with frequent feedback and course correction.
    • Use a version control system.
    • Put everything that has been created manually in version control.
  4. Don’t repeat yourself (or others).
    • Every piece of data must have a single authoritative representation in the system.
    • Modularize code rather than copying and pasting.
    • Re-use code instead of rewriting it.
  5. Plan for mistakes.
    • Add assertions to programs to check their operation.
    • Use an off-the-shelf unit testing library.
    • Turn bugs into test cases.
    • Use a symbolic debugger.
  6. Optimize software only after it works correctly.
    • Use a profiler to identify bottlenecks.
    • Write code in the highest-level language possible.
  7. Document design and purpose, not mechanics.
    • Document interfaces and reasons, not implementations.
    • Refactor code in preference to explaining how it works.
    • Embed the documentation for a piece of software in that software.
  8. Collaborate.
    • Use pre-merge code reviews.
    • Use pair programming when bringing someone new up to speed and when tackling particularly tricky problems.
    • Use an issue tracking tool.
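A few of these practices can be made concrete in a handful of lines. The sketch below is not taken from any particular package — the function, names, and values are invented for illustration, and Python is chosen simply because it is the language most geoscience curricula already teach. It touches practices 1, 5, and 7: meaningful names, assertions that catch misuse early, a bug turned into a test case, and documentation of the interface rather than the mechanics:

```python
def mean_mantle_temperature(temperatures_kelvin):
    """Return the arithmetic mean of a list of temperatures in Kelvin.

    The docstring documents the interface (units in, units out),
    not the mechanics of how the mean is computed (practice 7).
    """
    # Practice 5: assertions catch misuse early, instead of silently
    # returning a meaningless value.
    assert len(temperatures_kelvin) > 0, "need at least one temperature"
    assert all(t >= 0.0 for t in temperatures_kelvin), "Kelvin is non-negative"
    return sum(temperatures_kelvin) / len(temperatures_kelvin)


# Practice 5 again: turn bugs into test cases. Suppose an earlier version
# crashed on single-element input; this test pins the fix down for good.
def test_single_value():
    assert mean_mantle_temperature([1600.0]) == 1600.0
```

An off-the-shelf runner such as pytest (the “unit testing library” of practice 5) would discover and run `test_single_value` automatically, so the check is repeated on every future change.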

There is a lot to be said about each of these techniques, but that would be beyond the scope of this blog post (please see Wilson et al.’s excellent and concise paper if you are interested). What I would like to emphasize here is that these techniques are often requested, but rarely taught. What are peer code reviews? How do I gradually introduce tests and refactor legacy code? Who knows whether it is better to use unit testing, integration testing, regression testing, or benchmarking for a given change to the code? And do I really need to know the difference? After all, a common argument against using software development techniques in applied computational science disciplines boils down to:

  • We can not expect these software development techniques from geodynamicists.
  • We should not employ the same best practices as Google, Amazon, Apple, because they do not apply to us.
  • There is no time to learn/apply these techniques, because we have to conduct our research, write our publications, secure our funding.

While from a philosophical standpoint it is easy to dismiss these statements as not adhering to best practices, and as possibly impacting the reliability of the created software, it is harder to tackle them from a practical perspective. Of course it is true that implementing a sophisticated testing infrastructure for a one-line shell command is neither useful nor necessary. Maybe the same is true for a 20-line script written specifically to convert one dataset into another, but in this case putting it under version control would already be useful in order to record your process and apply it to other datasets. And from my own experience it is extraordinarily easy to miss the threshold, somewhere around 40-100 lines, at which writing documentation and implementing first testing procedures become crucial to avoid cursing yourself in the future for not explaining what you did and why you did it. So why are there detailed instructions for lab notes and experimental procedures, but not for geodynamic software design and the reliability of scientific software? Geoscience, chemistry, and physics have established multi-semester lab and field exercises to train students in careful scientific analysis. Should we develop comparable exercises for scientific software development (beyond numerical methods and basic programming)? What would an equivalent of these classes look like for computational methods? And is there a point where the skills of software development and geodynamics research grow so far apart that we have to consider them separately and establish a unique career track, such as the Research Software Engineer that is becoming more popular in the UK?
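The 20-line-script case is cheap to cover in practice. As a sketch (the file name, stub content, and commit messages are placeholders for illustration), putting such a conversion script under version control takes only a few standard git commands:

```shell
cd "$(mktemp -d)"   # scratch directory for this demo; normally you would
                    # run these commands where the script already lives
echo 'print("convert dataset A to format B")' > convert_dataset.py

git init
git config user.name "Your Name"         # identity, once per machine/repo
git config user.email "you@example.org"
git add convert_dataset.py
git commit -m "Add conversion script for dataset A"

# Later, after adapting the script to another dataset:
echo '# now also handles dataset C' >> convert_dataset.py
git commit -am "Generalize conversion script for dataset C"
git log --oneline   # the recorded history of your process
```

Each commit records what changed and why, which is exactly the record of your process that lets you apply the script to other datasets later without guessing what past-you was thinking.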

In my personal opinion we have made great progress over the past few years in defining best practices for scientific software (see e.g. https://software.ac.uk/resources/online-sustainability-evaluation, or https://geodynamics.org/cig/dev/best-practices/). However, it is still considered a personal task to acquire the necessary skills and to find the correct balance between careful engineering and overdesigning software. Establishing courses and resources that discuss these questions could greatly benefit our community, and allow for more reliable scientific progress in geodynamics.

Collaborative software development – The overlooked social challenge

The contributor funnel. The atmosphere and usability of a project influence how many users will join a project, how long they stick around, and if they will take responsibility for the project by contributing to it or eventually become maintainers. Credit: https://opensource.guide/

Now that we have covered every topic a scientist can learn about scientific software development in a single blog post, what can go wrong when you put several such scientists together to work on a software package? Needless to say, a lot. Whether your software project is a closed-source, intra-workgroup project or an open-source project with users and developers spread over different continents, things are going to get exponentially more complicated the more people work on your software. Not only do discussion and interaction take more time, there will also be conflicting ideas about computational methods, software design, or implementation. Using state-of-the-art tools like collaborative development platforms (GitHub, GitLab, Bitbucket, pick your favourite) and modern discussion channels like chats (Slack, Gitter), forums (Discourse), or video conferences (Skype, Hangouts, Zoom) can alleviate a part of the communication barriers. But ultimately, the social challenges remain. How does a project decide between the competing goals of flexibility and performance? Who is going to enforce a code of conduct in a project to keep the development environment open and friendly? Does a project create a welcoming atmosphere that invites new contributions, or does it repel newcomers with unrealistic standards and inappropriate behavior? How should maintainers of scientific software deal with unrealistic feature requests from users? How do we encourage new users to become contributors and take responsibility for the software they benefit from? How do we balance contributing improvements to the upstream project against publishing them as scientific papers? How do we give credit to contributors?

In my opinion it is unfortunate that these questions about scientific software projects are discussed even less than the (now increasing) awareness of reproducibility. On the bright side, there is already a trove of experience in the open-source community. The same questions about attribution and credit, collaboration and community management, and correctness and security have been discussed over the past decades in open-source projects all over the world, and nowadays a good number of resources provide guidance, such as https://opensource.guide/ or the excellent book ‘Producing Open Source Software: How to Run a Successful Free Software Project’ (Fogel, 2017). Not all of it can be transferred to science, but we would waste time and energy by dismissing this experience and repeating those mistakes ourselves.

Let us talk about engineering scientific software

I realize that in this blog post I have raised more questions than I have answered. Maybe that is because I am not aware of the answers that are already out there; but maybe it is also caused by the lack of attention these questions receive. I feel that there are no established guidelines for which software development skills a geodynamicist should have, or for what techniques should be considered a minimum standard for our software. If that is the case, I would invite you to have a discussion about it. Maybe we can agree on a set of guidelines and improve the state of software in geodynamics. But at the very least I hope I have inspired some thought on the topic, and provided some resources to learn more about a discussion that will likely grow more important over the coming years.

References:

M. S. Alnaes, J. Blechta, J. Hake, A. Johansson, B. Kehlet, A. Logg, C. Richardson, J. Ring, M. E. Rognes and G. N. Wells. The FEniCS Project Version 1.5. Archive of Numerical Software, vol. 3, 2015, http://dx.doi.org/10.11588/ans.2015.100.20553.

Fogel, K. (2017). Producing Open Source Software: How to Run a Successful Free Software Project. O'Reilly Media, 2nd edition.

Hannay, J. E., MacLeod, C., Singer, J., Langtangen, H. P., Pfahl, D., & Wilson, G. (2009). How do scientists develop and use scientific software?. In Proceedings of the 2009 ICSE workshop on Software Engineering for Computational Science and Engineering (pp. 1-8). IEEE Computer Society.

Heister, T., Dannberg, J., Gassmöller, R., & Bangerth, W. (2017). High accuracy mantle convection simulation through modern numerical methods–II: realistic models and problems. Geophysical Journal International, 210(2), 833-851.

Krischer, L., Megies, T., Barsch, R., Beyreuther, M., Lecocq, T., Caudron, C., & Wassermann, J. (2015). ObsPy: A bridge for seismology into the scientific Python ecosystem. Computational Science & Discovery, 8(1), 014003.

Müller, R.D., Cannon, J., Qin, X., Watson, R.J., Gurnis, M., Williams, S., Pfaffelmoser, T., Seton, M., Russell, S.H. & Zahirovic, S. (2018). GPlates–Building a Virtual Earth Through Deep Time. Geochemistry, Geophysics, Geosystems.

Open Source Guides. https://opensource.guide/. Oct, 2018.

Rochkind, M. J. (1975). The source code control system. IEEE transactions on Software Engineering, (4), 364-370.

Wilson, G., Aruliah, D.A., Brown, C.T., Hong, N.P.C., Davis, M., Guy, R.T., Haddock, S.H., Huff, K.D., Mitchell, I.M., Plumbley, M.D. and Waugh, B. (2014). Best practices for scientific computing. PLoS biology, 12(1), e1001745.