Grace Shephard

Grace is a postdoctoral researcher at the Centre for Earth Evolution and Dynamics (CEED) at the University of Oslo, Norway. She works on linking plate tectonic reconstructions and mantle structure, especially in the Arctic and Pacific regions. Grace is part of the GD blog team as an Editor. You can reach Grace via email.

GD Guide to EGU19


With this year’s EGU General Assembly (GA; #EGU19) less than a week away, it’s time for all attendees to finish (or start) their own scientific contributions, create their personal programmes, and plan other activities during the conference. In this blog, Nico Schliffke (GD ECS Rep) shares some useful advice on how to successfully navigate the conference and highlights relevant activities, both scientific and social, for Geodynamics Early Career Scientists (ECS).

The huge variety of scientific contributions (~18,000 at EGU18) might seem intimidating at first and makes it impossible for any individual to keep track of everything. To be well prepared for the conference, set aside a bit of time to create your own personal programme: log in with your account details and search for relevant sessions, keywords, authors, friends or any other fields of interest. If you find anything interesting, add it to your personal programme by ticking the ‘star’. After completing your personal programme, you can print your own timetable or open it in the EGU 2019 app.

Besides all the (specific) scientific content of the GA, EGU19 offers a wide range of exciting workshops and short courses to boost your personal and career skills, as well as great debates, union-wide events and division social events. Below you will find a list of highlight events, ECS-targeted events, social events and other things to keep in mind to make the most of EGU19:

For first-time attendees:

How to navigate the EGU: tips and tricks (Mon, 08:30 – 10:15, Room -2.16) – This workshop is led by several EGU ECS representatives and will give an overview of procedures during EGU, as well as useful tips and tricks on how to successfully navigate the GA.

GD workshops and short courses:

Geodynamics 101A: Numerical methods (Thur, 14:00-15:45, Room -2.62) – Building on last year’s short course, we are happy to announce two short courses this year as part of the ’Solid Earth 101’ series, together with Seismology 101 and Geology 101. The first course deals with the basic concepts of numerical modelling, including discretisation of governing equations, building models, and benchmarking, among others.

Geodynamics 101B: Large-scale dynamical processes (Fri, 14:00-15:45, Room -2.62) – The second short course will discuss the applications of geodynamical modelling. It will give a state-of-the-art overview of the main large-scale dynamic processes on Earth (mantle convection, continental breakup, subduction dynamics, crustal deformation…), but also discuss constraints coming from seismology (tomography) and the geological record.

Geology 101: The (hi)story of rocks (Tue, 14:00 – 15:45, Room -2.62) – The complementary workshop in the 101 series: find out more about structural and petrological processes on Earth. It’s definitely worth knowing; otherwise, why would we run many of these geodynamical models?

Seismology 101 (Wed, 14:00 – 15:45, Room -2.62) – The second complementary workshop in the 101 series. Many geodynamical models are based on observations from seismological methods. Find out more about earthquakes, beachballs and what seismologists actually measure – this is essential for any numerical or analogue geodynamical model!

GD related award ceremonies and lectures:

Arne Richter Award for Outstanding ECS Lecture by Mathew Domeier (Tue, 12:00-12:30, Room -2.21) – The Arne Richter Award is a union-wide award for young scientists. We are happy to see that Mathew, a geodynamicist, has won the award this year! Come along and listen to his current research.

Augustus Love Medal Lecture by Anne Davaille (Thur, 14:45-15:45, Room D1) – Listen to the exciting work of the first female winner of the Augustus Love Medal (the GD division award), Anne Davaille! She specialises in experimental and analytical fluid dynamics, which has given geodynamics many new insights.

Arthur Holmes Medal Lecture by Jean Braun (Tue, 12:45-13:45, Room E1) – This is one of the most prestigious EGU awards for solid Earth geosciences. Jean is a geodynamicist from Potsdam who works on integrating surface and lithospheric dynamics into numerical models.

GD division social activities:

ECS GD informal lunch (Mon, 12:30-14:00) – Come and meet the ECS team behind these GD activities! Meet in front of the conference centre (look for “GD” stickers) to head to the food court in Kagran (2 subway stops away from the conference centre, in the opposite direction from the city centre).

ECS GD dinner (Wed, 19:30-22:00) – Join us for a friendly dinner at a traditional Viennese ‘Heurigen’ with fellow ECS Geodynamicists at Gigerl – Rauhensteingasse 3, Wien 1. Bezirk!  If you would like to attend the ECS GD dinner on Wednesday, please fill out this form so we can keep track of the number of people: https://docs.google.com/forms/d/e/1FAIpQLScpi8gvDDMOOOjLbtq4BrElsoBtTv86Mud7qNQ5yl7qWP5cUA/viewform  Remember to bring some cash to pay for your own food and drinks!

GD/TS/SM drinks (Wed, after ECS GD dinner) – Don’t worry if you cannot make it to the ECS GD dinner! After dinner we’ll take a 5-minute walk to Bermuda Bräu – Rabensteig 6, 1010 Wien – for some drinks together with ECS from Seismology (SM) and Tectonics and Structural Geology (TS), so you can meet us there too!

GD Division meeting (Fri, 12:45-13:45, Room D2) – Elections, reports from the division president and ECS representative, and other planning of GD-related matters. Lunch provided!

Meet the division president of Geodynamics (Paul Tackley) and the ECS representative (Nico Schliffke) (Wed, 11:45-12:30, EGU Booth) – Come and discuss any GD-related issues, suggestions or remarks with the president and ECS rep.

Geodynamicists eating lunch at Kagran – it’s tradition by now.

EGU wide social activities:

Networking and ECS Zone (all week – red area) – This area is dedicated to early career scientists all week and provides space to chill out, get your well-deserved coffee or find out more about ECS-related announcements.

Opening reception (Sun, 18:30 – 21:00, Foyer F) – Don’t miss the opening (ice-breaker) reception, with many new faces and friends as well as free food and drinks! There will also be an ECS corner to meet fellow young scientists, especially if it’s your first EGU.

EGU Award Ceremony (Wed, 17:30 – 20:00, Room E1) – All EGU medallists will receive their award at this ceremony.

ECS Forum (Wed, 12:45 – 13:45, Room L2) – An open discussion on any ECS topic.

ECS Networking and Careers Reception (by invitation only) (Tue, 19:00-20:30, Room F2)

Conveners’ reception (by invitation only) (Fri 19:30 – 0:00, Foyer F) 

Credit: Kai Boggild (distributed via imaggeo.egu.eu)

Great debates

Science in policymaking: Who is responsible?  (Mon, 10:45 – 12:30, Room E1) – Actively take part in one of the most important and hottest topics of the moment!

How can Early Career Scientists prioritise their mental wellbeing? (Tue, 19:00 – 20:30, Room E1) – Many ECS find it challenging to prioritise their mental wellbeing. Discuss with many other young scientists how to tackle this really important issue, and maybe pick up some helpful tips on how to improve your own wellbeing!

Other useful skills to polish your career/CV:

Help! I’m presenting at a scientific conference (Mon, 14:00 –15:45, Room -2.62) – Your first conference talk might be daunting. Find out about best practices and tips on how to create a concise and clear conference talk.

How to share your research with citizens and why it’s so important (Mon, 14:00-15:45, Room -2.16) – Do you share your research with the public? Can you explain it in simple terms? An important topic for researchers these days!

How to make the most of your PhD or postdoc experience for getting your next job in academia (Tue, 16:15 – 18:00, Room -2.85) – It’s never too early to plan your next career step.

How to convene and chair a session at the General Assembly (Tue, 08:30-10:15, Room -2.85) – Find out what it takes to convene a session or short course at EGU. You may be surprised: you could do it next year if you liked!

How to peer-review? (Mon, 16:15 – 18:00, Room -2.85) – After the end of your PhD (or sometimes even earlier!) you may be asked to peer-review journal contributions, but hardly anyone knows the process beforehand.

How to find funding and write a research grant (Tue, 10:45-12:30, Room -2.16) – One of the major tasks when you finish your PhD. It might even be useful when writing applications for travel support etc.

Funding opportunities: ERC grants (Tue, 12:45-13:45, Room 0.14) – Find out more about these generous grants and how to successfully apply for them.

How to apply for the Marie Sklodowska-Curie grants (Wed, 12:45-13:45, Room 0.14)

Balancing work and personal life as a scientist (Wed, 16:15 – 18:00, Room -2.85) – Find out how not to lose sight of your hobbies and personal life in an increasingly competitive academic environment.

Other interesting events:

Academia is not the only route (Thu, 10:45-12:30, Room -2.16) – Are you finishing your degree and not overly excited by an academic future? Try this short course on exploring career alternatives both inside and outside academia.

Games for Geoscience (Wed, 16:15-18:00 (Talks) in Room L8 and 14:00-15:45 (Posters), Hall X4) – Games are more fun than work! Learn more on how to use games for communication, outreach and much more. 

Unconscious bias (Wed, 12:45-13:45, Room -2.32) – Become aware of the obstacles that some of your colleagues face every day, which might prevent them from doing their best science.

Promoting and supporting equality of opportunities in geosciences (Thu, 14:00-18:00, Room E1) – All of us should promote an open, equal-opportunity working environment, and this session promises some very interesting talks on common issues, solutions and initiatives.

What I’ve learned from teaching geosciences in prisons – (Thu, 14:00-15:45, Hall X4 – Poster) by GD ECS Phil Heron.

Rhyme Your Research (Tue, 14:00 – 15:45, Room -2.16) – Reveal the poet in you and explain your research in an interesting and unusual way!

This is just a small selection of possible activities during EGU19, and I’m sure I have missed many more. So keep your eyes and ears open for additional events, and spread the word if you know of anything of particular interest. Also make sure you follow the GD Blog and our social media (the EGU GD Facebook page and EGU Twitter) to stay updated with any more information during the week! The official hashtag is #EGU19. All the best for EGU, and I am looking forward to meeting many of you there!


Demystifying the Peer-Review Process


Adina Pusok

An important and inevitable aspect of being in academia is receiving a request to peer-review a paper. And much like for the papers we write and submit, structure and clarity are important in the review itself. This week Adina E. Pusok, Postdoctoral Researcher at Scripps Institution of Oceanography, UCSD, and our outgoing GD ECS representative, shares some detailed and helpful tips for writing a concise, efficient, and informative review. Do check out her very helpful Peer-Review Checklist PDF for download!

 

It is somewhat surprising that the peer-review process, a fundamental part of science (which prides itself on technical and objective methods), is usually left up to the individual reviewing scientist. Everyone agrees that there is little formal training in peer-reviewing, and many times it takes years until scientists become thorough and efficient reviewers (Zimmerman et al., 2011). Personally, I prefer to invest some of my time learning from other people’s experiences and best practices. For example, before my first review, I spent approximately two weeks researching material on how to deliver good reviews. But what are good reviews? And mostly, how does one write good reviews efficiently? In this blog post, I will attempt to synthesize some of the material I’ve come across, and share my personal guidelines that help me get through the peer-review process.

1. What is the peer-review process?

The peer-review process is in some ways like a legal trial. A judge (the editor) will take an informed and educated decision about the case (to publish/not to publish the manuscript in current form) based on the recommendations and arguments brought forward by lawyers (reviewers).

In reality, the journey of a manuscript (McPeek et al., 2009) through the peer-review process is as follows:

  1. The manuscript is submitted to a scientific journal.
  2. An editor reads the abstract/paper and decides whether it is suitable for potential publication at the journal.
  3. If approved, it is assigned to an associate editor who will handle the actual review process.
  4. The associate editor then compiles a list of potential reviewers, often partially based on recommendations from authors.
  5. Those reviewers are asked whether they would be willing to review the paper in a timely fashion.
  6. The reviewers then read the paper, consider the methods, data, analyses, and arguments, and write reviews containing their opinions about the paper, and whether the paper should be published in that journal.
  7. The associate editor reads the reviews (usually two or more), may write an additional review, and makes a recommendation to the editor.
  8. The editor then writes the authors a letter about the disposition of the paper.
  9. Depending on the outcome, the manuscript will have successfully exited this journey (accepted for publication), or will have to restart the process again (by resubmitting a modified version to the same or a new journal).

 

Basically, each manuscript submission depends on the work of volunteers (editors, reviewers) (McPeek et al., 2009). Indeed, peer-review, a cornerstone of advancing science, is primarily a volunteer exercise (also referred to as “community work”). But this process is such a valuable mechanism for improving the quality and accuracy of scientific papers that some people think the system would collapse without it, as there would be little control over what gets published.

What’s the purpose of it?

The goals of peer-review are clear: to ensure the accuracy and improve the quality of published literature through constructive criticism. Hames (2008) points out that every peer-review process should aim to:

  • Prevent the publication of bad work.
  • Verify that the research was conducted correctly, and there are no flaws in the design or methodology.
  • Ensure that the work is reported correctly and unambiguously, with acknowledgement to the existing literature.
  • Ensure that the results have been interpreted correctly and all possible interpretations were considered.
  • Ensure that the results are neither too preliminary nor too speculative.
  • Provide editors with evidence to make judgments as to whether articles meet the criteria for their particular publications.
  • Generally improve the quality and readability of a publication.
2. Who gets to do it?

The editor decides who should complete the review based on recommendations from the manuscript authors (nowadays, it’s a submission requirement for most journals), relevant literature cited in the manuscript, or their own professional networks. The reviewers are generally considered experts in a given field (extensive peer-review and/or publishing experience), qualified (typically awarded a Ph.D.) and able to perform reasonably impartial reviews. For example, because I am an early career scientist, I have been asked to perform reviews mostly on topics relating to my Ph.D. work (both the methods and the science).

3. What do you get out of it?

While the peer-review process seems more of an obligation for the greater good of science and society, it has its own perks (albeit not many), and one can benefit from them:

  • Whichever way you look at it, the peer-review process will improve your scientific work. On one hand, authors receive valuable feedback from experts in their community. On the other hand, reviewers get to reflect on what constitutes high-quality science and incorporate lessons learned from reviewing into their own work.
  • Following the above point, reviewing does make you a better writer. You learn many lessons from reading manuscripts ranging from excellent to bad, and in particular, you learn how (and how not) to write. It is very frustrating when you struggle with someone else’s unclear explanations or weak arguments. In my experience, that definitely makes you promise yourself to avoid those errors.
  • Reviewing trains your critical thinking, impartial judgment, and diplomatic skills. A review is not useful if it is not civil, or if it contains personal or destructive criticism. In general, you learn to argue your points clearly and to be diplomatic about the strengths and weaknesses of a paper.
  • You get to see the latest work before it is even published. Do make sure, though, that you respect the integrity of the review process and do not communicate any aspect of the paper to other people. In any case, this helps you stay on top of your field or expand/learn new science.
  • It does look good on your CV! Everyone agrees that community service (such as reviewing) is a positive for those aspiring for an academic career.
  • Most editors are senior scientists, and by entering the reviewers’ network, you become known. Better yet, you become known as an expert in a certain field. Fortunately (or unfortunately), since academia relies heavily on prestige and reputation, that will pay off in the longer term.
  • Some journals will provide credit for your review, by acknowledging all reviewers once a year, or by awarding exceptional community service at meetings. While this is great for some, many scientists feel it is insufficient for the amount of work involved in reviews. Therefore, in recent years, it has become possible to keep a general record of your reviews using ORCID and/or Publons (note that only the journal and the year of each review are made public). This allows scientists to have something of a review index (complementing the usual publication index, e.g., Google Scholar).
4. How to undertake a peer-review?

Before I even started reviewing, I was somewhat familiar with what reviews looked like – I had already received them for my own submitted research papers. One of the first things I noticed is that each of the reviews had a different style: annotated PDFs or text files with line numbers, reviews that were brief and not so useful or long and detailed, etc. Since scientists receive little peer-review training, they are likely to develop their own review style with time and experience. In principle, the style should not matter as long as the review is thorough, clear, and constructive.

There are many ways to complete a review, but just like with writing an article, if you have a plan and structure, the entire process becomes easier and even more enjoyable. Moreover, establishing some good practices will ensure a robust review done in a timely manner. Plus, if you are like me (writing is not my favourite activity), you want to find ways to get the job done as fast as possible and return to more exciting tasks. I find that checklists come in useful, whether for academic writing, presentations or reviewing.

Therefore, what I will attempt in this blog is to create a Peer-Review Checklist that anyone can download as a PDF to help them navigate the review process. It is a checklist I initially created for my own use, and I hope these tips are in turn relevant for first-time/early reviewers who are still in search of their style. The checklist may also be useful to experienced reviewers as a refresher. I would like to note that this is a suggested checklist and workflow, and depending on the journal or field, some elements may be different or missing. People should adapt the checklist to suit their needs, personal style, and the journal’s guidelines. I am also happy to receive suggestions (comments below/email) in order to improve the checklist over time.

Before I present the checklist in more detail, I want to highlight some resources that can help anyone improve their reviews:

  • Talk to colleagues, advisors, and friends about the reviewing process.
  • Pay attention to the reviews of your own papers. I actually modeled mine after one of the most constructive reviews I received.
  • Check journal guidelines. Many journals have extensive and good advice available online.
  • Published material (the most useful material that I found was Nicholas and Gordon, 2011; Stiller-Reeve et al., 2018a,b; and McPeek et al., 2009).
Peer-Review Workflow and Checklist

This checklist has been compiled from the advice of various articles and guides, and personal preferences. The aim is to give early reviewers a quick workflow of questions and tasks (that you can mark as completed) for any manuscript review. By following all the points, anyone can produce a constructive and thorough review in a timely manner.

Step 1: Pre-Read – Received an invitation to review

✓ Read abstract.

✓ Appropriate expertise.
Does my area of expertise and experience qualify me to critically evaluate the manuscript? Sometimes it will fit exactly with your expertise, whereas other times it will only just brush your field. In one instance, I accepted a review of a paper that implemented a technical novelty in a method that I was familiar with. I decided it was still largely within my expertise, but I took the opportunity to learn something new. I made sure I went over the background studies until I was comfortable asking questions and clarifying points. If you feel less confident, and your expertise allows you to comment meaningfully only on key sections of the paper, you can offer to review these areas and let the editor know you cannot comment on other aspects outside your expertise.

✓ Conflict of interest.
Can I provide a fair and unbiased review of this work? Am I able to evaluate the manuscript with an open mind, without being either negatively/positively predisposed? Check the journal’s guidelines for more specific guidance on avoiding conflicts of interest.

✓ Time and deadline.
Do I have time to write a complete review? Most journals suggest a timeline of a couple of weeks from the moment the invitation is accepted (usually 2-4 weeks). While this may seem like sufficient time to return the review, most scientists have a large workload and end up allowing only a few days for the review. Moreover, it can take more than 8 hours to provide a thoughtful, thorough, and well-referenced review (this can depend on the paper type, of course, so pay attention to that too). If you are unable to meet the deadline, contact the journal so that the editors can determine the appropriate course of action (some extensions can be granted at the discretion of the editors).

✓ Check journal guidelines and adjust your workflow.
It is better to do this early on in the review.

✓ Respond as soon as possible: Accept/Decline.
If you decline, explain the reason to the editor and offer, if possible, suggestions for other reviewers.

Step 2: First Read – Gaining an overview

✓ Set up the structure of review
Prepare a document (I prefer to have a simple text file at hand) containing the following structure of the review:

R0. Review details
R1. Introduction (3 paragraphs)
R2. Major issues (numbered items)
R3. Minor issues (indicate line, figure, table numbers)
R4. Other suggestions (regarding supplementary material, etc.)
Notes (not included in final review)

✓ Read the entire paper. Take notes as you go
The first reading is to get an overall impression of the paper: motivation, approach, overview of results, and conclusions. Take some notes as you go. I usually like to print a copy and make annotations as I go along. However, don’t worry too much about corrections, spelling, punctuation, or references. It’s supposed to be a ‘casual’ reading (kidding). It might be a challenge, but at this point do your best to understand the paper. Some papers read (and are written) better than others, and it would be a shame to miss an interesting study just because of language barriers. And it is perfectly normal (apparently!) to struggle reading a scientific article with “ultra-congested and aggressively bland” text. This article might help and amuse you: the 10 Stages of Reading a Scientific Paper.

✓ Go through all figures and tables
Do they complement the approach, results section, and conclusions?

✓ Readability
Sometimes it cannot be helped but to ask: is the English/writing so bad that you can’t understand the arguments? If the manuscript needs copyediting by a proficient English speaker before you can evaluate it on its scientific merits, it is legitimate to make such a suggestion to the editor at this stage. You can point out that you cannot give the paper a fair review in its current form, and suggest that the paper be withdrawn from review until the English is improved.

✓ Identify goals, method, findings, and relevance
The questions below might help:

  • What is the main question addressed by the research?
  • Is this question interesting and important to the field of study? How, specifically, will the paper contribute to the science?
  • Do the Abstract and Introduction clearly identify the need for this research, and its relevance?
  • Does the Method target the main question(s) appropriately?
  • Are the Results presented clearly and logically, and are they justified by the data provided?
  • Are the figures clear and fully described?
  • Do the Conclusions justifiably respond to the main questions posed by the author(s) in the Introduction?
  • Is the paper within the scope of the journal?
  • Is the paper potentially publishable based on its contribution to the field?

✓  Write introductory paragraphs (Section R1 – first 2 paragraphs) [“The study investigates/uses/finds/contributes”]
Answering the above questions will help you start the written review. In general, the most helpful review for everyone is one that first provides an overall summary of the main contributions of the paper and its appropriateness for the journal, and then suggests what major items should be addressed in revision. This summary can also help you reveal what the paper is really about, if you weren’t sure until now. Or you might end up writing back: “It was difficult to understand the precise point(s) the authors were trying to make.”

The first paragraph should state the main question addressed by the research, and summarize the goals, approaches, and conclusions of the paper. Try writing one sentence for each of these points. The second paragraph of the review should provide a conceptual overview of the contribution of the paper to the journal. Some people suggest trying to also include here the positive aspects in which the paper succeeds, since there is enough space for negative aspects for the remainder of the review. The authors will have a sense of what they have done well, and will not be too discouraged.

✓  Evaluate whether the manuscript is publishable/or not (Section R1 – 3rd paragraph)

[“I recommend the manuscript not/to be published in Journal X with minor/major modifications, and I provide below the reason for my decision and some comments that are necessary to address….”]

You have three decision options: the manuscript

  1. is publishable in principle -> Continue review to Step 3: Second Read.
  2. has major flaws, but they are addressable -> Return manuscript to authors for corrections, but document and substantiate the flaws, and indicate your willingness to provide a full review if the authors address them (continuing to Step 3: Second Read may still be helpful for replying to the editor/authors).
  3. is fatally flawed/unsuitable -> Reject, but document and substantiate why. You consider the manuscript flawed in a way that cannot be fixed and/or unsuitable for publication in the target journal (high-impact journals like Nature or Science reject most submissions solely based on the suitability of the study to the journal).

Some manuscripts can have flaws that cannot be overlooked or improved easily. Examples of such fatal flaws might include drawing a conclusion that is contravened by the author’s own statistical evidence, the use of a discredited method, or ignoring a process that is known to have a strong influence on the system under study. Whatever the decision, remember to carefully explain your reasoning and provide clear evidence for it (including citations from other scientific publications).

Assuming there is no fatal flaw, you can continue to a second reading. Personally, I like to let the paper sit for a couple of days after the First Read and let my mind digest the information. It may be surprising, but allowing some time for your brain to synthesize the major aspects (strengths and weaknesses) of the paper helps you focus primarily on those aspects in the next stage.

Step 3: Second Read – The science (major/minor points)

✓  Take detailed Notes (end of review file) indicating section, line, figure, and table numbers

Read the manuscript in detail from start to finish. Pay attention to assumptions, methods, underlying theoretical frameworks, and the conclusions drawn and how well they are supported. Refer to figures and tables when they are referenced in the text, making sure that the text and the graphics support rather than repeat each other. Use your careful study of the figures at the end of the first reading to avoid too much disruption to the flow of your assessment.

I have found it useful to dump into the review file all the comments I have (brainstorming as I re-read the manuscript), including specific comments, thoughts or issues I want to return to. Indicate the line or figure numbers for all comments. There is a reason why most journals ask authors to add line numbers to their submissions: to allow reviewers to be specific in their comments and suggestions. For example, “line 189 contradicts the statement in line 20”, “paragraph 45-52 is unclear and convoluted, should be rephrased”, “Figure 2a needs X,Y labels”, etc. Plus, most of the review will have the important details written by the time you are done with the second reading. Another tip: it helps to classify your comments as major or minor flaws. Major flaws will need considerable time to explain or correct.

Note: some journals allow reviews as annotated PDFs. I found these not that helpful, because as a manuscript author I had to transcribe many of those comments again in the reply-to-reviewers. Plus, a single read of the paper might not give enough insight into its strengths/weaknesses.

A sub-checklist:

  • Check every section individually (my preferred order): Introduction, Methods, Results, Discussion, Conclusions, Abstract, Other (e.g., Key points, Appendices). Make notes also on structure and flow of arguments.
  • Check method (i.e., equations, the experimental setup, data collection, details needed for reproducing results, and if that is not possible, is it stated why?).
  • Check all figures and tables, so that you understand all units, axes, and symbols. Do the figures reflect the main text?
  • Check that references/referencing are done correctly.
  • Check any supplementary material.
  • Remind yourself of the journal’s guidelines. Most importantly, does the manuscript comply with the journal’s data policy and best practices?

✓  Identify major and minor points (Sections R2 and R3)

Now it’s time to organize all those notes and comments. I usually sort them into two categories: major and minor issues (Sections R2, R3 of the review). In general, the minor issues (e.g., line 21 – missing reference to the cited study; line 32 – sentence not clear; line 56 – typos) do not need further work at this stage.

Major points, on the other hand, require some work. First, organize them clearly and logically, using separate numbered paragraphs or bullets so that each point clearly stands out. Use your numbered notes to provide evidence. It is the reviewer’s obligation to point out weaknesses in the underlying science: if the methods are suspect, if the authors over-interpret the data, if they overlook important implications of their work, or if more analyses are needed to support the conclusions, raise these as major points.

Is it possible to have too many major points? If so, ask yourself whether they are truly major (or whether you are overestimating their importance), or whether the manuscript really is deeply flawed; either answer may make you re-evaluate your review (major versus minor, or largely flawed). In general, I have not been given, or found, more than 10 major points in a manuscript, but exceptions happen. Very importantly, try to give the authors concrete, actionable ways to address the problems.

✓ Add Other Points (Section R4 – Optional)

If there is anything else to add to the review that fits neither the major nor the minor points, such as suggestions for future work, add it at the end of the review. If you have no further comments, it’s fine to leave this section empty.

Step 4: Final Read – The writing and formulation

Briefly read through the paper a third time, looking for organizational issues, and finalize the review.

✓ Check organization and flow of arguments

While you have probably already noted many such issues (if the manuscript is poorly written, the arguments will often not make sense either), it’s still a good idea to quickly go over the writing and presentation (section headings, details of language and grammar). Suggest ways to make the story more cohesive and easily followed.

Was the paper hard to read because the paragraphs did not flow together? Did the authors use excessive and confusing acronyms or jargon? In such cases, I actually include improving the structure of the manuscript as a major point. However, do not feel obligated to catch every typo, missing reference, and awkward phrase – your scientific assessment of the paper is more important.

✓ Read and polish your own review
Read the review carefully, preferably aloud, imagining you are the editor or one of the authors of the study. What is the tone of your arguments? How would you feel receiving this review as an author? Would you find it helpful, constructive, and fair? This exercise draws your attention to how your criticisms might sound to the authors. Keep the tone civil and include both positive and negative comments.

✓ Upload your review using the link provided
I usually copy and paste my review (Sections R1-R4) into the boxes provided by the journal, or upload the polished review file.

✓ Answer specific questions regarding the manuscript and its presentation
You will probably also be asked specific questions, or to rate the manuscript on various attributes (with answers in drop-down selections).

✓ Remarks to the editors
Any issues that the editors should be aware of can be indicated separately in Remarks to the editors, which remain confidential.

✓ Submit review to editor
You are done! You will probably hear back from the editor about their decision to accept or reject the manuscript. It is important to understand that the editors make the final decision; your role in the process is purely advisory. You may, however, be asked to review another version of the manuscript, to assess whether it has been modified sufficiently in response to the reviewers’ comments.

5. Etiquette of reviewing

I hope by now you have a clear idea of what constitutes the peer-review process and how to perform a review. Again, there are many ways to undertake a review, but perhaps you will find the checklist useful. Reviewing certainly requires considerable time and effort, but the checklist allows me to be confident that I gave my best consideration to the work submitted. Ticking off tasks also lets me review efficiently, without worrying that I have forgotten something.

However, I am aware that we tend to be over-critical of other people’s work (and the workflow proposed here is quite lengthy). I have heard more than once the comment that “young scientists provide very lengthy and harsh reviews”. There is a grain of truth in that: out of a desire to be thorough, we may ask for extra work in revisions that goes beyond the scope of the manuscript or the available resources (time, material, etc.). We need to be aware of this, while still inviting authors to discuss potential avenues for the work.

Next, I will discuss the etiquette of reviewing: what to do and not to do, fairness of reviews, providing and receiving criticism, and the Golden Rule of reviewing. As you will see below, these topics are closely interconnected.

5.1 What to do/not to do when peer-reviewing

Top 3 To do:

– the review does not have to be long, but make sure it is thorough and fair, as you would want others’ reviews of your work to be.

– be critical, well-argued, and straightforward: explain the problem, why it is a problem, and suggest a solution.

– finish before the agreed-upon deadline.

Top 3 Don’t do:

– be sarcastic, dismissive, or otherwise hostile. The review should be constructive, not offensive.

– be biased or let personal prejudices influence your assessment of the manuscript (e.g., poor English, excessive self-citations). In such cases, it is better to decline to review and explain the potential conflict of interest.

– write an overly short review (even if it is a great study). The authors might be happy to hear that, but the editor will not find it useful.

5.2 Working towards a transparent and fair peer-review process 

Scientific peer review is regularly criticized as ineffective, broken, or unfair. However, journals are generally committed to taking active steps to ensure the fairness of the process. For example, most journals have clear ethical guidelines (e.g., AGU, EGU), and all participants in the review process are expected to uphold them.

While most of the review interaction happens privately between the authors, editors, and reviewers, some journals (e.g., EGU journals, Nature Communications) have taken a step towards making the peer-review process more transparent, by giving manuscript authors the option to publish the peer-review history of their paper. This is great for making the process more open and fair! However, making things public can, in some cases, create unethical practices. For example, to ensure the impartiality and confidentiality of the peer-review process, you should not discuss your review of the paper with anyone before or after publication. Also, revealing yourself as a reviewer to the authors after the review might create the wrong impression, as if you were asking for favourable treatment in the future.

This last aspect raises the questions: “Should we reveal ourselves as reviewers or not?” and “Anonymous or signed review?” (see this perspective and another one). Judy Totman Parrish, a senior editor and the author of Geoverbiage, says that whether you sign your reviews is a personal decision, and that she has always signed hers to ensure transparency and the free exchange of ideas. She also says that she has never experienced any backlash for any of the reviews she wrote.

I write my reviews anonymously, with the same goal in mind: to work towards a fair peer-review process. There are many ways biases can form during the peer-review process (e.g., based on gender, age, or nationality, but also experience, affiliation, or even prestige and prominent names; see this article). My personal take on these issues is that reviews should be double-anonymous (authors do not know who the reviewers are, and reviewers do not know who the authors are), and that the review history should be made public. I think this would reduce biases (probably not completely, as some authors or research groups can still be identified by the work submitted), while a transparent review history would help ensure the fairness and civility of the review process. Also, with the rise of review statistics (ORCID and/or Publons), one can still be acknowledged for the work performed without having to sign one’s name. This might not be possible in some fields (e.g., medicine or other fields where ethical guidelines are stricter), but in geophysics and geodynamics it shouldn’t be a problem.

5.3 Providing and taking criticism

This section might seem out of place in this blog post, but imagine for a moment that you are an author, and you have just put a lot of work into writing your best paper so far. Your co-authors have read and re-read the manuscript, generating multiple improved versions with their comments. You finally submit it for review and, after what seems like a long time, you get the reviews back. Would you take the messages as intended, or be hurt by them? With time you learn not to take things personally, but it is hard not to feel the tiniest bit affected by major criticism of a study you have worked so hard on.

The peer-review process puts you at the other end of writing papers. Therefore, I think scientists need to try their best to provide and receive constructive criticism, and to identify and avoid destructive criticism (which is usually directed at a person). What helps is to ask yourself: “Is it a fair point? Could I use it to make a better version of the manuscript?”. If the answer is yes, then take the comment and use it to improve your work.

5.4 The Golden Rule

I would like to finish with the Golden Rule of reviewing. In their interesting read, McPeek et al. (2009) suggest that reviewers perform reviews with this in mind: “Review for others as you would have others review for you”.

I think as a more general rule, we can use some ancient wisdom: “Don’t do to others what you wouldn’t want done to yourself!”. It goes for reviewing and many aspects of life.

References:

Hames, I., (2008), Peer review and manuscript management in scientific journals: guidelines for good practice. John Wiley & Sons, https://onlinelibrary.wiley.com/doi/book/10.1002/9780470750803

McPeek, M.A., DeAngelis, D.L., Shaw, R.G., Moore, A.J., Rausher, M.D., Strong, D.R., Ellison, A.M., Barrett, L., Rieseberg, L., Breed, M.D., Sullivan, J., Osenberg, C.W., Holyoak, M., and Elgar, M.A., (2009), The Golden Rule of Reviewing, The American Naturalist, Vol. 173, No. 5, 155-158, https://www.journals.uchicago.edu/doi/10.1086/598847

Nicholas, K.A., and Gordon, W. (2011), A quick guide to writing a solid peer review, EOS, Vol. 92, No. 28, https://sites.agu.org/publications/files/2013/01/PeerReview_Guide.pdf

Stiller-Reeve et al. (2018), A peer review process guide, https://www.scisnack.com/wp-content/uploads/2018/10/A-Peer-Review-Process-Guide.pdf

Stiller-Reeve et al. (2018), How to write a thorough peer review, Nature, doi: 10.1038/d41586-018-06991-0, https://www.nature.com/articles/d41586-018-06991-0

Zimmerman, N., R. Salguero-Gomez, and J. Ramos (2011), The next generation of peer reviewing, Front. Ecol. Environ., 9(4), 199, doi:10.1890/1540-9295-9.4.199, https://esajournals.onlinelibrary.wiley.com/doi/pdf/10.1890/1540-9295-9.4.199

Tomography and plate tectonics


The Geodynamics 101 series serves to showcase the diversity of research topics and methods in the geodynamics community in an understandable manner. We welcome all researchers – PhD students to Professors – to introduce their area of expertise in a lighthearted, entertaining manner and touch upon some of the outstanding questions and problems related to their fields. For our first ‘Geodynamics 101’ post for 2019, Assistant Prof. Jonny Wu from the University of Houston explains how to delve into the subduction record via seismic tomography and presents some fascinating 3D workflow images with which to test an identified oceanic slab. 

Jonny Wu, U. Houston

Tomography… wait, isn’t that what happens in your CAT scan? Although the general public might associate tomography with medical imaging, Earth scientists are well aware that ‘seismic tomography’ has enabled us to peer deeper, and with more clarity, into the Earth’s interior (Fig. 1). What are some of the ways we can download and display tomography to inform our scientific discoveries? Why has seismic tomography been a valuable tool for plate reconstructions? And what are some new approaches for incorporating seismic tomography within plate tectonic models?

Figure 1: Tomographic transect across the East Asian mantle under the Eurasian-South China Sea margin, the Philippine Sea and the western Pacific from Wu and Suppe (2018). The displayed tomography is the MITP08 global P-wave model (Li et al., 2008).

Downloading and displaying seismic tomography

Seismic tomography is a technique for imaging the Earth’s interior in 3-D using seismic waves. For complete beginners, IRIS (Incorporated Research Institutions for Seismology) has an excellent introduction that compares seismic tomography to medical CT scans.

A dizzying number of new, high-quality seismic tomographic models are published every year. For example, the IRIS EMC-EarthModels catalogue currently contains 64 diverse tomographic models that cover most of the Earth, from global to regional scales. By my count, at least seven of these models have been added in the past half year – about one new model a month. Aside from the IRIS catalogue, a plethora of other tomographic models are publicly available from journal data repositories, personal webpages, or by an e-mail request to the author.

Downloading a tomographic model is just the first step. If you do not have access to custom workflows and scripts for displaying tomography, consider visiting an online tomography viewer; I have listed a few of these websites at the end of this blog post. A personal favourite of mine is the Hades Underworld Explorer, built by Douwe van Hinsbergen and colleagues at Utrecht University, which uses a familiar Google Maps user interface. By simply dragging a left and right pin on the map, a user can display a global tomographic section in real time. The section can be shown in either a polar or Cartesian view and exported to an .svg file. Another tool I have found useful is tomographic ‘vote maps’, which indicate the robustness of lower-mantle slab imaging by comparing multiple tomographic models (Shephard et al., 2017). Vote maps can be downloaded from the original paper above or generated on the SubMachine website (Hosseini et al. (2018); see the website list below).
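If you prefer to script your own figures, many of the EMC models are distributed as gridded files (e.g. netCDF) that can be sliced with a few lines of Python. The sketch below is a hypothetical, self-contained example: the grid here is synthetic, and a real workflow would instead read the coordinate and velocity-anomaly arrays from the downloaded model file (e.g. with netCDF4 or xarray), whose variable names differ from model to model.

```python
import numpy as np

# Synthetic stand-in for a tomographic model grid; a real script would
# load these arrays from a downloaded EMC netCDF file instead.
depths = np.array([100.0, 300.0, 600.0, 1000.0, 1500.0])  # km
lats = np.linspace(-10.0, 40.0, 26)                       # degrees N
lons = np.linspace(100.0, 160.0, 31)                      # degrees E
rng = np.random.default_rng(0)
dvp = rng.normal(0.0, 0.5, size=(depths.size, lats.size, lons.size))  # dVp (%)

def depth_slice(model, depth_axis, target_depth):
    """Return (nearest depth, horizontal slice) for a target depth in km."""
    k = int(np.argmin(np.abs(depth_axis - target_depth)))
    return depth_axis[k], model[k]

slice_depth, slab_slice = depth_slice(dvp, depths, 700.0)
print(f"Nearest depth: {slice_depth} km, slice shape: {slab_slice.shape}")
```

Swapping the synthetic arrays for the ones read from a real model file, and passing the slice to e.g. matplotlib, gives a quick-and-dirty depth-slice viewer for any downloaded model.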

Using tomography for plate tectonic reconstructions

Tomography has played an increasing role in plate tectonic studies over the past decades. A major reason is that classical plate tectonic inputs (e.g. seafloor magnetic anomalies, palaeomagnetism, magmatism, geology) are independent of the seismological inputs to tomographic images. This means that tomography can be used to augment or test classic plate reconstructions in a relatively independent fashion. For example, classical plate tectonic models can be tested by searching tomography for slab-like anomalies below or near predicted subduction zone locations. These ‘conventional’ plate modelling workflows face challenges at convergent margins, however, where the geological record has been significantly destroyed by subduction. In these cases, the plate modeller is forced to describe details of past plate kinematics from an overly sparse geological record.

Figure 2: Tomographic plate modeling workflow proposed by Wu et al. (2016). The final plate model in c) is fully-kinematic and makes testable geological predictions for magmatic histories, terrane paleolatitudes and other geology (e.g. collisions) that can be compared against the remnant geology in d), which are relatively independent.

A ‘tomographic plate modelling’ workflow (Fig. 2) was proposed by Wu et al. (2016) that essentially reverses the conventional plate modelling workflow. In this method, slabs are mapped from tomography and unfolded (i.e. retro-deformed) (Fig. 2a). The unfolded slabs are then populated into a seafloor spreading-based global plate model. Plate motions are assigned in a hierarchical fashion depending on the available kinematic constraints (Fig. 2b). The plate modelling results in either a single unique plate reconstruction or several families of possible plate models (Fig. 2c). The final plate models (Fig. 2c) are fully kinematic and make testable geological predictions for magmatic histories, palaeolatitudes and other geological events (e.g. collisions). These predictions can then be systematically compared against the remnant geology (Fig. 2d), which is independent of the tomographic inputs (Fig. 2a).

The 3D slab mapping workflow proposed by Wu et al. (2016) assumed that the most robust feature of tomographic slabs is likely the slab centre. The workflow involves manually picking a mid-slab ‘curve’ along hundreds (and sometimes thousands!) of variably oriented 2D cross-sections using the GOCAD software (Figs. 3a, b). A 3-D triangulated mid-slab surface is then constructed from the mid-slab curves (Fig. 3c). Inspired by 3D seismic interpretation techniques from petroleum geoscience, the tomographic velocities can be extracted along the mid-slab surface for further tectonic analysis (Fig. 3d).


Figure 3: Slab unfolding workflow proposed by Wu et al. (2016) shown for the subducted Ryukyu slab along the northern Philippine Sea plate. The displayed tomography in a), d) and e) is from the MITP08 global P-wave model (Li et al., 2008).
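The velocity-extraction step of this workflow — sampling the tomographic model along the mid-slab surface — can be sketched with standard interpolation tools. This is only a schematic stand-in for the GOCAD-based procedure: the velocity grid and the surface vertices below are synthetic, whereas the real vertices would come from the picked mid-slab curves.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Synthetic tomographic grid with a fake fast anomaly centred near 800 km depth.
depths = np.linspace(0.0, 1500.0, 16)   # km
lats = np.linspace(0.0, 30.0, 31)       # degrees N
lons = np.linspace(120.0, 150.0, 31)    # degrees E
D, _, _ = np.meshgrid(depths, lats, lons, indexing="ij")
dvp = 0.5 * np.exp(-((D - 800.0) / 300.0) ** 2)  # dVp (%), depth-dependent only

# Trilinear sampler over the (depth, lat, lon) grid.
sampler = RegularGridInterpolator((depths, lats, lons), dvp)

# Hypothetical mid-slab surface vertices as (depth, lat, lon) triples.
surface_pts = np.array([
    [400.0, 10.0, 130.0],
    [800.0, 12.0, 132.0],
    [1200.0, 14.0, 134.0],
])
dvp_on_surface = sampler(surface_pts)  # one dVp value per surface vertex
print(dvp_on_surface)
```

In this toy setup the vertex at 800 km depth sits inside the fast anomaly and so returns the largest dVp value; the same sampling idea underlies draping velocities onto a triangulated surface for display.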

For relatively undeformed upper-mantle slabs, a pre-subduction slab size and shape can be estimated by unfolding the mid-slab surface to a spherical Earth model, minimizing distortions and changes to surface area (Fig. 3e). Interestingly, the slab unfolding algorithm can also be applied to shoe design, where there is a need to flatten shoe materials to build cut patterns (Bennis et al., 1991). The three-dimensional slab mapping within GOCAD allows a self-consistent 3-D Earth model of the mapped slabs to be developed and maintained. This has advantages for East Asia (Wu et al., 2016), where many slabs have apparently subducted in close proximity to each other (Fig. 1).

Web resources for displaying tomography

Hades Underworld Explorer : http://www.atlas-of-the-underworld.org/hades-underworld-explorer/

Seismic Tomography Globe : http://dagik.org/misc/gst/user-guide/index.html

SubMachine : https://www.earth.ox.ac.uk/~smachine/cgi/index.php

 

References

Bennis, C., Vezien, J.-M., Iglesias, G., 1991. Piecewise surface flattening for non-distorted texture mapping. Proceedings of the 18th annual conference on Computer graphics and interactive techniques 25, 237-246.

Hosseini, K. , Matthews, K. J., Sigloch, K. , Shephard, G. E., Domeier, M. and Tsekhmistrenko, M., 2018. SubMachine: Web-Based tools for exploring seismic tomography and other models of Earth's deep interior. Geochemistry, Geophysics, Geosystems, 19. 

Li, C., van der Hilst, R.D., Engdahl, E.R., Burdick, S., 2008. A new global model for P wave speed variations in Earth's mantle. Geochemistry, Geophysics, Geosystems 9, Q05018.

Shephard, G.E., Matthews, K.J., Hosseini, K., Domeier, M., 2017. On the consistency of seismically imaged lower mantle slabs. Scientific Reports 7, 10976.

Wu, J., Suppe, J., 2018. Proto-South China Sea Plate Tectonics Using Subducted Slab Constraints from Tomography. Journal of Earth Science 29, 1304-1318.

Wu, J., Suppe, J., Lu, R., Kanda, R., 2016. Philippine Sea and East Asian plate tectonics since 52 Ma constrained by new subducted slab reconstruction methods. Journal of Geophysical Research: Solid Earth 121, 4670-4741

Reproducible Computational Science


 

Krister with his bat-signal shirt for reproducibility.

We’ve all been there – you’re reading through a great new paper, keen to get to the Data Availability only to find nothing listed, or the uninspiring “data provided on request”. This week Krister Karlsen, PhD student from the Centre for Earth Evolution and Dynamics (CEED), University of Oslo shares some context and tips for increasing the reproducibility of your research from a computational science perspective. Spread the good word and reach for the “Gold Standard”!

Historically, computational methods and modelling have been considered the third avenue of the sciences, paralleling the experimental and theoretical approaches, but they are now among the most important. Thanks to the rapid development of electronics and theoretical advances in numerical methods, mathematical models combined with strong computing power provide an excellent tool for studying what we cannot directly observe or sample (Fig. 1). In addition to enabling simulations of complex physical phenomena on computer clusters, these advances have drastically improved our ability to gather and examine high-dimensional data. For these reasons, computational science is in fact the leading tool in many branches of physics, chemistry, biology, and geodynamics.

Figure 1: Time–depth diagram presenting availability of geodynamic data. Modified from (Gerya, 2014).

A side effect of these improved methods for simulation and data gathering is the availability of a vast variety of software packages and huge data sets. This poses a challenge in terms of the documentation needed for a study to be reproduced. With great computing power comes great responsibility.

“Non-reproducible single occurrences are of no significance to science.” – Popper (1959)

Reproducibility is the cornerstone of cumulative science: the ultimate standard by which scientific claims are judged. Through replication, independent researchers address a scientific hypothesis and build up evidence for, or against, it. This methodology represents the self-correcting path that science should take to ensure robust discoveries, separating science from pseudoscience. Reports indicate increasing pressure to publish manuscripts while applying for competitive grants and positions (Baker, 2016). Furthermore, a growing burden of bureaucracy takes away precious time for designing experiments and doing research. As the time available for actual research decreases, the number of articles that mention a “reproducibility crisis” is rising towards a present-day peak (Fig. 2). Does this mean we have become sloppy about proper documentation?

Figure 2: Number of titles, abstracts, or keywords that contain one of the following phrases: “reproducibility crisis,” “scientific crisis,” “science in crisis,” “crisis in science,” “replication crisis,” “replicability crisis”, found in the Web of Science records. Modified from (Fanelli, 2018).

Are we facing a reproducibility crisis?

A survey conducted by Nature asked 1,576 researchers this exact question: 52% responded “Yes, a significant crisis,” and 38% “Yes, a slight crisis” (Baker, 2016). Perhaps more alarming, 70% reported that they had unsuccessfully tried to reproduce another scientist’s findings, and more than half had failed to reproduce their own results. To what degree these statistics apply to our own field of geodynamics is not clear, but they are nonetheless a timely reminder that reproducibility must remain at the forefront of our dissemination. Multiple journals have implemented policies on data and software sharing upon publication to ensure that computational science can be replicated and reproduced. But how well are these policies working? A recent empirical analysis of journal policy effectiveness for computational reproducibility sheds light on this issue (Stodden et al., 2018). The study randomly selected 204 papers published in Science after the implementation of its code and data sharing policy. Of these articles, 24 contained sufficient information; for the remaining 180 publications, the authors had to be contacted directly. Only 131 authors replied to the request; of these, 36% provided some of the requested material and 7% simply refused to share code and data. Apparently the implementation of policies was not enough, and there is still a lot of confusion among researchers about the obligations related to data and code sharing; some of the anonymized responses highlighted by Stodden et al. (2018) underline this confusion.

Putting aside for the moment that you are, in many cases, obliged to share your code and data to enhance reproducibility, are there any additional motivations for making your computational research reproducible? Freire et al. (2012) list a few simple benefits of reproducible research:

1. Reproducible research is well cited. A study (Vandewalle et al., 2009) found that published articles that reported reproducible results have higher impact and visibility.

2. Code and software comparisons. Well documented computational research allows software developed for similar purposes to be compared in terms of performance (e.g. efficiency and accuracy). This can potentially reveal interesting and publishable differences between seemingly identical programs.

3. Efficient communication of science between researchers. Newcomers to a field of research can more efficiently understand how to modify and extend an existing program, allowing them to build more easily upon recently published discoveries (this is simply the positive counterpart to the argument made against software sharing earlier).

“Replicability is not reproducibility: nor is it good science.” – Drummond (2009)

I have discussed reproducibility over quite a few paragraphs already, without yet giving it a proper definition. What precisely is reproducibility? Drummond (2009) proposes a distinction between reproducibility and replicability. He argues that reproducibility requires, at the minimum, minor changes in the experiment or model setup, while replication uses an identical setup. In other words, reproducibility refers to a phenomenon that can be predicted to recur under slightly different experimental conditions, while replicability describes the ability to obtain an identical result when an experiment is performed under precisely the same conditions. I think this distinction makes the utmost sense in computational science, because if all software, data, post-processing scripts, random number seeds and so on are shared and reported properly, the results should indeed be identical. However, replicability does not ensure the validity of the scientific discovery. A robust discovery made using computational methods should be reproducible with different software (made for similar purposes, of course) and with small perturbations to the input data, such as initial conditions, physical parameters, etc. This is critical because we rarely, if ever, know the model inputs with zero error bars. One way for authors to address such issues is to include a sensitivity analysis of different parameters, initial conditions, and boundary conditions in the publication or the supplementary material.
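As a toy illustration of such a sensitivity test (my own example, not taken from any of the cited papers), one can perturb a single input parameter and record how strongly the output responds; here a simple analytical half-space cooling heat-flow formula stands in for a full geodynamic simulation.

```python
import numpy as np

def heat_flow(k, kappa, T0, t):
    """Surface heat flow of a half-space cooling model, q = k*T0/sqrt(pi*kappa*t)."""
    return k * T0 / np.sqrt(np.pi * kappa * t)

# Baseline: 50 Myr old lithosphere with illustrative parameter values.
t = 50e6 * 3.15e7  # 50 Myr in seconds
baseline = heat_flow(k=3.0, kappa=1e-6, T0=1300.0, t=t)

# Perturb thermal conductivity k by +/-10% and report the relative response.
for pert in (0.9, 1.1):
    q = heat_flow(k=3.0 * pert, kappa=1e-6, T0=1300.0, t=t)
    print(f"k x {pert}: heat flow changes by {100 * (q / baseline - 1):+.1f}%")
```

In this trivial case the output scales linearly with k, so a 10% perturbation gives a 10% response; in a real simulation the same loop over perturbed inputs would reveal which parameters the conclusions actually hinge on.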

Figure 3: Illustration of the “spectrum of reproducibility”, ranging from not reproducible to the gold standard that includes code, data and executable files that can directly replicate the reported results. Modified from (Peng, 2011).

However, the gold standard of reproducibility in computation-involved science, like geodynamics, is often described as what Drummond would classify as replication (Fig. 3): making all data and code available for others to easily execute. Even though this ensures only replicability, not reproducibility, it gives other researchers a level of detail about the workflow and analysis beyond what can usually be achieved in common language, and this deeper understanding can be crucial when trying to reproduce (rather than replicate) the original results. Thus replication is a natural step towards reproduction. Open-source community codes for geodynamics, e.g. ASPECT (Heister et al., 2017), and more general FEM libraries such as FEniCS (Logg et al., 2012), allow for friction-free replication of results. An input file describing the model setup provides a 1-to-1 relation to the actual results1 (which in many cases is reasonable because the data are too large to be easily shared). Thus, sharing the post-processing scripts together with the input file on e.g. GitHub allows for complete replication of the results at low cost in terms of data storage.

Light at the end of the tunnel?

In order to improve reproducibility practices, contributions will need to come from multiple directions. The community needs to develop, encourage, and maintain a culture of reproducibility; journals and funding agencies can play an important role here. The American Geophysical Union (AGU) has shared a list of best practices regarding the research data2 associated with a publication:

• Deposit the data in support of your publication in a leading domain repository that handles such data.

• If a domain repository is not available for some or all of your data, deposit your data in a general repository such as Zenodo, Dryad, or Figshare (all of which can assign a DOI to deposited data), or use your institution’s archive.

• Data should not be listed as “available from authors.”

• Make sure that the data are publicly available at the time of publication and available to reviewers at submission. If you are unable to upload to a public repository before submission, you may provide access through an embargoed version in a repository or in datasets or tables uploaded with your submission (Zenodo, Dryad, Figshare, and some domain repositories provide embargoed access). Questions about this should be sent to journal staff.

• Cite data or code sets used in your study as part of the reference list. Citations should follow the Joint Declaration of Data Citation Principles.

• Develop and deposit software on GitHub, where it can be cited, or include simple scripts in a supplement. Code on GitHub can be archived separately and assigned a DOI through Zenodo for submission.
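To make a deposited code citable in the way the last bullet describes, GitHub also recognizes a CITATION.cff file placed in the repository root; a minimal sketch might look like the following, where every value (title, DOI, author, dates) is a placeholder and the DOI would be the one Zenodo mints for the archived release:

```yaml
# CITATION.cff -- all values below are placeholders, not a real record
cff-version: 1.2.0
message: "If you use this software, please cite it as below."
title: "my-geodynamics-code"
version: "1.0.0"
doi: "10.5281/zenodo.0000000"   # DOI minted by Zenodo for the archived release
date-released: "2019-01-01"
authors:
  - family-names: "Doe"
    given-names: "Jane"
```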

In addition to best-practice guidelines, there are wonderful initiatives from other communities, including research prizes. The European College of Neuropsychopharmacology offers an award (11,800 USD) for negative results, more specifically for careful experiments that do not confirm an accepted hypothesis or previous result. Another example is the International Organization for Human Brain Mapping, which awards 2,000 USD for the best replication study, successful or not. Whilst not a prize per se, at recent EGU General Assemblies in Vienna the GD community has held sessions around the theme of failed models. Hopefully, such initiatives will lead by example so that others in the community follow.

1 To reach the exact same results, information about the software version, compilers, operating system, etc. would typically also be needed.

2 AGU’s definition of data here includes all code, software, data, methods, and protocols used to produce the results.

References

AGU, Best Practices. https://publications.agu.org/author-resource-center/publication-policies/datapolicy/data-policy-faq/ Accessed: 2018-08-31.

Baker, Monya. Reproducibility crisis? Nature, 533:26, 2016.

Drummond, Chris. Replicability is not reproducibility: nor is it good science. 2009.

Fanelli, Daniele. Opinion: Is science really facing a reproducibility crisis, and do we need it to? Proceedings of the National Academy of Sciences, 115(11):2628–2631, 2018.

Freire, Juliana; Bonnet, Philippe, and Shasha, Dennis. Computational reproducibility: state-of-the-art, challenges, and database research opportunities. In Proceedings of the 2012 ACM SIGMOD international conference on management of data, pages 593–596. ACM, 2012.

Gerya, Taras. Precambrian geodynamics: concepts and models. Gondwana Research, 25(2):442–463, 2014.

Heister, Timo; Dannberg, Juliane; Gassmöller, Rene, and Bangerth, Wolfgang. High accuracy mantle convection simulation through modern numerical methods. II: Realistic models and problems. Geophysical Journal International, 210(2):833–851, 2017. doi: 10.1093/gji/ggx195. URL https://doi.org/10.1093/gji/ggx195.

Logg, Anders; Mardal, Kent-Andre; Wells, Garth N., and others, . Automated Solution of Differential Equations by the Finite Element Method. Springer, 2012. ISBN 978-3-642-23098-1. doi: 10.1007/978-3-642-23099-8.

Peng, Roger D. Reproducible research in computational science. Science, 334(6060):1226–1227, 2011.

Popper, Karl Raimund. The Logic of Scientific Discovery. University Press, 1959.

Stodden, Victoria; Seiler, Jennifer, and Ma, Zhaokun. An empirical analysis of journal policy effectiveness for computational reproducibility. Proceedings of the National Academy of Sciences, 115(11):2584–2589, 2018.

Vandewalle, Patrick; Kovacevic, Jelena, and Vetterli, Martin. Reproducible research in signal processing. IEEE Signal Processing Magazine, 26(3), 2009.