How to increase reproducibility and transparency in your research

Contemporary science faces many challenges in publishing results that are reproducible. This is due to the increased use of data and digital technologies as well as heightened demands for scholarly communication. These challenges have led to widespread calls from the science community for more research transparency, accessibility, and reproducibility. This article presents current findings and solutions to these problems, including recently released software that makes writing submission-ready manuscripts for journals of Copernicus Publications a lot easier.

While it can be debated if science really faces a reproducibility crisis, the challenges of computer-based research have sparked numerous articles on new good research practices and their evaluation. The challenges have also driven researchers to develop infrastructure and tools to help scientists effectively write articles, publish data, share code for computations, and communicate their findings in a reproducible way, for example Jupyter, ReproZip and research compendia.

Recent studies showed that the geosciences and geographic information science are not beyond issues with reproducibility, just like other domains. Therefore, more and more journals have adopted policies on sharing data and code. However, it is equally important to foster an open research culture and to teach researchers how to adopt more transparent and reproducible workflows, for example through skill-building workshops at conferences offered by fellow researchers (such as the EGU short courses), community-led non-profit organisations such as the Carpentries, open courses for students, small discussion groups at research labs, or individual self-learning. In light of the ongoing debate about a common definition of reproducibility, Philip Stark, a statistics professor and associate dean of mathematical and physical sciences at the University of California, Berkeley, recently coined the term preproducibility: “An experiment or analysis is preproducible if it has been described in adequate detail for others to undertake it.” The neologism is intended to reduce confusion and to embrace a positive attitude towards more openness, honesty, and helpfulness in scholarly communication.

In the spirit of these activities, this article describes a modern workflow made possible by recent software releases. The new features allow the EGU community to write preproducible manuscripts for submission to the large variety of academic journals published by Copernicus Publications. The new workflow might require hard-earned adjustments for some researchers, but it pays off through increased transparency and effectiveness, especially for early career scientists. An open and reproducible workflow enables researchers to build on their own and others’ previous work and to collaborate better on solving the societal challenges of today.

Reproducible research manuscripts

Open digital notebooks, which interweave data and code and can be exported to different output formats such as PDF, are a powerful means to improve the transparency and preproducibility of research. Jupyter Notebook, Stencila and R Markdown let researchers combine the long-form text of a publication and the source code for analysis and visualisation in a single document. Having text and code side by side makes them easier to grasp and ensures consistency, because each rendering of the document executes the whole workflow using the original data. Caching of long-running computations is possible, and researchers working with supercomputing infrastructures or huge datasets may limit the executed code to visualisation purposes, using processed data as input. Authors can transparently expose specific code snippets to readers but also publish the complete source code of the document openly for collaboration and review.
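To make this concrete, here is a minimal R Markdown sketch of such a notebook with a cached chunk standing in for a longer computation (the file content, chunk name, and simulated data are hypothetical):

---
title: "A minimal preproducible notebook"
output: pdf_document
---

Text and code live side by side. The chunk below is cached, so the
potentially long-running computation only re-runs when its code changes.

```{r simulation, cache=TRUE}
# a hypothetical stand-in for an expensive analysis step:
result <- mean(rnorm(n = 1e6))
```

The simulated mean is `r round(result, 3)`, inserted inline at render time.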

The popular notebook formats are plain-text based, such as Markdown in the case of R Markdown. An R Markdown document can therefore be managed with version control software, i.e. programs for managing multiple versions of, and contributions to, the same documents, even by different people. Version control provides traceability of authorship, a time machine for going back to any previous “working” version, and online collaboration, for example on GitLab. This kind of workflow also stops the madness of encoding versions in file names, yet still lets authors use awesome file names and apply domain-specific guidelines for packaging research.
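If you prefer to start version control from within R, a minimal sketch could rely on the usethis helper package (an assumption on my part; it is not required by the templates discussed below):

# turn the manuscript directory into a git repository and create a
# first commit of the current files (assumes the usethis package):
usethis::use_git(message = "Initial commit of manuscript sources")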

R Markdown supports different programming languages besides its popular namesake R and is a sensible solution even if you do not analyse data with scripts or have any code in your scholarly manuscript at all. It is easy to write, allows you to manage your bibliography effectively, and can be used for websites, books or blogs; most importantly, it does not fall short when it is time to submit a manuscript to a journal.
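As a small sketch of these features, an R Markdown file might declare a bibliography and mix languages like this (the bibliography file name and citation key are hypothetical; Python chunks additionally rely on the reticulate package):

---
title: "Mixing languages and citing literature"
output: pdf_document
bibliography: references.bib   # hypothetical BibTeX file
---

A pandoc-style citation such as [@yates2017] (a hypothetical key) is
resolved against the bibliography file and formatted automatically at
render time.

```{python}
# chunks may use languages other than R, for example Python:
print("Hello from a Python chunk")
```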

The rticles extension package for R provides a number of templates for popular journals and publishers. Since version 0.6 (published Oct 9, 2018) these templates include the Copernicus Publications manuscript preparation guidelines for authors. The Copernicus Publications staff were kind enough to give a test document a quick review, and all seems in order, though of course any problems and questions should be directed to the software’s vibrant community and not to the publisher.

The following code snippets and screenshot demonstrate the workflow. Lines starting with # are code comments and explain the steps. The examples are ready to use once the required packages are installed, as sketched directly below.
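A one-time installation from CRAN might look like this (the GitHub repository path for suppdata, which is used later in this article, is an assumption):

# one-time installation of the required packages from CRAN:
install.packages(c("rmarkdown", "rticles"))

# suppdata (used further below) was not yet on CRAN at the time of
# writing; it can be installed from its development repository,
# e.g. (repository path assumed):
# remotes::install_github("ropensci/suppdata")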

# load required R extension packages:
library("rticles")
library("rmarkdown")

# create a new document using a template:
rmarkdown::draft(file = "MyArticle.Rmd",
                 template = "copernicus_article",
                 package = "rticles", edit = FALSE)

# render the source of the document to the default output format:
rmarkdown::render(input = "MyArticle/MyArticle.Rmd")

The commands created a directory with the Copernicus Publications template’s files, including an R Markdown (.Rmd) file ready to be edited by you (left-hand side of the screenshot), a LaTeX (.tex) file for submission to the publisher, and a .pdf file for inspecting the final results and sharing with your colleagues (right-hand side of the screenshot). You can see how simple it is to format text, insert citations, chemical formulas or equations, and add figures, and how they are rendered into a high-quality output file.

All of these steps may also be completed with user-friendly forms when using RStudio, a popular development and authoring environment available for all operating systems. The left-hand side of the following screenshot shows the form for creating a new document based on a template, and the right-hand side shows the menu for rendering, called “knitting” in R Markdown because code and text are combined into one document like threads in a garment.

And in case you decide at the last minute to submit to a different journal, rticles supports many publishers, so you only have to adjust the template while the whole content stays the same.
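For example, a minimal sketch of drafting the same content against another publisher’s template might look like this (the template name plos_article is taken from the rticles documentation and should be checked against the installed version):

# create the same article against a different publisher's template;
# only the template argument changes, the content stays the same
# (template name should be verified against your rticles version):
rmarkdown::draft(file = "MyArticle.Rmd",
                 template = "plos_article",
                 package = "rticles", edit = FALSE)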

Sustainable access to supplemental data

Data published today should be deposited and properly cited using appropriate research data repositories, following the FAIR data principles. Journals require authors to follow these principles, see for example the Copernicus Publications data policy or a recent announcement by Nature. Other publishers used to require, and some still do today, that supplemental information (SI), such as dataset files, extra figures, or extensive descriptions of experimental procedures, be stored as part of the article. Usually only the article itself receives a digital object identifier (DOI) for long-term identification and availability. The DOI minted by the publisher is not suitable for direct access to supplemental files, because it points to a landing page about the identified object. This landing page is designed to be read by humans, not by computers.

The R package suppdata closes this gap. It supports downloading supplemental information using the article’s DOI. This way, suppdata enables long-term reproducible data access where data was published as SI in the past or, in exceptional cases, still is today, for example if you write about a reproduction of a published article. In the latest version available from GitHub (suppdata is on its way to CRAN), the supported publishers include Copernicus Publications. The following example code downloads a data file for the article “Divergence of seafloor elevation and sea level rise in coral reef ecosystems” by Yates et al., published in Biogeosciences in 2017. The code then creates a mostly meaningless plot, shown below.

# load required R extension package:
library("suppdata")

# download a specific supplemental information (SI) file
# for an article using the article's DOI:
csv_file <- suppdata::suppdata(
  x = "10.5194/bg-14-1739-2017",
  si = "Table S1 v2 UFK FOR_PUBLICATION.csv")

# read the data and plot it (toy example!):
my_data <- read.csv(file = csv_file, skip = 3)
plot(x = my_data$NAVD88_G03, y = my_data$RASTERVALU,
     xlab = "Historical elevation (NAVD88 GEOID03))",
     ylab = "LiDAR elevation (NAVD88 GEOID03)",
     main = "A data plot for article 10.5194/bg-14-1739-2017",
     pch = 20, cex = 0.5)

Main takeaways

Authoring submission-ready manuscripts for journals of Copernicus Publications just got a lot easier. Everybody who can write manuscripts with a word processor can quickly learn R Markdown and benefit from a preproducible data science workflow. Digital notebooks not only improve day-to-day research habits; the same workflow is suitable for authoring high-quality scholarly manuscripts and graphics. The interaction with the publisher is smooth thanks to the LaTeX submission format, but you never have to write any LaTeX. The workflow is based on an established Free and Open Source software stack and embraces the idea of preproducibility and the principles of Open Science. The software is maintained by an active, growing, and welcoming community of researchers and developers with a strong connection to the geospatial sciences. Because the notebook is complete and consistent, you, a colleague, or a student can easily pick up the work at a later time. The road to effective and transparent research begins with a first step – take it!

Acknowledgements

The software updates were contributed by Daniel Nüst from the project Opening Reproducible Research (o2r) at the Institute for Geoinformatics, University of Münster, Germany, but would not have been possible without the support of Copernicus Publications, the software maintainers, most notably Yihui Xie and Will Pearse, and the general awesomeness of the R, R-spatial, Open Science, and Reproducible Research communities. The blog text was greatly improved with feedback by EGU’s Olivia Trani and Copernicus Publications’ Xenia van Edig. Thank you!

By Daniel Nüst, researcher at the Institute for Geoinformatics, University of Münster, Germany

[This article is cross-posted on the Opening Reproducible Research project blog]

Preprint power: changing the publishing scene

Open access publishing has become common practice in the science community. In this guest post, David Fernández-Blanco, a contributor to the EGU Tectonics and Structural Geology Division blog, presents one facet of open access that is changing the publishing system for many geoscientists: preprints.

Open access initiatives confronting the publishing system

The idea of open access publishing and freely sharing research outputs is becoming widely embraced by the scientific community. The limitations of traditional publishing practices and the misuse of this system are some of the key drivers behind the rise of open access initiatives. Additionally, the open access movement has been pushed even further by current online capacities to widely share research as it is produced.

Efforts to make open access the norm in publishing have been active for quite some time now. For example, almost two decades ago, the European Geosciences Union (EGU) launched its first open access journals, which hold research papers open for interactive online discussion. The EGU also allows manuscripts to be reviewed online by anyone in the community before they are finally published in its peer-reviewed journals.

This trend is also now starting to be reflected at an institutional level. For example, all publicly funded scientific papers in Europe could be free to access by 2020, thanks to a reform promoted in 2016 by Carlos Moedas, the European Union’s Commissioner for Research, Science and Innovation.

More recently, in late 2017, around 200 German universities and research organisations cancelled the renewal of their Elsevier subscriptions due to unmet demands for lower prices and open access policies. Similarly, French institutions refused a new deal with Springer in early 2018. Now, Swedish researchers have followed suit, deciding to cancel their agreement with Elsevier. All these international initiatives are confronting the established publishing system.

The community-driven revolution

Within this context, it’s no surprise that the scientific community has come up with various exciting initiatives that promote open access, such as creating servers to share preprints. Preprints are scientific contributions ready to be shared with other scientists, but that are not yet (or are in the process of being) peer-reviewed. A preprint server is an online platform hosting preprints and making them freely available online.

Many journals that were slow to accept these servers are updating their policies to adapt to the steadily growing use of preprints across a wide range of scientific communities. Now most journals welcome manuscripts hosted by a preprint server. Even job postings and funding agencies are changing their policies. For example, the European Research Council (ERC) Starting and Consolidator Grants now take applicant preprints into consideration.

Preprints: changing the publishing system

arXiv is the oldest and most established preprint server. It was created in 1991, initially directed towards physics research. The server receives on average 10,000 submissions per month and now hosts over one million manuscripts. arXiv set a precedent for preprints, and servers covering other scientific fields have since emerged, such as bioRxiv and ChemRxiv.

Credit: EarthArXiv

EarthArXiv was the first to fill the preprint gap for the Earth sciences. It was launched in October 2017 by Tom Narock, an assistant professor at Notre Dame of Maryland University in Baltimore (US), and Christopher Jackson, a professor at Imperial College London (UK). In the first 24 hours after its online launch, this preprint server already had nine submissions from geoscientists.

The server now holds more than 400 preprints, approved for publication after moderation, and gets around 1,600 downloads monthly. The platform’s policy may well contribute to its success – EarthArXiv is an independent preprint server strongly supported by the Earth sciences community, now run by 125 volunteers. The logo, for example, was a crowdsourcing effort: through social media, EarthArXiv asked the online community to send their designs, and a poll was then held to decide which of the submitted logos would be selected. Additionally, the server’s Diversity Statement and Moderation Policy were both developed communally.

Credit: ESSOAr

In February 2018, some months after EarthArXiv went live, another platform serving the Earth sciences was born: the American Geophysical Union’s Earth and Space Science Open Archive, ESSOAr. The two platforms take markedly different approaches; ESSOAr is partially supported by Wiley, a publishing company, while EarthArXiv is independent of any publisher. The ESSOAr server is gaining momentum by hosting conference posters, while EarthArXiv plans to focus on preprint manuscripts, at least for the near future. ESSOAr currently hosts 120 posters and nine preprints.

What is the power of preprints?

How can researchers benefit from these new online sources?

No delays:

Preprint servers allow rapid dissemination. Through preprints, new scientific findings are shared directly with other scientists. The manuscript is immediately available after being uploaded, meaning it is searchable right away. There is no delay for peer-review, editorial decisions, or lengthy journal production.

Visibility:

A DOI is assigned to the work, so it is citable as soon as it is uploaded. This is especially helpful to early career scientists seeking employment and funding opportunities, as they can demonstrate their scholarly track record at any point.

Engagement:

Making research visible to the community can lead to helpful feedback and constructive, transparent discussions. Some servers and participating authors have promoted their preprints through social media, in many cases initiating productive conversations with fellow scientists. Hence, preprints promote not only healthy exchanges, but they may also lead to improvements to the initial manuscript. Also, through these exchanges, which occur outside of the journal-led peer-review route, it is possible to network and build collaborative links with fellow scientists.

No boundaries:

Preprints allow everyone to have access to science, making knowledge available across boundaries.

The servers are open to everyone, without cost, forever. This also means taxpayers have free access to the science they pay for.

Backup:

Preprint servers are a useful way to self-archive documents. Many preprint servers also host postprints, i.e. articles that have already been published (after the embargo period that applies to some journals).

Given the difference between the publishing industry’s current model and preprint practices, it is not surprising to find an increasing number of scientists driving the preprint movement forward. It is possible that many of these researchers are motivated to contribute to a transparent process and to promote open science within their community and to the public. This motivation is indeed the true power of preprints.

Editor’s note: This is a guest blog post that expresses the opinion of its author, whose views may differ from those of the European Geosciences Union. We hope the post can serve to generate discussion and a civilised debate amongst our readers.

Migrating scientists

Scientific research is no doubt enriched by interdisciplinarity and collaborations which cross borders. This, combined with the scarcity of academic positions and the need to broaden one’s horizons by experiencing varied research environments, leads many scientists to relocate (if only on a short-term basis) to a country which is not their own. In today’s post, freelance science writer Robert Emberson explores the pros and cons of the nomadic lifestyle many researchers find themselves embracing in order to further their work.

Scientists can consider themselves a lucky group of people. Having colleagues across the world working passionately at advancing the spectrum of human knowledge offers more opportunities to collaborate across national borders than perhaps any other field of human endeavour. Working with researchers of different nationalities is a chance to share ideas and experience; more often than not, the whole is greater than the sum of its parts.

In many cases though, this collaboration requires scientists to move their whole lives, temporarily or permanently, to new countries. Research on a given topic is almost never focused in one geographic region, and so a significant minority of scientists leave their homeland to pursue their careers. In September this year, the Twitter account @realscientists started a discussion about the implications of this movement, under the hashtag #migratingscientists. Many researchers shared inspirational and personal tales about their peripatetic lifestyles, and these brief snippets serve as a useful insight into the disruptive nature of crossing borders for work.

What are the deeper lessons we can take from scientists who migrate for work? What impact does it have on their scientific, and personal lives?

A recent analysis of published studies has suggested that migrating might well improve the career prospects of scientists. Sugimoto and colleagues analysed the citation scores of 14 million papers (published between 2008 and 2015) from 16 million authors, and found that, in general, papers written by scientists who moved country during that time have citation scores 40% higher than those by authors who stayed put. Surprisingly, despite a perception that international collaboration is widespread, only 4% of the scientists in the dataset moved during the window of observation.

The perception of extensive movement for researchers may be coloured by science in the English-speaking world. Foreign-born researchers make up 27% of scientists or engineers in the USA, and 13% in the UK. These countries seem to benefit significantly in terms of the impact of the research produced within their borders; countries with greater mobility tend to produce more highly cited papers. It’s a mutually beneficial relationship, at least in terms of citations, and moreover researchers returning home can bring with them a wider network of colleagues, potentially boosting research and development in their own countries.

I spoke to the lead author, Professor Sugimoto, about these trends, and she told me that much of it comes down to what is available in these countries.

“Scholars do best when they have access to resources (personnel, infrastructure, and materials)”, she says. “Countries with high scientific capacity and investment also tend to have a critical mass of scholars. Collaboration has been linked to higher production and citation, so it is no surprise that those with access to enlarge their network are likely to be successful on these metrics.”

The US and UK are two countries where open borders are increasingly under attack. Immigration is always a hot-button topic, and while in both countries an opposition to immigration is not necessarily new, increased restrictions on immigration are now more likely with a Republican-led government in the US and Brexit in the UK. Already there are suggestions that researchers are increasingly looking elsewhere for positions; based on the studies, this could lead to a decline in the impact of research from these countries.

As shown by Prof Sugimoto and colleagues, scientists don’t exactly fit the standard definition of an immigrant. The researchers point toward mobility, rather than migration, as the more appropriate descriptive term. Scientists tend to return to their home country after spending time abroad, and as such are temporary rather than permanent migrants. Social attitudes towards skilled workers tend to differ from those surrounding long-term immigrants, and it would benefit researchers if policymakers went out of their way to emphasise that scientists fit into this category.

According to Professor Sugimoto, the short-term nature of mobility is what is most beneficial.

“Unless these scholars maintain ties with their home countries, emigration is likely to yield to deficits for other countries. Circulation, on the other hand, should yield benefits for all countries. Short-term stays can establish ties and provide an influx of resources, without necessarily removing scholars from their home networks.”

Treating scientists as visiting experts, then, is perhaps a more productive way forward.

But immigration visas and increases in citation indices are just one side of the story for scientists. Reading through some of the tweets tagged with #migratingscientists, many focus on the upheaval of their personal lives, for better or worse. It’s sometimes too easy to think about researchers as ‘human capital,’ but each of those humans has personal connections and a definition of home. Some studies suggest that foreign-born researchers may be more productive than their home-grown counterparts, but their satisfaction with life tends to be lower. What’s the deal?

Maslow’s hierarchy of needs, a framework commonly used in sociology to understand different human requirements and personal development, suggests that the human need for Belonging is more fundamental than the requirement for Self-fulfilment. In other words, before researchers can genuinely accomplish their best work, they have a more basic need for a network of friends and family to belong to, or a place to call home. Finding this sense of belonging can be tricky in a foreign country. Language barriers can make it a struggle to meet new friends, and cultural tropes and mores may be more difficult to transcend than they first seem, particularly when attitudes towards the researcher’s race or gender differ.

Early career researchers on short-term contracts may also struggle to maintain a sense of belonging to a particular place; extensive travel and fieldwork can exacerbate this. As a PhD student living in a foreign country and travelling for labwork, field campaigns and conferences, I sometimes felt like George Clooney’s character in the film Up in the Air, who struggles with a life lived out of a backpack and in airport lounges.

Migrating scientists must make choices about close personal relationships; should they leave a partner behind or try to make it work long-distance? It’s doubly difficult to find positions for two people, let alone to move a more extended family. Many of the stories on Twitter stress the importance of a supportive partner or family.

Pay may also be lower for foreign-born scientists. Despite their outsize contribution to research output, foreign scientists in the US may be paid less than their peers, both in terms of salary and the availability of funding sources. These hurdles make an already tricky transition to a new country significantly harder.

So it seems the research impact on a national and individual scale may benefit from increased mobility of researchers, but at the same time the personal tribulations may make this a challenge for many scientists.

How do scientists weigh up these pros and cons? Well, if Twitter is anything to go on, they’re clearly an enthusiastic bunch of folks, since many of the stories tend to emphasise the fun had along the way, as well as the positive experiences.

Given that these nitty-gritty questions about personal experience are unsurprisingly hard to quantify, our understanding of the impact of mobility on scientists’ personal lives is often based on these kinds of anecdotes; it would be greatly beneficial to survey researchers more widely to ascertain what kind of systematic effects migration induces. A more qualified comparison with the citation-based indices would then be feasible.

For now, even if removing the obstacles to scientists moving across borders may raise questions amongst some policymakers, it would reduce the negative connotations of migrating for research – which might allow for wider collaboration, and a more effective global body of scientists.

By Robert Emberson, freelance science writer

Editor’s note: This is a guest blog post that expresses the opinion of its author, whose views may differ from those of the European Geosciences Union. We hope the post can serve to generate discussion and a civilised debate amongst our readers.

Enmeshed in the gears of publishing – lessons from working as a young editor

Editors of scientific journals play an important role in the research publication process. They act as the midpoint between authors and reviewers, and set the direction of a given journal. However, for an early career scientist like me (I only defended my PhD in early December 2016), the intricacies of editorial work remained somewhat mysterious. Many academic journals tend to appoint established, more senior scientists to these roles, and while most scientists interact with editors regularly, their role is not commonly explained to more junior researchers. I was fortunate to get the chance to work, short term, as an associate editor at Nature Geoscience in the first four months of this year (2017). During that time, I learned a number of lessons about scientific publishing that I felt could be valuable to the community at large.

What does an editor actually do?

The role of the editor is often hidden from readers; in both paywalled and open-access journals, the notes and thoughts editors make on submitted manuscripts are generally kept private. One of the first things to appreciate is that editors judge whether a manuscript meets a set of editorial thresholds that would make it appropriate for the journal in question, rather than whether the study is correctly designed or the results are robust. I’d argue most editors are looking for a balance between an advance beyond the existing literature and the level of interest a manuscript offers for their audience.

At each step of the publication process, from initial submission, through judging referee comments, to making a final decision, the editor is making a judgement whether the manuscript still meets those editorial thresholds.

The vast majority of the papers I got the chance to read were pretty fascinating, but since the journal I was working for is targeted at the whole Earth science community some of these were a bit too esoteric, and as such didn’t fit the thresholds we set to appeal to the journal audience.

I actually found judging papers on the basis of editorial thresholds refreshing – in our capacity as peer reviewers, most scientists are naturally sceptical of methodology and conclusions in other studies, but as an editor, in most cases I was able to take the authors’ conclusions at face value and leave the critical assessment to referees.

That’s where the important difference lies; even though editors are generally scientists by training, since they are naturally not experts in every field that they receive papers from, it’s paramount to find reviewers who have the appropriate expertise and to ask them the right set of questions. In journals with academic editors, the editors may have more leeway to make critical comments, but impartiality is key.

Much of this may already be clear to many readers, but perhaps less so to more junior scientists. Many of the editorial decisions are somewhat subjective, like gauging the level of interest to a journal audience.

In the context of open access research journals, I think it’s worth asking whether the editorial decisions should also be made openly readable by authors and referees – this might aid potential authors in deciding how to pitch their articles to a given journal. This feeds into my next point – what are journals looking for?

By which metrics do journals judge studies?

The second big thing I picked up is that the amount of work does not always equate to a paper being appropriate for a given journal. Invariably, authors have clearly worked hard, and it’s often really tricky to explain to authors that their study is not a good fit for the journal you’re working for.

Speaking somewhat cynically, journals run for profit are interested in articles that can sell more copies or subscriptions. Since the audiences are primarily scientists, “scientific significance” will be a dominant consideration, but Nature and subsidiary journals also directly compare the mainstream media coverage of some of their articles with that of Science – that competition is important to their business.

Many other authors have discussed the relative merits of “prestige” journals (including Nobel prize winners – https://www.theguardian.com/science/2013/dec/09/nobel-winner-boycott-science-journals), and all I’ll add here is that what strikes me most is that ‘number of grad student hours worked’ is often unrelated to whether an article would be of broader interest to the mainstream media. The majority of articles don’t attract media attention of course, but I’d also argue that “scientific significance” is not strongly linked to the amount of time that goes into each study.

In the long run, high quality science tends to ensure a strong readership of any journal, but in my experience as an editor the quality of science in submitted manuscripts tends to be universally strong – the scientific method is followed and conclusions are robust, but in some cases they’re just pitched at the wrong audience. I’d argue this is why meta-analyses have found that in the majority of cases, articles that are initially rejected are later accepted in journals of similar ‘prestige’ (Weller 2001, Moore et al. 2017).

As such, it’s imperative that authors tailor their manuscripts to the appropriate audience. Editors from every journal are picking from the same pool of peer reviewers, so the quality of reviews – which ultimately determines the robustness of a study – should be consistent across journals; to meet editorial thresholds, prospective authors should therefore think about who is reading the journal.

It’s certainly a fine line to walk – studies that are confirmatory of prior work tend to attract fewer readers, and as such editors may be less inclined to take an interest, but these are nonetheless important for the scientific canon.

In my short time as an editor I certainly didn’t see a way around these problems, but it was eye-opening to see the gears of the publication system – the machine from within, as it were.

Who gets to review?

One of the most time-consuming jobs of an editor is finding referees for manuscripts. It generally takes as long as reading the manuscript in detail, if not far longer!

The ideal set of referees should first have the required set of expertise to properly assess the paper in question, and then beyond that be representative of the field at large. Moreover, they need to have no conflict of interest with the authors of the paper. There are an awful lot of scientists working in the world at the moment, but in some sub-fields it can be pretty hard to find individuals who fit all these categories.

For example, some studies in smaller research fields with a large number of senior co-authors often unintentionally rule out vast swathes of their colleagues as referees, simply because they have collaborated extensively.

Ironically, working with everyone in your field leaves no-one left to review your work! I have no doubt that the vast majority of scientists would be able to referee a colleague’s work impartially, but striving for truly impartial review should be an aim of an editor.

As mentioned above, finding referees who represent the field is also important. More senior scientists have a greater range of experience, but tend to have less time available to review, while junior researchers can often provide more in-depth reviews of specific aspects. Referees from a range of geographic locations help provide diversity of opinion, as well as a fair balance in terms of gender.

It was certainly informative to compare the diversity of authors with the diversity of the referees they recommended; the pool of recommended referees in general tends to be more male-dominated and more US-centric than the authors themselves.

A positive way of looking at this might be that this represents a diversifying Earth science community; recommended referees tend to be more established scientists, so greater author diversity might represent a changing demographic. On the other hand, it’s certainly worth bearing in mind that since reviewing is increasingly becoming a metric by which scientists themselves are judged, recommending referees who are more diverse is a way of encouraging a more varied and open community.

What’s the job like?

Editorial work is definitely rewarding – I certainly felt part of the scientific process, and providing a service to authors and the readership community is the main remit of the job.

I got to read a lot of interesting science from a range of different places, and worked with some highly motivated people. It’s a steep learning curve, and tends to be consistently busy; papers are always coming in, so there’s always a need to keep working.

Perhaps I’m biased, but I’d also suggest that scientists could work as editors at almost any stage in their careers, and it offers a neat place between the world of academia and science communication, which I found fascinating.

By Robert Emberson, freelance science writer

References

Moore, S., Neylon, C., Eve, M. P., O’Donnell, D. P., and Pattinson, D. 2017. “Excellence R Us”: university research and the fetishisation of excellence. Palgrave Communications, 3, 16105

Weller, A. C. 2001. Editorial Peer Review: Its Strengths and Weaknesses. Information Today: Medford, NJ