Writing the Methods Section

An important part of science is sharing your results in the form of papers. Perhaps even more important is making those results understandable and reproducible in the Methods section. This week, Adina E. Pusok, Postdoctoral Researcher at the Department of Earth Sciences, University of Oxford, shares some very helpful tips for writing the Methods in a concise, efficient, and complete way. Writing up the methods should be no trip to fantasy land!

Adina Pusok. Postdoctoral Researcher in the Department of Earth Sciences, University of Oxford, UK.

For my occasional contribution to the Geodynamics blog, I return with (what I think is) another essential topic from The Starter Pack for Early Career Geodynamicists (see end of blog post): how to write the methods section in your thesis, report or publication. Or, using the original title: “Writing up the methods should be no trip to fantasy land”. Don’t get me wrong, I love the fantasy genre, but out of an entire scientific manuscript that pushes the boundaries of knowledge (with additional implications and/or speculations), the methods section should be plain and simple, objective and logically described – “just as it is”.

The motivation for this post came some months ago, when I was reviewing two articles within a short time of each other and felt that some of my comments were repeating themselves – incomplete Methods sections and assumptions left to be inferred by the reader, which ultimately made the assessment of the results more difficult. But I also think it is not fair to write harsh reviews for these reasons, since, again, there is little formal training for Early Career Scientists (ECS) on how to write scientific papers. Moreover, even when there is formal training in academic writing, it is often generalized for all scientific disciplines, ignoring some important field-specific elements. For example, a medical trial methods section will look different from an astrophysics methods section, and within Earth Sciences, the methods section for a laboratory experiment on the deformation of olivine will contain different things compared to a systematic study of numerical simulations of subduction dynamics.

A common approach by most students (especially first-timers) is to dump everything on paper and then hope it represents a complete collection of methods. However, with the increasing complexity of studies, this collection of methods has neither head nor tail, and is prone to errors. Such pitfalls can make the manuscript cumbersome to read or even call the validity of the research into question. Generally, journals do have guidelines on how the Methods should be formatted and how long it may be, but not necessarily on what it should contain, because that varies from field to field. I believe there should be a more systematic approach to it. So in this post, I aim to describe some aspects of the Methods section, and then propose a structure that (mostly) fits general Geodynamics studies.

1. The scientific Methods section

The Methods section is considered one of the most important parts of any scientific manuscript (Kallet, 2004). A good Methods section allows other scientists to verify results and conclusions, understand whether the design of the experiment is relevant for the scientific question (validity), and to build on the work presented (reproducibility) by assessing alternative methods that might produce differing results.

Thus, the Methods section has one major goal: to allow others to verify the experimental design and to reproduce the results.

It is also the first section to be written in a manuscript because it sets the stage for the results and conclusions presented. So, what exactly do you need to include when writing your Methods section? The title by T.M. Annesley (2010) puts it perfectly into words: “Who, what, when, where, how, and why: The ingredients in the recipe for a successful methods section”.

  • Who performed the experiment?
  • What was done to answer the research question?
  • When and where was the experiment undertaken?
  • How was the experiment done, and how were the results analyzed?
  • Why were specific procedures chosen?

Across sciences, the Methods section should contain detailed information on the research design, participants, equipment, materials, variables, and actions taken by the participants. However, what that detailed information consists of, depends on each field.

2. The Methods section for numerical modeling in Geodynamics

I propose below a structure for the Methods section intended for numerical simulation studies in Geodynamics. I want to mention that this structure is meant as a suggestion, especially for ECS, and can be adapted to every individual and study. Geodynamics studies may have different aspects: a data component (collection, post-processing), a theoretical (mathematical and physical) framework, a numerical (computational) framework and an analogue component (laboratory experiments). The majority of studies have one or two of these components, while a few will have all of them. In this post, I will focus primarily on studies that use numerical simulations to address a question about the solid Earth, thus having primarily a theoretical and a numerical component.

Before I start, I think a great Methods section is like a cake recipe with which your baked cake looks just like the one in the photo. All the ingredients and the baking steps need to be explained precisely and clearly in order to be reproduced. We should aim to write the Methods with this in mind: if someone were ‘to bake’ (reproduce) my study, could they succeed based on the instructions I provided? There are many ways to write your Methods; my way is to break it into logical sections, going from theoretical elements to numerical ones.

Proposed structure:

  1. Brief outline – A general paragraph describing the study design and the main steps taken to approach the scientific question posed in the Introduction.
  2. Theoretical framework – Any numerical simulation is based on some mathematical and physical concepts, so it is logical to start from here, going from the most important elements to the least important.
    • 2.1 Governing equations – Section describing the conservation of mass, momentum, and energy.
    • 2.2 Constitutive equations – Section describing all the other elements entering the conservation equations above such as: rheology (deformation mechanisms), equation of state, phase transformations, etc. Each of these topics can be explained separately in subsections. For example,
      • 2.2.1 Rheology
        • 2.2.1.1 Viscous deformation
        • 2.2.1.2 Plastic deformation
        • 2.2.1.3 Elastic deformation
      • 2.2.2 Phase transformations
      • 2.2.3 Water migration in the models
    • Figures and tables:
      • Table of parameters – for quick definition of parameters used in equations.
  3. Computational framework – Section explaining how the theory (Section 2) is solved on the computer.
    • 3.1 Numerical methods – code details, discretization methods, programming language, solvers, software libraries used, etc. If you are using a community code, these details should already be provided in previous publications, which you can cite.
    • 3.2 Model setup – Section describing the layout of the current experiment.
      • 3.2.1 Details: model geometry, resolution (numerical and physical), parameters, initial and boundary conditions, details on rheological parameters (constitutive equations), etc.
      • 3.2.2 Motivate the choice of parameters – why is it relevant for addressing the scientific questions?
    • Figures and tables:
      • Table of parameter values, rheological flow laws used.
      • Table with all model details (to reduce text).
      • Figure illustrating the model geometry, initial and boundary conditions.
    • *NOTE: If you are testing/implementing a new feature in the code, you should allocate a new section for it, and spend more effort explaining it in detail. Do not expect many people to be familiar with it.
  4. Study design – Section describing the layout of the study.
    • 4.1 What is being tested/varied? How many simulations were performed (model and parameter space)? Why perform those simulations/vary those parameters?
    • 4.2 Code and data availability – code availability, input files, or other data necessary to reproduce the simulation results (e.g., installation guides). Many journals today only accept for publication studies in which data and code availability is declared in a standard form (e.g., AGU journals). Some other questions to answer here: where were the simulations performed? How many cores were used? Can the results be reproduced on a laptop/desktop, or is access to a cluster needed?
    • Figures and tables:
      • Simulations table – indicating all simulations that were run and which parameters were varied. When the number of simulations is high (e.g., Monte Carlo sampling), you should still indicate which parameters were varied and the total number of simulations.
  5. Analysis of numerical data – details on visualization/post-processing techniques, and a description of how the data will be presented in the Results section. This step is generally ignored, but be open about it: “visualization was performed in paraview/matlab, and post-processing scripts were developed in python/matlab/unicorn language by the author”. If your post-processing methods are more complex, give more details on those too (e.g., statistical methods used for data analysis). A minimal sketch of such a post-processing script is given below.
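For illustration only, here is a minimal Python sketch of the kind of post-processing script mentioned in item 5 that you could describe (and share) in the Methods. The file pattern, column layout, and diagnostics (RMS velocity and mean temperature) are hypothetical assumptions chosen for the example, not the output format of any particular geodynamics code:

# Hypothetical post-processing sketch: the snapshot file names and the
# column layout (x, y, vx, vy, T) are assumptions made for this example.
import glob
import numpy as np

def rms_velocity(vx, vy):
    """Root-mean-square velocity of one model snapshot."""
    return np.sqrt(np.mean(vx**2 + vy**2))

def process_snapshots(pattern="output/snapshot_*.csv"):
    """Loop over snapshot files and collect simple time-series diagnostics."""
    diagnostics = []
    for path in sorted(glob.glob(pattern)):
        # each row of a snapshot file: x, y, vx, vy, T (header line skipped)
        x, y, vx, vy, T = np.loadtxt(path, delimiter=",", skiprows=1, unpack=True)
        diagnostics.append((path, rms_velocity(vx, vy), T.mean()))
    return diagnostics

if __name__ == "__main__":
    for path, vrms, t_mean in process_snapshots():
        print(f"{path}: v_rms = {vrms:.3e}, mean T = {t_mean:.1f}")

Even a short script like this, referenced from the Methods and archived with the paper, tells the reader exactly how the figures in the Results section were produced.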

 

Before you think you’ve finished the Methods section, go over your assumptions and make sure you have explained them clearly! Geodynamics is a field in which we take a complex system (the Earth or another planetary body) and simplify it to a level at which we can extract some understanding about it. In doing so, we rely on a physically consistent set of assumptions. It is important to bear in mind that this set of assumptions may not always be obvious to the audience. If your reviewers have questions about your methods or about an interpretation of the results that you think is obvious, it means that something was not clearly explained. Be pre-emptive and state your assumptions. As long as they are explicit and consistent, reviewers and readers will find fewer flaws in your study. Why that choice of parameters? Why did you do it that way?

3. A few other things…

It is good practice to write a complete Methods section for every manuscript, such as one following the structure above. However, some journals will ask for a short version (1–2 paragraphs) to be included in the manuscript, with the complete Methods section in a separate resource (e.g., Supplementary Data, Supporting Information, a repository) so that it is still made available to the community. For other journals, it will be difficult to find a balance between completeness (sufficient detail to allow replication and verification of validity) and conciseness (following the journal guidelines on word count limits).

To master the writing of the Methods section, it is important to look at other examples with a similar scope and aims (especially the ones you understood clearly and completely). It is also a good idea to keep notes and actually start writing up your equations, model setup, and parameters as the study progresses (much like the mandatory lab notebook).

Finally, some tips on the style of writing of the Methods section:

  • be clear, direct, and precise.
  • be complete, yet concise, to make life easy for the reader.
  • write in the past tense.
  • but use the present tense to describe how the data is presented in the paper.
  • may use both active/passive voice.
  • may use jargon more liberally.
  • cite references for commonly used methods.
  • have a structure and split into smaller sections according to topic.
  • material in each section should be organized by topic from most to least important.
  • use figures, tables and flow diagrams where possible to simplify the explanation of methods.

The Starter Pack for Early Career Geodynamicists

In the interest of not letting the dust accumulate, the growing collection of useful Geodynamics ECS posts (from/for the community):

References:

Kallet, R.H. (2004). How to write the methods section of a research paper. Respir Care, 49(10):1229-32. https://www.ncbi.nlm.nih.gov/pubmed/15447808

Annesley, T.M. (2010). Who, what, when, where, how, and why: the ingredients in the recipe for a successful Methods section. Clin Chem, 56(6):897-901. doi:10.1373/clinchem.2010.146589, https://www.ncbi.nlm.nih.gov/pubmed/20378765

On the resolution of seismic tomography models and the connection to geodynamic modelling (Is blue/red the new cold/hot?) (How many pixels in an Earth??)

What do the blobs mean?

Seismologists work hard to provide the best snapshots of the Earth’s mantle. Yet tomographic models based on different approaches or using different data sets sometimes show quite different details. It is hard for a non-specialist to know whether small-scale anomalies can be trusted, and why. This week Maria Koroni and Daniel Bowden, both postdocs in the Seismology and Wave Physics group at ETH Zürich, tell us how these beautiful images of the Earth are obtained in practice.

Daniel Bowden and Maria Koroni enjoying coffee in Zürich

Seismology is a science that aims at providing tomographic images of the Earth’s interior, similar to X-ray images of the human body. These images can be used as snapshots of the current state of flow patterns inside the mantle. The main way we communicate, from tomographer to geodynamicist, is through the publication of some tomographic image. We seismologists, however, make countless choices, approximations and assumptions, are limited by poor data coverage, and ultimately never fit our data perfectly. These things are often overlooked, or taken for granted and poorly communicated. Inevitably, this undermines the rigour and usefulness of subsequent interpretations in terms of heat or material properties. This post will give an overview of what can worry a seismologist/tomographer. Our goal is not to teach seismic tomography, but to plant a seed that will make geodynamicists push seismologists for better accuracy, robustness, and communicated uncertainty!

A typical day in a seismologist’s life starts with downloading some data for a specific application. Then we cry while looking at waveforms that make no sense (compared to the clean and physically meaningful synthetics calculated the day before). After a sip, or two, or two thousand sips of freshly brewed coffee, and some pre-processing steps to clean up the mess that is real data, the seismologist sets up a measurement of the misfit between synthetics and observed waveforms. Do we try to fit the entire seismogram, just its travel time, or its amplitude? The choice we make in defining this misfit can non-linearly affect our outcome, and there is no clear way to quantify that uncertainty.

After obtaining the misfit measurements, the seismologist starts thinking about best inversion practices in order to derive some model parameters. There are two more factors to consider now: how to mathematically find a solution that fits our data, and how to choose one (subjectively) preferred solution from the many solutions of the problem… The number of (quasi-)arbitrary choices can increase dramatically over the course of the poor seismologist’s day! A minimal sketch of how these choices typically enter the inversion is given below.
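To make this concrete, here is a generic sketch (in LaTeX, with notation chosen purely for illustration, not the formulation of any specific study) of the regularized least-squares problem that lies behind many tomographic inversions:

\min_{\mathbf{m}} \;\; \underbrace{\lVert \mathbf{G}\,\mathbf{m} - \mathbf{d} \rVert^{2}}_{\text{data misfit}} \; + \; \lambda \, \underbrace{\lVert \mathbf{L}\,\mathbf{m} \rVert^{2}}_{\text{regularization}}

Here \mathbf{m} is the model (e.g. velocity perturbations), \mathbf{d} the data (e.g. traveltime or waveform misfit measurements), \mathbf{G} the forward operator, \mathbf{L} a damping or smoothing operator, and \lambda the trade-off parameter. The choices of \mathbf{L} and \lambda are precisely the subjective choices that select one solution out of the many that fit the data.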

The goal is to image seismic anomalies; to present a velocity model that is somehow different from the assumed background. After that, the seismologist can go home, relax and write a paper about what the model shows in geological terms. Or… more questions arise and doubts come flooding in. Are the choices I made sensible? Should I calculate the errors associated with my model? Thermodynamics gives us the basic equations to translate seismic anomalies into thermal anomalies in the Earth, but how can we improve the estimated velocity model for a more realistic interpretation?


Figure 1: A tomographic velocity model, offshore southern California. What do the blobs mean? This figure is modified from the full paper at https://doi.org/10.1002/2016JB012919

Figure 1 is one such example of a velocity model, constructed through seismic tomography (specifically from ambient-noise surface waves). The paper reviews the tectonic history of the crust and upper mantle in this offshore region. We are proud of this model, and sincerely hope it can be of use to those studying tectonics or dynamics. We are also painfully aware of the assumptions that we had to make, however. This picture could look drastically different if we had used a different amount of regularization (smoothing), had made different prior assumptions about where layers may be, had been more or less restrictive in cleaning our raw data observations, or made any number of other changes. We were careful in all these regards, and ran test after test over the course of several months to ensure the process was up to high standards, but for the most part… you just have to take our word for it.

There are a number of features we interpret here: thinning of the crust, upwelling asthenosphere, the formation of volcanic seamounts, etc. But it would not shock me if some other study came out in the coming years that told an entirely different story; indeed, it is part of our process as scientists to continue to challenge and test hypotheses. But what if this model is used as an input to something else as-of-yet unconstrained? In this model, could the Lithosphere-Asthenosphere Boundary (LAB) shown here be 10 km shallower or deeper, and why does it disappear at 200 km along the profile? Couldn’t that impact geodynamicists’ work dramatically? Our field is a collaborative effort, but if we as seismologists can’t properly quantify the uncertainties in our pretty, colourful models, what kind of effect might we be having on the field of geodynamics?

Another example comes from global-scale models. Taking a look at figures 6 and 7 in Meier et al. (2009), ”Global variations of temperature and water content in the mantle transition zone from higher mode surface waves” (DOI:10.1016/j.epsl.2009.03.004), you can observe global discontinuity models and notice their differences. Some major features keep appearing in all of them, which is encouraging, since it suggests that we may indeed be looking at some real properties of the mantle. However, even similar methodologies have often not converged to the same tomographic images. The sources of these discrepancies are the usual plagues of seismic tomography, some of them mentioned above.


Figure 2: Global models of the 410 km discontinuity derived after 5 iterations using traveltime data. We verified that the method retrieves target models almost perfectly. Data can be well modelled in terms of discontinuity structure; but how easily can they be interpreted in terms of thermal and/or compositional variations?

In an effort to improve imaging of mantle discontinuities, especially those at 410 and 660 km depth, which are highly relevant to geodynamics (I’ve been told…), we have put some effort into building up a different approach. Usually, traveltime tomography and one-step interpretation of body-wave traveltimes have been the default for producing images of the mantle transition zone. We proposed an iterative optimisation of a pre-existing model that includes flat discontinuities, using traveltimes in a full-waveform inversion scheme (see Figure 2). The goal was to see whether the new approach can recover the topography of the discontinuities. This method seems to perform very well and offers the potential for higher-resolution imaging. Are my models capable of resolving mineralogical transitions and thermal variations at depths of 410 and 660 km?

The most desired outcome would be not only a model that represents Earth parameters realistically, but also one that provides error bars, which essentially quantify uncertainties. Providing error bars, however, requires extra computational work, and like every pixel-obsessed seismologist, we would be curious to know the extent to which uncertainties are useful to a numerical modeller! Our main question, then, remains: how can we build an interdisciplinary approach that can justify large amounts of burnt computational power?

As (computational) seismologists we pose questions for our regional or global models: are velocity anomalies – intuitively coloured as blue and red blobs representative of heat and mass transfer in the Earth – good enough as they are, or is it essential that we determine their shapes and sizes in greater detail? Determining a range of values for the derived seismic parameters (instead of a single estimate) could allow geodynamicists to take into account different scenarios of complex thermal and compositional patterns. We hope that this short article gave some insight into the questions a seismologist faces each time they derive a tomographic model. The resolution of seismic models is always a point of vigorous discussion, but it could also be a great platform for interaction between seismologists and geodynamicists, so let’s do it!

For an overview of tomographic methodologies the reader is referred to Q. Liu & Y. J. Gu, Seismic imaging: From classical to adjoint tomography, 2012, Tectonophysics. https://doi.org/10.1016/j.tecto.2012.07.006

The past is the key

Lorenzo Colli

“The present is the key to the past” is an oft-used phrase in the context of understanding our planet’s complex evolution. But this perspective can also be flipped, reflected, and reframed. In this Geodynamics 101 post, Lorenzo Colli, Research Assistant Professor at the University of Houston, USA, showcases some of the recent advances in modelling mantle convection.

 

Mantle convection is the fundamental process that drives a large part of the geologic activity at the Earth’s surface. Indeed, mantle convection can be framed as a dynamical theory that complements and expands the kinematic theory of plate tectonics: on the one hand it aims to describe and quantify the forces that cause tectonic processes; on the other, it provides an explanation for features – such as hotspot volcanism, chains of seamounts, large igneous provinces and anomalous non-isostatic topography – that aren’t accounted for by plate tectonics.

Mantle convection is both very simple and very complicated. In its essence, it is simply thermal convection: hot (and lighter) material goes up, cold (and denser) material goes down. We can describe thermal convection using classical equations of fluid dynamics, which are based on well-founded physical principles: the continuity equation enforces conservation of mass; the Navier-Stokes equation deals with conservation of momentum; and the heat equation embodies conservation of energy. Moreover, given the extremely large viscosity of the Earth’s mantle and the low rates of deformation, inertia and turbulence are utterly negligible and the Navier-Stokes equation can be simplified accordingly. One incredible consequence is that the flow field only depends on an instantaneous force balance, not on its past states, and it is thus time reversible. And when I say incredible, I really mean it: it looks like a magic trick. Check it out yourself.
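As a rough sketch (assuming an incompressible mantle under the Boussinesq approximation and infinite Prandtl number; the notation is generic and not tied to any particular code), these three conservation laws can be written as:

\nabla \cdot \mathbf{u} = 0

-\nabla p + \nabla \cdot \left[ \eta \left( \nabla \mathbf{u} + \nabla \mathbf{u}^{T} \right) \right] + \rho_{0}\,\alpha \left( T - T_{0} \right) g \,\hat{\mathbf{z}} = \mathbf{0}

\rho_{0} c_{p} \left( \frac{\partial T}{\partial t} + \mathbf{u} \cdot \nabla T \right) = k \nabla^{2} T + \rho_{0} H

where \mathbf{u} is the velocity, p the dynamic pressure, \eta the viscosity, T the temperature, \alpha the thermal expansivity, k the thermal conductivity, H the internal heating per unit mass, and \hat{\mathbf{z}} the upward vertical unit vector. The inertial term of the Navier-Stokes equation has been dropped, which is exactly what makes the flow field depend only on the instantaneous force balance.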

With four parameters I can fit an elephant, and with five I can make him wiggle his trunk

This is as simple as it gets, in the sense that from here onward every additional aspect of mantle convection results in a more complex system: 3D variations in rheology and composition; phase transitions, melting and, more generally, the thermodynamics of mantle minerals; the feedbacks between deep Earth dynamics and surface processes. Each of these additional aspects results in a system that is harder and costlier to solve numerically, so much so that numerical models need to compromise, including some but excluding others, or giving up dimensionality, domain size or the ability to advance in time. More importantly, most of these aspects are so-called subgrid-scale processes: they deal with the macroscopic effect of some microscopic process that cannot be modelled at the same scale as the macroscopic flow and is too costly to model at the appropriate scale. Consequently, it needs to be parametrized. To make matters worse, some of these microscopic processes are not understood sufficiently well to begin with: the parametrizations are not formally derived from first-principle physics but are long-range extrapolations of semi-empirical laws. The end result is that it is possible to generate more complex – thus, in this regard, more Earth-like – models of mantle convection at the cost of an increase in tunable parameters. But what parameters give a truly better model? How can we test it?

Figure 1: The mantle convection model on the left runs in ten minutes on your laptop. It is not the Earth. The one on the right takes two days on a supercomputer. It is fancier, but it is still not the real Earth.

Meteorologists face similar issues with their models of atmospheric circulation. For example, processes related to turbulence, clouds and rainfall need to be parametrized. Early weather forecast models were… less than ideal. But meteorologists can compare their model predictions every day with what actually occurs, thus objectively and quantitatively assessing what works and what doesn’t. As a result, during the last 40 years weather predictions have improved steadily (Bauer et al., 2015). Current models are better at using available information (what is technically called data assimilation; more on this later) and have parametrizations that better represent the physics of the underlying processes.

If time travel is possible, where are the geophysicists from the future?

We could do the same, in theory. We can initialize a mantle convection model with some best estimate for the present-day state of the Earth’s mantle and let it run forward into the future, with the explicit aim of forecasting its future evolution. But mantle convection evolves over millions of years instead of days, thus making future predictions impractical. Another option would be to initialize a mantle convection model in the distant past and run it forward, thus making predictions-in-the-past. But in this case we really don’t know the state of the mantle in the past. And as mantle convection is a chaotic process, even a small error in the initial condition quickly grows into a completely different model trajectory (Bello et al., 2014). One can mitigate this chaotic divergence by using data assimilation and imposing surface velocities as reconstructed by a kinematic model of past plate motions (Bunge et al., 1998), which indeed tends to bring the modelled evolution closer to the true one (Colli et al., 2015). But it would take hundreds of millions of years of error-free plate motions to eliminate the influence of the unknown initial condition.

As I mentioned before, the flow field is time reversible, so one can try to start from the present-day state and integrate the governing equations backward in time. But while the flow field is time reversible, the temperature field is not. Heat diffusion is physically irreversible and mathematically unstable when solved back in time. Plainly said, the temperature field blows up. Heat diffusion needs to be turned off [1], thus keeping only heat advection. This approach, aptly called backward advection (Steinberger and O’Connell, 1997), is limited to only a few tens of millions of years in the past (Conrad and Gurnis, 2003; Moucha and Forte, 2011): the errors induced by neglecting heat diffusion add up and the recovered “initial condition”, when integrated forward in time (or should I say, back to the future), doesn’t land back at the desired present-day state, following instead a divergent trajectory.
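In equations (a schematic sketch, glossing over many details), the forward problem and the backward-advection approximation look like:

\frac{\partial T}{\partial t} + \mathbf{u} \cdot \nabla T = \kappa \nabla^{2} T \quad \text{(forward in time: advection and diffusion)}

\frac{\partial T}{\partial t} + \mathbf{u} \cdot \nabla T = 0 \quad \text{(backward advection: the diffusive term is switched off)}

The diffusive term \kappa \nabla^{2} T is the one that blows up when integrated backward in time, and neglecting it is what limits the method to the relatively recent geological past.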

Per aspera ad astra

As all the simple approaches turn out to be either unfeasible or unsatisfactory, we need to turn our attention to more sophisticated ones. One option is to be more clever about data assimilation, for example using a Kalman filter (Bocher et al., 2016; 2018). This methodology allows the physics of the system, as embodied by the numerical model, to be combined with observational data, while at the same time taking into account their relative uncertainties. A different approach is to pose a formal inverse problem aimed at finding the “optimal” initial condition that evolves into the known (best-estimate) present-day state of the mantle. This inverse problem can be solved using the adjoint method (Bunge et al., 2003; Liu and Gurnis, 2008), a rather elegant mathematical technique that exploits the physics of the system to compute the sensitivity of the final condition to variations in the initial condition. Both methodologies are computationally very expensive. Like, many millions of CPU-hours expensive. But they allow for explicit predictions of the past history of mantle flow (Spasojevic & Gurnis, 2012; Colli et al., 2018), which can then be compared with evidence of past flow states as preserved by the geologic record, for example in the form of regional- and continental-scale unconformities (Friedrich et al., 2018) and planation surfaces (Guillocheau et al., 2018). The past history of the Earth thus holds the key to significantly advancing our understanding of mantle dynamics by allowing us to test and improve our models of mantle convection.
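Schematically (and only as an illustration of the idea, not the exact functional used in the cited studies), the adjoint approach can be written as a minimization over the unknown initial temperature field T_{0}:

\min_{T_{0}} \; \chi(T_{0}) = \frac{1}{2} \int_{\Omega} \left[ T\!\left(\mathbf{x}, t_{\mathrm{present}}; T_{0}\right) - T_{\mathrm{est}}\!\left(\mathbf{x}\right) \right]^{2} \mathrm{d}V

where T(\mathbf{x}, t_{\mathrm{present}}; T_{0}) is the present-day state predicted by running the forward model from the candidate initial condition and T_{\mathrm{est}} is the best estimate of the actual present-day mantle state. Solving the adjoint equations provides the gradient of \chi with respect to T_{0}, which is used to iteratively update the first guess, exactly as sketched in Figure 2.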

Figure 2: A schematic illustration of a reconstruction of past mantle flow obtained via the adjoint method. Symbols represent model states at discrete times. They are connected by lines representing model evolution over time. The procedure starts from a first guess of the state of the mantle in the distant past (orange circle). When evolved in time (red triangles) it will not reproduce the present-day state of the real Earth (purple cross). The adjoint method tells you in which direction the initial condition needs to be shifted in order to move the modeled present-day state closer to the real Earth. By iteratively correcting the first guess an optimized evolution (green stars) can be obtained, which matches the present-day state of the Earth.

[1] Or even to be reversed in sign, to make the time-reversed heat equation unconditionally stable.

Conferences: Secret PhD Drivers

Conferences are an integral part of a PhD. They are the forum for spreading the word about the newest science and for developing professional relationships. But for a PhD student, they are more likely to be a source of palpitations and sweaty palms. This week Kiran Chotalia writes about her personal experience with conferences, and the lessons learnt over the years.

Kiran Chotalia. PhD Student at Dept. of Earth Sciences, University College London, UK.

My PhD is a part of the Deep Volatiles Consortium and a bunch of us started on our pursuit of that floppy hat together. Our first conference adventure was an introduction to the consortium at the University of Oxford, where the new students were to present on themselves and their projects for a whole terrifying two minutes. At this stage, we had only been scientists in training for a few weeks and the thought of getting up in front of a room of established experts was scary, to say the least. Lesson #1: If it’s not a little bit scary, is it even worth doing? It means we care and we want to do the best we can. A healthy dose of fear can push us to work harder and polish our skills, making us better presenters. Overcoming the fear of these new situations takes up a lot of your energy. But it always helps to practice. In particular, I’ve always been encouraged to participate in presentation (poster or oral) competitions. Knowing that you’re going to be judged on your work and presentation skills encourages you to prepare. And this preparation has always helped to calm my nerves to the point where I’m now at the stage I can enjoy presenting a poster.

Regular work goals that crop up in other professions are often absent, especially when we’re starting out. The build-up to a conference acts as a good focus to push for results and some first-pass interpretations. At the conference itself, a competition makes sure people come to see your poster and you can start to get your face out there in your field. Lesson #2: Sign up for presentation competitions. AGU’s Outstanding Student Presentation Award (OSPA) and EGU’s Outstanding Student PICO and Poster (OSPP) awards are well established. At smaller conferences, it’s always worth asking if a competition is taking place as, speaking from experience, they can be easily missed. They also give you a good excuse to practice with your research group in preparation, providing the key component of improving your presentation skills: feedback. Lesson #3: Ask for feedback, not just on your science but on your presenting too. If you’re presenting to people not in your field, practice with office mates that have no idea what you get up to. By practicing, you can begin to find your style of presenting and the best way to convey your science.

Me, (awkwardly) presenting my first poster at the Workshop on the Origin of Plate Tectonics, Locarno.

Sometimes, you’ll be going to conferences not only with your fellow PhD students, but also more senior members. They can introduce you to their friends and colleagues, extending your network, more often than not, when you are socialising over dinner, after the main working day. Lesson #4: Keep your ear to the ground. These events provide a great opportunity to let people know you are on the hunt for a job and hear about positions that might be right for you. At AGU 2018, I became the proud owner of a ‘Job Seeker’ badge, provided by the Careers Centre. It acted as a great way to segue from general job chat into potential leads. A memento that I’ll be hanging on to and dusting off for conferences to come!

One of the biggest changes to my conferencing cycle occurred last year after attending two meetings: CIDER and YoungCEED. Both were workshops geared towards learning and research, with CIDER lasting four weeks and YoungCEED lasting a week. Lesson #5: Attend research specific meetings when the opportunity arises. Even if they don’t seem to align with your research interests from the outset, they are incredible learning opportunities and a great way to expand your research horizons. By attending these meetings, the dynamic of my first conference after them shifted. There was a focus on catching up with the collective work started earlier in the year. Whilst the pace was the most exhausting I’ve experienced thus far, it was also the most rewarding.

Between all the learning and networking, faces start to become familiar. Before you know it, these faces become colleagues and colleagues quickly become friends. In our line of work, our friends are spread over continents, moving from institution to institution, and conferences tend to offer the only opportunity to be in the same place at the same time. This also results in completely losing track of time and catching up into the early hours of the morning, so the next lesson is more subjective. Lesson #6: Know your limits. Some can stay out until 4am and rock up at the 8.30am talk. I wish I was one of these people, but I have a hard time keeping my eyes open past 12.30am. Whatever works for you!

Me, presenting my most recent poster at AGU 2018 with my job seeker badge!

After the conference finishes, you are often in a place that you’ve never visited before. Lesson #7: Have a break. If you can, even an extra day or two of being a tourist is a great treat after a hectic build-up as well as the conference itself. If staying for a mini holiday post-conference is not an option, make sure you take some time when you get home to rest and readjust before you get back to work and start planning for the next one.

Last but not least, Lesson #8: Don’t forget to have fun. The stress surrounding conferences and your PhD in general can at times be all-consuming. Remember to enjoy the small victories of finally getting a code to run or finding time on the SEM to analyse your samples. At conferences, enjoy being surrounded by scientists who are just starting out and by seasoned professionals with a back catalogue of interesting stories. And if you’re lucky enough to be at a conference somewhere sunny, make sure to get outside during the breaks and free time to soak up some vitamin D!

The Shanghai skyline after the Sino-UK Deep Volatiles Annual Meeting at Nanjing University.