Geodynamics
Grace Shephard / Tobias Meier


Tomography and plate tectonics


The Geodynamics 101 series serves to showcase the diversity of research topics and methods in the geodynamics community in an understandable manner. We welcome all researchers – from PhD students to professors – to introduce their area of expertise in a lighthearted, entertaining manner and touch upon some of the outstanding questions and problems related to their fields. For our first ‘Geodynamics 101’ post of 2019, Assistant Prof. Jonny Wu from the University of Houston explains how to delve into the subduction record via seismic tomography and presents some fascinating 3D workflow images showing how an identified oceanic slab can be tested.

Jonny Wu, U. Houston

Tomography… wait, isn’t that what happens in your CAT scan? Although the general public might associate tomography with medical imaging, Earth scientists are well aware that ‘seismic tomography’ has enabled us to peer deeper, and with more clarity, into the Earth’s interior (Fig. 1). What are some of the ways we can download and display tomography to inform our scientific discoveries? Why has seismic tomography been a valuable tool for plate reconstructions? And what are some new approaches for incorporating seismic tomography within plate tectonic models?

Figure 1: Tomographic transect across the East Asian mantle under the Eurasian-South China Sea margin, the Philippine Sea and the western Pacific from Wu and Suppe (2018). The displayed tomography is the MITP08 global P-wave model (Li et al., 2008).

Downloading and displaying seismic tomography

Seismic tomography is a technique for imaging the Earth’s interior in 3-D using seismic waves. For complete beginners, IRIS (Incorporated Research Institutions for Seismology) has an excellent introduction that compares seismic tomography to medical CT scans.

A dizzying number of new, high-quality seismic tomographic models are published every year. For example, the IRIS EMC-EarthModels catalogue currently contains 64 diverse tomographic models that cover most of the Earth, from global to regional scales. By my count, at least seven of these models have been added in the past half year – about one new model a month. Aside from the IRIS catalogue, a plethora of other tomographic models are publicly available from journal data repositories, personal webpages, or by an e-mail request to the author.

Downloading a tomographic model is just the first step. If one does not have access to custom workflows and scripts to display tomography, consider visiting an online tomography viewer. I have listed a few of these websites at the end of this blog post. A personal favourite of mine is the Hades Underworld Explorer built by Douwe van Hinsbergen and colleagues at Utrecht University, which uses a familiar Google Maps user interface. By simply dragging a left and right pin on the map, a user can display a global tomographic section in real time. The section can be shown in either a polar or Cartesian view and exported to an .svg file. Another tool I have found useful is the tomographic ‘vote map’, which provides an indication of lower mantle slab imaging robustness by comparing multiple tomographic models (Shephard et al., 2017). Vote maps can be downloaded from the original paper above or from the SubMachine website (Hosseini et al. (2018); see more in the website list below).
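The vote-map idea can be sketched in a few lines: resample several tomographic models onto a common grid, then count how many of them show a fast (slab-like) anomaly at each point. The toy arrays and the 0.2% dVp threshold below are purely illustrative, not the actual data or criteria of Shephard et al. (2017):

```python
import numpy as np

def vote_map(models, threshold=0.2):
    """Count, at each grid point, how many tomographic models image a
    fast (slab-like) P-wave anomaly above `threshold` (in % dVp)."""
    votes = np.zeros(models[0].shape, dtype=int)
    for dvp in models:
        votes += (dvp > threshold).astype(int)
    return votes

# Three toy "models" resampled onto the same 2x2 grid (values in % dVp)
m1 = np.array([[0.5, 0.1], [0.3, -0.2]])
m2 = np.array([[0.4, 0.3], [0.1, -0.1]])
m3 = np.array([[0.6, -0.1], [0.5, 0.0]])

votes = vote_map([m1, m2, m3])
print(votes)  # high counts mark anomalies on which all models agree
```

A high vote count flags features that are robust across models; in practice, the models must first be interpolated onto a common grid and depth level before counting.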

Using tomography for plate tectonic reconstructions

Tomography has played an increasing role in plate tectonic studies over the past decades. A major reason is that classical plate tectonic inputs (e.g. seafloor magnetic anomalies, palaeomagnetism, magmatism, geology) are independent of the seismological inputs for tomographic images. This means that tomography can be used to augment or test classical plate reconstructions in a relatively independent fashion. For example, classical plate tectonic models can be tested by searching tomography for slab-like anomalies below or near predicted subduction zone locations. These ‘conventional’ plate modelling workflows run into trouble at convergent margins, however, where the geological record has been significantly destroyed by subduction. In these cases, the plate modeller is forced to describe details of past plate kinematics from an overly sparse geological record.

Figure 2: Tomographic plate modelling workflow proposed by Wu et al. (2016). The final plate model in c) is fully kinematic and makes testable geological predictions for magmatic histories, terrane palaeolatitudes and other geology (e.g. collisions) that can be compared against the remnant geology in d), which is relatively independent.

A ‘tomographic plate modelling’ workflow (Fig. 2) was proposed by Wu et al. (2016) that essentially reverses the conventional plate modelling workflow. In this method, slabs are mapped from tomography and unfolded (i.e. retro-deformed) (Fig. 2a). The unfolded slabs are then populated into a seafloor spreading-based global plate model. Plate motions are assigned in a hierarchical fashion depending on the available kinematic constraints (Fig. 2b). The plate modelling results in either a single unique plate reconstruction or several families of possible plate models (Fig. 2c). The final plate models (Fig. 2c) are fully kinematic and make testable geological predictions for magmatic histories, palaeolatitudes and other geological events (e.g. collisions). These predictions can then be systematically compared against remnant geology (Fig. 2d), which is independent of the tomographic inputs (Fig. 2a).

The proposed 3D slab mapping workflow of Wu et al. (2016) assumed that the most robust feature of tomographic slabs is likely the slab centre. The slab mapping workflow involves manually picking a mid-slab ‘curve’ along hundreds (and sometimes thousands!) of variably oriented 2D cross-sections using the GOCAD software (Figs. 3a, b). A 3-D triangulated mid-slab surface is then constructed from the mid-slab curves (Fig. 3c). Inspired by 3D seismic interpretation techniques from petroleum geoscience, the tomographic velocities can be extracted along the mid-slab surface for further tectonic analysis (Fig. 3d).
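Extracting model values along a mapped surface is, at heart, 3D interpolation. Below is a minimal sketch of the idea using SciPy; the synthetic velocity field and the mid-slab vertex coordinates are invented for illustration and bear no relation to the actual GOCAD workflow or the MITP08 model:

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Synthetic tomographic model: dVp (%) on a regular (lon, lat, depth) grid
lon = np.linspace(120.0, 150.0, 31)
lat = np.linspace(10.0, 40.0, 31)
depth = np.linspace(0.0, 700.0, 71)          # km
LON, LAT, DEP = np.meshgrid(lon, lat, depth, indexing="ij")
dvp = np.exp(-((LON - 130.0 - DEP / 100.0) ** 2))  # contrived dipping "slab"

interp = RegularGridInterpolator((lon, lat, depth), dvp)

# Hypothetical mid-slab vertices picked on cross-sections: (lon, lat, depth)
mid_slab = np.array([
    [131.0, 20.0, 100.0],
    [133.0, 22.0, 300.0],
    [135.0, 24.0, 500.0],
])
dvp_on_surface = interp(mid_slab)  # anomaly sampled along the surface
print(dvp_on_surface)
```

In a real application the interpolator would be built from the downloaded model's grid, and `mid_slab` would hold the thousands of picked vertices of the triangulated surface.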


Figure 3: Slab unfolding workflow proposed by Wu et al. (2016) shown for the subducted Ryukyu slab along the northern Philippine Sea plate. The displayed tomography in a), d) and e) is from the MITP08 global P-wave model (Li et al., 2008).

For relatively undeformed upper mantle slabs, a pre-subduction slab size and shape can be estimated by unfolding the mid-slab surface to a spherical Earth model while minimizing distortions and changes to surface area (Fig. 3e). Interestingly, the slab unfolding algorithm can also be applied to shoe design, where there is a need to flatten shoe materials to build cut patterns (Bennis et al., 1991). The three-dimensional slab mapping within GOCAD allows a self-consistent 3-D Earth model of the mapped slabs to be developed and maintained. This has advantages for East Asia (Wu et al., 2016), where many slabs have apparently subducted in close proximity to each other (Fig. 1).
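A simple way to check how well such an unfolding preserves surface area is to compare the total triangle area of the folded surface with that of its flattened counterpart. The sketch below uses a single invented triangle and illustrates only the distortion measure, not the piecewise flattening algorithm of Bennis et al. (1991):

```python
import numpy as np

def triangle_area(p0, p1, p2):
    """Area of a triangle in 3-D from its vertex coordinates."""
    return 0.5 * np.linalg.norm(np.cross(p1 - p0, p2 - p0))

def area_distortion(tris_folded, tris_flat):
    """Relative change in total surface area between a folded
    triangulated surface and its flattened (unfolded) version."""
    a_folded = sum(triangle_area(*t) for t in tris_folded)
    a_flat = sum(triangle_area(*t) for t in tris_flat)
    return abs(a_flat - a_folded) / a_folded

# One toy triangle: gently folded (small z relief) vs. flattened to z = 0
folded = [np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.1], [0.0, 1.0, 0.1]])]
flat = [np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])]
print(area_distortion(folded, flat))  # small value: little area change
```

For a strongly deformed slab the distortion would be large, which is one reason the unfolding approach is restricted to relatively undeformed upper mantle slabs.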

Web resources for displaying tomography

Hades Underworld Explorer : http://www.atlas-of-the-underworld.org/hades-underworld-explorer/

Seismic Tomography Globe : http://dagik.org/misc/gst/user-guide/index.html

SubMachine : https://www.earth.ox.ac.uk/~smachine/cgi/index.php

 

References

Bennis, C., Vezien, J.-M., Iglesias, G., 1991. Piecewise surface flattening for non-distorted texture mapping. Proceedings of the 18th annual conference on Computer graphics and interactive techniques 25, 237-246.

Hosseini, K., Matthews, K.J., Sigloch, K., Shephard, G.E., Domeier, M. and Tsekhmistrenko, M., 2018. SubMachine: Web-based tools for exploring seismic tomography and other models of Earth's deep interior. Geochemistry, Geophysics, Geosystems, 19.

Li, C., van der Hilst, R.D., Engdahl, E.R., Burdick, S., 2008. A new global model for P wave speed variations in Earth's mantle. Geochemistry, Geophysics, Geosystems 9, Q05018.

Shephard, G.E., Matthews, K.J., Hosseini, K., Domeier, M., 2017. On the consistency of seismically imaged lower mantle slabs. Scientific Reports 7, 10976.

Wu, J., Suppe, J., 2018. Proto-South China Sea Plate Tectonics Using Subducted Slab Constraints from Tomography. Journal of Earth Science 29, 1304-1318.

Wu, J., Suppe, J., Lu, R., Kanda, R., 2016. Philippine Sea and East Asian plate tectonics since 52 Ma constrained by new subducted slab reconstruction methods. Journal of Geophysical Research: Solid Earth 121, 4670-4741.

Reproducible Computational Science


 

Krister with his bat-signal shirt for reproducibility.

We’ve all been there – you’re reading through a great new paper and keen to get to the Data Availability section, only to find nothing listed, or the uninspiring “data provided on request”. This week Krister Karlsen, PhD student at the Centre for Earth Evolution and Dynamics (CEED), University of Oslo, shares some context and tips for increasing the reproducibility of your research from a computational science perspective. Spread the good word and reach for the “Gold Standard”!

Historically, computational methods and modelling were considered the third avenue of the sciences; today they stand alongside experimental and theoretical approaches among the most important. Thanks to the rapid development of electronics and theoretical advances in numerical methods, mathematical models combined with strong computing power provide an excellent tool to study what is not available for us to observe or sample (Fig. 1). In addition to enabling simulations of complex physical phenomena on computer clusters, these advances have drastically improved our ability to gather and examine high-dimensional data. For these reasons, computational science is in fact the leading tool in many branches of physics, chemistry, biology, and geodynamics.

Figure 1: Time–depth diagram presenting availability of geodynamic data. Modified from (Gerya, 2014).

A side effect of the improvement of methods for simulation and data gathering is the availability of a vast variety of software packages and huge data sets. This poses a challenge in terms of documentation sufficient to allow a study to be reproduced. With great computing power comes great responsibility.

“Non-reproducible single occurrences are of no significance to science.” – Popper (1959)

Reproducibility is the cornerstone of cumulative science: the ultimate standard by which scientific claims are judged. With replication, independent researchers address a scientific hypothesis and build up evidence for, or against, it. This methodology represents the self-correcting path that science should take to ensure robust discoveries, separating science from pseudoscience. Reports indicate increasing pressure to publish manuscripts whilst applying for competitive grants and positions (Baker, 2016). Furthermore, a growing burden of bureaucracy takes precious time away from designing experiments and doing research. As the time available for actual research decreases, the number of articles that mention a “reproducibility crisis” is rising towards its present-day peak (Fig. 2). Does this mean we have become sloppy in terms of proper documentation?

Figure 2: Number of titles, abstracts, or keywords that contain one of the following phrases: “reproducibility crisis,” “scientific crisis,” “science in crisis,” “crisis in science,” “replication crisis,” “replicability crisis”, found in the Web of Science records. Modified from (Fanelli, 2018).

Are we facing a reproducibility crisis?

A survey conducted by Nature asked 1,576 researchers this exact question: 52% responded “Yes, a significant crisis,” and 38% “Yes, a slight crisis” (Baker, 2016). Perhaps more alarming is that 70% report they have unsuccessfully tried to reproduce another scientist’s findings, and more than half have failed to reproduce their own results. To what degree these statistics apply to our own field of geodynamics is not clear, but it is nonetheless a timely reminder that reproducibility must remain at the forefront of our dissemination. Multiple journals have implemented policies on data and software sharing upon publication to ensure that computational science can be replicated and reproduced. But how well are they working? A recent empirical analysis of journal policy effectiveness for computational reproducibility sheds light on this issue (Stodden et al., 2018). The study randomly selected 204 papers published in Science after the implementation of its code and data sharing policy. Of these articles, 24 contained sufficient information; for the remaining 180 publications the authors had to be contacted directly. Only 131 authors replied to the request; of these, 36% provided some of the requested material and 7% simply refused to share code and data. Evidently the implementation of policies was not enough, and there is still a lot of confusion among researchers when it comes to obligations related to data and code sharing. Some of the anonymized responses highlighted by Stodden et al. (2018) underline this confusion.

Putting aside for the moment that you are, in many cases, obliged to share your code and data to enhance reproducibility: are there any additional motivating factors for making your computational research reproducible? Freire et al. (2012) list a few simple benefits of reproducible research:

1. Reproducible research is well cited. A study by Vandewalle et al. (2009) found that published articles reporting reproducible results have higher impact and visibility.

2. Code and software comparisons. Well documented computational research allows software developed for similar purposes to be compared in terms of performance (e.g. efficiency and accuracy). This can potentially reveal interesting and publishable differences between seemingly identical programs.

3. Efficient communication of science between researchers. New-comers to a field of research can more efficiently understand how to modify and extend an existing program, allowing them to more easily build upon recently published discoveries (this is simply the positive counterpart to the argument made against software sharing earlier).

“Replicability is not reproducibility: nor is it good science.” – Drummond (2009)

I have discussed reproducibility over quite a few paragraphs already, without yet giving it a proper definition. What precisely is reproducibility? Drummond (2009) proposes a distinction between reproducibility and replicability. He argues that reproducibility requires, at a minimum, minor changes in the experiment or model setup, while replication uses an identical setup. In other words, reproducibility refers to a phenomenon that can be predicted to recur with slightly different experimental conditions, while replicability describes the ability to obtain an identical result when an experiment is performed under precisely the same conditions. I think this distinction makes the utmost sense in computational science, because if all software, data, post-processing scripts, random number seeds and so on are shared and reported properly, the results should indeed be identical. However, replicability does not ensure the validity of the scientific discovery. A robust discovery made using computational methods should be reproducible with different software (made for similar purposes, of course) and with small perturbations to the input data such as initial conditions, physical parameters, etc. This is critical because we rarely, if ever, know the model inputs with zero error bars. A way for authors to address such issues is to include a sensitivity analysis of different parameters, initial conditions and boundary conditions in the publication or the supplementary material.

Figure 3: Illustration of the “spectrum of reproducibility”, ranging from not reproducible to the gold standard that includes code, data and executable files that can directly replicate the reported results. Modified from (Peng, 2011).

However, the gold standard of reproducibility in computation-involved science, like geodynamics, is often described as what Drummond would classify as replication (Fig. 3). That is, making all data and code available for others to execute easily. Even though this ensures only replicability, not reproducibility, it gives other researchers a level of detail regarding the workflow and analysis that is beyond what can usually be achieved in common language. And this deeper understanding can be crucial when trying to reproduce (and not replicate) the original results. Thus replication is a natural step towards reproduction. Open-source community codes for geodynamics, such as ASPECT (Heister et al., 2017), and more general FEM libraries such as FEniCS (Logg et al., 2012), allow for friction-free replication of results. An input file describing the model setup provides a 1-to-1 relation to the actual results1 (which in many cases is reasonable because the data are too large to be easily shared). Thus, sharing the post-processing scripts accompanied by the input file on e.g. GitHub will allow for complete replication of the results, at low cost in terms of data storage.
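Since exact replication also depends on knowing the software version, operating system and so on, a lightweight habit is to archive a small provenance record next to the input file and scripts. This is a minimal sketch; the file name and record fields are my own invention, not part of any community standard:

```python
import json
import platform
import subprocess
import sys

def provenance_record(input_file):
    """Gather a minimal provenance record to archive alongside an
    input file and the post-processing scripts."""
    try:
        commit = subprocess.check_output(
            ["git", "rev-parse", "HEAD"],
            text=True, stderr=subprocess.DEVNULL).strip()
    except (subprocess.CalledProcessError, FileNotFoundError, OSError):
        commit = "unknown"  # not inside a git repository, or git missing
    return {
        "input_file": input_file,   # e.g. the model setup file
        "python": sys.version.split()[0],
        "os": platform.platform(),
        "git_commit": commit,
    }

# "convection.prm" is a made-up example of a model input file name
print(json.dumps(provenance_record("convection.prm"), indent=2))
```

Committing such a JSON record together with the input file and scripts costs almost nothing in storage but answers most of the "which version did you run?" questions later.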

Light at the end of the tunnel?

In order to improve practices for reproducibility, contributions will need to come from multiple directions. The community needs to develop, encourage and maintain a culture of reproducibility. Journals and funding agencies can play an important role here. The American Geophysical Union (AGU) has shared a list of best practices regarding research data2 associated with a publication:

• Deposit the data in support of your publication in a leading domain repository that handles such data.

• If a domain repository is not available for some or all of your data, deposit your data in a general repository such as Zenodo, Dryad, or Figshare. All of these repositories can assign a DOI to deposited data; alternatively, use your institution’s archive.

• Data should not be listed as “available from authors.”

• Make sure that the data are available publicly at the time of publication and available to reviewers at submission—if you are unable to upload to a public repository before submission, you may provide access through an embargoed version in a repository or in datasets or tables uploaded with your submission (Zenodo, Dryad, Figshare, and some domain repositories provide embargoed access.) Questions about this should be sent to journal staff.

• Cite data or code sets used in your study as part of the reference list. Citations should follow the Joint Declaration of Data Citation Principles.

• Develop and deposit software on GitHub, where it can be cited, or include simple scripts in a supplement. Code on GitHub can be archived separately and assigned a DOI through Zenodo for submission.

In addition to best practice guidelines, wonderful initiatives from other communities include research prizes. The European College of Neuropsychopharmacology offers an 11,800 USD award for negative results, more specifically for careful experiments that do not confirm an accepted hypothesis or previous result. Another example is the International Organization for Human Brain Mapping, which awards 2,000 USD for the best replication study − successful or not. Whilst not a prize per se, at recent EGU General Assemblies in Vienna the GD community has held sessions around the theme of failed models. Hopefully, such initiatives will lead by example so that others in the community will follow.

1 To obtain the exact same results, information about the software version, compilers, operating system, etc. would typically also be needed.

2 AGU’s definition of data includes all code, software, data, methods and protocols used to produce the results here.

References

AGU, Best Practices. https://publications.agu.org/author-resource-center/publication-policies/datapolicy/data-policy-faq/ Accessed: 2018-08-31.

Baker, Monya. Reproducibility crisis? Nature, 533:26, 2016.

Drummond, Chris. Replicability is not reproducibility: nor is it good science. 2009.

Fanelli, Daniele. Opinion: Is science really facing a reproducibility crisis, and do we need it to? Proceedings of the National Academy of Sciences, 115(11):2628–2631, 2018.

Freire, Juliana; Bonnet, Philippe, and Shasha, Dennis. Computational reproducibility: state-of-the-art, challenges, and database research opportunities. In Proceedings of the 2012 ACM SIGMOD International Conference on Management of Data, pages 593–596. ACM, 2012.

Gerya, Taras. Precambrian geodynamics: concepts and models. Gondwana Research, 25(2):442–463, 2014.

Heister, Timo; Dannberg, Juliane; Gassmöller, Rene, and Bangerth, Wolfgang. High accuracy mantle convection simulation through modern numerical methods. II: Realistic models and problems. Geophysical Journal International, 210(2):833–851, 2017. doi: 10.1093/gji/ggx195. URL https://doi.org/10.1093/gji/ggx195.

Logg, Anders; Mardal, Kent-Andre; Wells, Garth N., and others. Automated Solution of Differential Equations by the Finite Element Method. Springer, 2012. ISBN 978-3-642-23098-1. doi: 10.1007/978-3-642-23099-8.

Peng, Roger D. Reproducible research in computational science. Science, 334(6060):1226–1227, 2011.

Popper, Karl Raimund. The Logic of Scientific Discovery. University Press, 1959.

Stodden, Victoria; Seiler, Jennifer, and Ma, Zhaokun. An empirical analysis of journal policy effectiveness for computational reproducibility. Proceedings of the National Academy of Sciences, 115(11):2584–2589, 2018.

Vandewalle, Patrick; Kovacevic, Jelena, and Vetterli, Martin. Reproducible research in signal processing. IEEE Signal Processing Magazine, 26(3), 2009.

Thirteen planets and counting


Apart from our own planet Earth, there are a lot of Peculiar Planets out there! In this series we take a look at a planetary body or system worthy of our geodynamic attention, and this week we move back to our own solar system. Many of us will clearly remember the downgrading of Pluto from planet status nearly 12 years ago to the month. In this informative and witty post, Laurent Montesi from the University of Maryland makes a case for reinstating Pluto to planetary status, plus a handful of others, or at least for a review of definitions. Bring on Club Planet!

Laurent Montesi

A planet is a celestial body that (a) is in orbit around the Sun, (b) has sufficient mass for its self-gravity to overcome rigid body forces so that it assumes a hydrostatic equilibrium (nearly round) shape, and (c) has cleared the neighbourhood around its orbit. When Resolution 5A was passed by the International Astronomical Union (IAU) during the closing ceremony of its 2006 General Assembly, Pluto was “demoted” from the rank of the true planets to a dwarf planet. Children’s eyes filled with tears over the injustice made to “poor little Pluto”, textbooks were rewritten, and the Nine Pizzas that My Very Excellent Mother Just Served Us turned to Noodles.

I don’t really care.

I see where the IAU came from when crafting this definition, and to some extent, I agree with it. But it is not relevant to me. The thing is, I am not an astronomer. I recognize the authority of the IAU to name geological features on planets and other worlds, but I’m a geologist. Pluto, like many other solar system objects, has too much exciting geology to be ignored!

Figure 1: The five Dwarf Planets currently recognized by the IAU, proud members of Club Planet.

 

To me, a dwarf planet is first and foremost a planet, and what interests me in planets is their geological activity. If, as stated in part b of the IAU definition, an object is able to “overcome rigid body forces” (whatever that means), that should leave a geological trace. I don’t care if the planet cleared its neighbourhood or not.

So, I take the IAU definition as an invitation for Ceres, Pluto, Eris, Haumea, and Makemake to join the exclusive Club Planet (Figure 1). They all bring interesting geology to the Club. Look at the results of the Dawn and New Horizons missions! Ceres has mountains, fractures, oddly hexagonal craters, and a remarkable bright spot beckoning explorers to study its water-rich interior. Pluto has become a superstar of planetary exploration, with oceans of frozen nitrogen, diverse terrains, large rifts, perhaps a giant ice volcano, and the cutest heart tattoo in the solar system.

I’d like to go even further and open the door of the Club to many satellites (Figure 2). Our Moon is the doorway towards understanding the early evolution of terrestrial planets like the Earth. It taught us about giant impacts and magma oceans. If you want to find liquid water (and possibly life) today, go to Europa or Enceladus. If you are looking for alien plate tectonics, check out Ganymede and Europa. Are you searching for a thick atmosphere, rivers, and lakes? Welcome to Titan. Who has the most volcanic activity today? Please stand up, Io. What incredible rifts you have, Miranda and Charon! From a geological standpoint, satellites are as rich as any planet.

Figure 2: Knocking at the door of Club Planet are several of the satellites of the solar system: Earth’s Moon, the four Galilean Satellites Io, Europa, Ganymede, and Callisto, the large moons Titan and Triton, as well as numerous smaller, but geologically interesting satellites. They are led by Pluto’s moon Charon.

 

So, what actually is a planet? To the ancient Greeks, planets were dots of light wandering against the rigid background of the night sky. These dots then turned out to be balls. Galileo saw four satellites around Jupiter, and in the redesigned solar system, planets could only orbit the Sun. Eventually, so many objects were found that it was decided that it mattered whether a planet had “cleared its planetary neighbourhood” or not. Some objects were not enough of a bully to be regarded as full planets, so they were called dwarfs. All along, astronomy guided our thinking about what is a planet and what is not.

Interestingly, the 2006 IAU definition merges astronomy and geophysics: what does it matter to an astronomer that the object has reached hydrostatic equilibrium? That is a geophysical criterion. Perhaps it matters in the sense that the interior is fluid enough that one should consider how dissipation influences orbital evolution. If that is the case, though, can tidal interaction with satellites be regarded separately?

I don’t know why the IAU was interested in hydrostatic equilibrium, or even if that is a valid question to consider, because, once again, I am not an astronomer. I’m a geologist. I study the geological activity and the interior evolution of… well… planets… and dwarf planets… and satellites… perhaps exoplanets one day… although not the ice giants and gas giants because, as far as I am concerned, they are different beasts altogether.

The fact is, the IAU definition does not help me. Perhaps there could be a geological definition of a planet, or whatever you want to call the various objects I am interested in. Perhaps the International Union of Geodesy and Geophysics (IUGG) — which, like the IAU, is a member of the International Science Council — could propose a definition more in line with my research interests, but as far as I know, there is no discussion of that.

In the meantime, resistance to the IAU definition is growing in our community. David Grinspoon and Alan Stern recently published a Perspective in The Washington Post1. Around twenty scientists got together to discuss a “Geophysical Planet Definition” at the start of the 2018 Lunar and Planetary Conference. One major point of agreement was that no one should feel obligated to follow the IAU’s definition (we are all rebels now), or any other definition.

At the 2017 Lunar and Planetary Conference, Kirby Runyon and coworkers proposed the following “Geophysical Planet Definition”2: A planet is a sub-stellar mass body that has never undergone nuclear fusion and that has sufficient self-gravitation to assume a spheroidal shape adequately described by a triaxial ellipsoid regardless of its orbital parameters. I find there is a lot to like in this proposal. For example, it would allow me to consider satellites as planets. If I focus on internal evolution, it doesn’t really matter what object my planet is orbiting. Of course, this influences the possibility of tidal heating, but I can regard that as an external energy flux, like the energy of accretion for impacts.

Interestingly, the draft “Geophysical Planet Definition” does not explicitly mention hydrostatic equilibrium. In the IAU definition, the hydrostatic equilibrium criterion implies that planets have a minimum size. It also assumes that the planet behaves as a fluid. In that case, what are we to do with the solid planets, like the Earth? We have evidence of frozen hydrostatic bulges, especially for the Moon. In other words, geological bodies can be strong enough to support a significant deviation from hydrostatic equilibrium. Hydrostatic equilibrium is not the best way to define a planet from a geological standpoint.

Figure 3: Ratio of relief scaled by planetary radius against mean radius based on best fitting triaxial ellipsoid for a variety of solar system objects, drawn following Melosh (2011). The maximum relief is controlled by friction for objects smaller than ~100 km in diameter and by strength for larger objects. Note that some objects like Mercury and Venus do not appear on this graph as they have no measurable flattening, due to their small rotation rate. Gas and ice giants appear to deviate from the trend of solid planets.

Where the IAU definition focuses on the driving force, it may instead be useful to focus on the strength of the planet. In his Planetary Surface Processes book, Jay Melosh discusses the relation between strength and gravity3. He concludes that for small bodies, relief (quantified as the difference between the maximum and minimum radius of an object, divided by the average radius) is independent of size, whereas it decreases inversely with the square of the average radius for larger solar system objects. In these larger objects, relief is limited by the strength of the body. The transition between these two trends is a planetary diameter of 200 to 400 km (Figure 3). This division leaves all of the objects for which we have evidence of geological activity driven by internal processes safely within the category of planets. Ancient planetesimals were probably big enough to be regarded as planets, and indeed, evidence for internal differentiation suggests that their interior was quite active.
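The quantity plotted in Figure 3 is easy to compute: relief (maximum minus minimum radius of the best-fitting triaxial ellipsoid) divided by the mean radius. The sketch below uses rounded, approximate radii for two bodies purely to illustrate the contrast across the ~200-400 km transition:

```python
def scaled_relief(r_max_km, r_min_km, r_mean_km):
    """Relief (max minus min radius of the best-fitting triaxial
    ellipsoid) divided by the mean radius, as plotted in Fig. 3."""
    return (r_max_km - r_min_km) / r_mean_km

# Rounded, approximate ellipsoid radii (km), for illustration only
bodies = {
    "Phobos": (13.4, 9.1, 11.3),       # small, friction-dominated body
    "Moon": (1738.1, 1736.0, 1737.4),  # large body with tiny scaled relief
}
for name, (r_max, r_min, r_mean) in bodies.items():
    print(f"{name}: {scaled_relief(r_max, r_min, r_mean):.4f}")
```

Phobos comes out with a scaled relief of tens of percent, the Moon with roughly a tenth of a percent: three orders of magnitude apart, which is the contrast the friction-versus-strength division in Figure 3 captures.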

So, in my view, a planet is simply a body large enough to have small relief as compared to its radius. This is evidence of relatively low internal strength, which allows geological activity to take place. I don’t need to consider where it orbits, and if it cleared its “planetary neighbourhood” or not, as that doesn’t affect geology. The pitfall of my very inclusive view of what is a planet is the consequentially large number of objects to consider, but variety is the spice of life. Why limit the diversity of geological activity to consider?

There can be subcategories, as Alan Stern actually advocated: gas giants, ice giants, terrestrial planets, dwarf planets, satellite planets, even exoplanets. From a geological standpoint, the ones I am least likely to study are actually the giant planets, whose activity is dominated by atmospheric processes. But feel free to consider them.

Perhaps I should leave the term “planets” to the astronomers and advocate instead for a new term: “geological worlds”. Whatever the terminology, the classification you choose to adopt should suit the research you do. For me, I want to embrace the geological diversity of our solar system.

Further reading: 

David Grinspoon and Alan Stern (2018), Yes, Pluto is a planet, Speaking of Science – Perspective, Washington Post, May 7

Runyon, K.D., S.A. Stern, T.R. Lauer, W. Grundy, M.E. Summers, and K.N. Singer (2017), A Geophysical Planet Definition, Lunar and Planetary Science XLVIII, Abstract 1448

Jay Melosh (2011) Planetary Surface Processes, Chapter 3, ISBN 9780511977848 

Postcard from Tokyo: JpGU2018 conference


Konnichiwa from Tokyo and JpGU2018!

This week, 20-24 May, the Japanese Geoscience Union (JpGU) is holding its annual union meeting just outside of Tokyo, in Chiba (about 40 minutes by metro). I am fortunate enough to be on a research visit to the Earth-Life Science Institute (ELSI) at Tokyo Tech over on the other side of the city and so attending JpGU was a bonus. It is my first time in attendance and I was very interested to see the program and thematics, and meet some of the wider Japanese geoscience community.

JpGU poster and exhibitor hall

As a national body, JpGU naturally focuses on Japanese geoscience specialties and interests. The Japanese language also featured heavily: at abstract submission, the author selects which language the presentation will be in, as sessions can be in English and/or Japanese, and attendees were notified in advance via the final program’s language codes. Last year there were over 8,000 participants and 5,600 presentations, and the meeting comprises oral-plus-poster sessions and poster-only sessions. The meeting encompasses “all the Earth and Planetary Sciences disciplines and related fields” and would include Geodynamics under the “Solid Earth” section. Within this section there were 15 sessions (all in English), including Planetary cores: Structure, formation, and evolution; Probing the Earth’s interior with geophysical observation and on seafloor; Structure and Dynamics of Earth and Planetary Mantles; and Oceanic and Continental Subduction Processes, to name a few.

As with the EGU General Assembly, it is a five-day conference, but notably shifted to run from Sunday to Thursday. While this came at the cost of a Sunday sleep-in, the weekend start meant that high school students were able to attend and even present their own posters. Some of the union sessions were also open to the public free of charge (no doubt an unexpected windfall for some of the people at what seemed to be a furniture and toy convention next door). The week also included an awards ceremony, including the JpGU Union-level “Miyake Prize”, which was awarded to Professor Eiji Ohtani from Tohoku University. For early career attendees, there were 5-minute pop-up bar talks for ECRs under 35 years of age, with the lure of a free t-shirt and a beverage, as well as a student lounge.

JpGU2018 awardees and new Fellows

There were quite a few outreach and skill-building sessions, including “Mental care and Communication Strategies for Researchers”, “Kitchen Earth Science: brain stimulation by hands-on experiments”, “Role of Open Data and Science in the Geosciences”, “Employment and work balance of female geoscientists in Japan” and an exciting “Collaboration and Co-creation between Geoscience and Art”. There were also a number of exhibitors, including our very own Philippe Courtial, Executive Secretary of EGU, who was a panel speaker in the AGU/EGU/JpGU joint session “Ethics and the Role of Scientific Societies – Leadership Perspectives”. I also found out there is a relatively new open-access journal for JpGU called Progress in Earth and Planetary Science (PEPS) (note: a 1000 EUR APC for non-JpGU members, or 200 EUR for members).

Left: NASA hyperwall and presentation to high school students. Right: Philippe Courtial at the EGU booth

Science aside, my visit to Japan has been a multi-sensory delight, and I can only recommend coming back here in a scientific and/or tourist capacity! If you would like to combine your own travels with the next JpGU, the dates are:

  • May 26-30 2019, Chiba
  • May 24-28 2020, Chiba
  • May 30-June 3 2021, Yokohama

ありがとうございます! (Thank you very much!)

Plenty of fabulous sights, sounds and smells!