When you think about fossils, lizards might not be one of the first groups that springs to mind. However, they have a pretty neat fossil record, stretching back over 150 million years. One group of lizards, the iguanians, is still around today and comprises about 1,700 different species! One sub-group of iguanians, the acrodonts, is thought to have originated in east Gondwana – part of the ‘old world’ including Africa. Acrodonts are named after a weird feature of their jaws: their teeth are fused to the apex of the jaws, rather than to the inner surface.
SVPCA 2015
This year, the 63rd Symposium of Vertebrate Palaeontology and Comparative Anatomy is taking place alongside the 24th Symposium of Palaeontological Preparation and Conservation with the Geological Curators’ Group (what a mouthful…), at the National Oceanography Centre in Southampton.
I don’t have much to say about this conference, as I’m heading to its international cousin, SVP (the Society of Vertebrate Paleontology meeting) in Dallas soon, but it’s a pretty cool event. Most importantly, the abstracts are all freely available to read online here in advance. There’s a great range of research, mostly from researchers based in the UK, and it’s a great chance to see some of the fab things that people in the field are working on here. Enjoy!
A thought on impact factors
OK, bear with me on this one. It’s a bit of a thought dump, but it would be interesting to see what people think.
You can’t go anywhere in academia these days without hearing about impact factors. An impact factor is a metric assigned to a journal that measures the average number of citations per article over the preceding two-year interval. It was originally designed to help libraries identify which journals were actually being used by academics in their research, and therefore which subscriptions they could let lapse. However, in modern-day academia, it is often used to measure the individual ‘impact’, or quality, of a single paper within a journal – that is, the metric assigned to a journal is used as a proxy for the value of each article inside it. This doesn’t make much sense on the face of things, especially when you hear stories about how much impact factors are gamed (read: purchased) by journals and their publishers (see link below), to the extent that they are at best meaningless and at worst complete lies.
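For concreteness, here is a rough sketch of the standard two-year calculation (the actual Journal Citation Reports formula has additional rules about what counts as a ‘citable item’, so treat this as an approximation):

\[
\mathrm{IF}_{2015} \approx \frac{\text{citations received in 2015 to items the journal published in 2013 and 2014}}{\text{number of citable items the journal published in 2013 and 2014}}
\]

So, for example, a journal that published 200 articles across 2013–2014, and whose articles picked up 600 citations in 2015, would report an impact factor of about 3.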
The evidence suggests that the only thing impact factors and journal rank reliably reflect is academic malpractice – that is, fraud. The higher the impact factor, the higher the probability that there has been data fudging of some sort (or the higher the probability of such practice being detected). A rather appealing option seems to be to do away with journals altogether, and replace them with an architecture built within universities that removes all the negative aspects of assessment by impact factors, while at the same time removing power from profit-driven parasitic publishers. It’s not really too much of a stretch of the imagination – for example, Latin America already uses the SciELO platform to publish its research, and is free from the potential negative consequences of the impact factor. University College London also recently established its own open access press, the first of its kind in the UK. The Higher Education Funding Council for England (HEFCE) recently released a report on the role of metrics in higher education, finding that the impact factor was too often misused or ‘gamed’ by academics, and recommended its discontinuation as a measure of personal assessment. So there is plenty of evidence that we are moving away from a system dominated by impact factors and commercial publishers (although see this post by Zen Faulkes).
But I think there might be a hidden aspect of impact factors that has often been overlooked, and that is difficult to measure. Hear me out.
Impact factors, whether we like it or not, are still used as a proxy for quality. Everyone equates a higher impact factor with a better piece of research. We do it automatically as scientists, irrespective of whether we’ve even read the article. How many times do you hear, “Oh, you got an article in Nature – nice one!”? I’m never really sure whether this means ‘well done for publishing good work’ or ‘well done for beating the system and getting published in a glamour magazine’. Either way, this is natural now within academia; it’s ingrained into the system (and by ‘system’, I include people). The flip side is that researchers, following this practice, submit the research they perceive to be of ‘higher quality’ (irrespective of any subjective ties or a priori sense of what this might mean) to higher impact factor journals. The inverse is also true – research perceived to be less useful in terms of results, or of lower quality, gets sent to lower impact factor journals. Quality in this case can refer to any combination of things – strong conclusions, a good data set, relevance to the field.
Now, I’m not trying to defend the impact factor and its use as a personal measure for researchers. But what if, because of this, it captures some qualitative aspect of quality? Instead of thinking “It’s been published in this journal, therefore it’s high quality”, rethink it as “This research is high quality, therefore I’m going to submit it to this journal.” Researchers know journals well, and they submit to venues for numerous reasons – among them the appropriateness of that venue based on its publishing history and subject matter. If a journal publishes hardcore quantitative research, large-scale meta-analyses and the like, then it’s probably going to accrue more citations because it’s of more ‘use’ – more applicable to a wider range of subjects or projects.
For example, in my field, palaeontology, research typically published in high impact factor journals involves fairly ground-breaking new studies on developmental biology, macroevolution, or extinctions – large-scale patterns that offer great insight into the history of life on Earth. On the other hand, research published in lower impact factor journals might be more technical and specialist, or concern descriptive taxonomy or systematics – the naming of a new species, for example. An obvious exception to this is anything with feathers, which makes its way into Nature irrespective of its actual value in progressing the field (I’ll give you a clue: no-one cares about new feathered dinosaurs any more. Get over it, Nature).
So I’ll leave you with a question: do you submit to higher impact factor journals if you think your research is ‘better’ in some way? And following on from this, do you think that impact factors capture a qualitative aspect of research quality that you don’t really see if you only think about what impact factors mean in a post-publication context? Thoughts below! Feel free to smash this thought to shreds.
The Society of Vertebrate Paleontology 2015
It’s conference season! Wooooo!! The annual meeting of the Society of Vertebrate Paleontology is looming, and is taking place in Dallas this year. Damn, I love Texas! This meeting brings together the finest minds in vertebrate palaeontology from around the world, and covers the whole spectrum of this vast field. The abstract book is now online and freely available as a PDF here.