
The perfect ice floe

Current position: 89°31.85 N, 62°0.45 E, drifting with a multi-year ice floe (24th August 2018)

A little more than three weeks into the Arctic Ocean 2018 Expedition, the team has found the right ice floe and settled into routine operations.

Finding the perfect ice floe for an interdisciplinary science cruise is not an easy task. The Arctic Ocean 2018 Expedition aims to understand the linkages between the sea, microbial life, the chemical composition of the lower atmosphere and clouds (see previous blog entry) in the high Arctic. This means that the “perfect floe” needs to serve a multitude of scientific activities: sampling from open water, drilling ice cores, setting up a meteorological tower, installing balloons, driving a remotely operated vehicle, measuring fluxes from open leads and sampling air uncontaminated by the expedition’s activities. The floe hence needs to be composed of multi-year ice, thick enough to carry all installations but thin enough to drill through. There should also be an open lead large enough for the floating platforms, and the shape of the floe needs to be such that the icebreaker can be moored against its port or starboard side facing any of the four cardinal directions, depending on where the wind is coming from.

The search for the ice floe actually turned out to be more challenging than expected. The tricky task was not only to find a floe that would satisfy all scientific needs; getting to it north of 89°N proved exceptionally difficult this year. After passing the marginal ice zone north of Svalbard (see the blue line on the track in Figure 2), progress through the first-year ice was relatively easy. Advancing at roughly 6 knots, that is about 12 km/h, we made good headway. After a couple of days, however, the ice unexpectedly became up to three meters thick. This made progress difficult and slow, even for Oden with her 24,500 horsepower. In such conditions the strategy is to send a helicopter ahead to scout for a convenient route through cracks and thinner ice. However, persistent fog kept the pilot from taking off, which meant the expedition had to sit and wait in the same spot. For us aerosol scientists looking at aerosol-cloud interactions, this was a welcome occasion to get our hands on the first exciting data. In the meantime, strong winds from the east pushed the pack ice together even harder, producing ridges that are hard for the ship to overcome. But with a bit of patience and improved weather conditions, we progressed northwards, keeping our eyes open for the floe.

Figure 2: Cruise track with drift. The light red line indicates the track to the ice floe, the dark red line indicates the drift with the floe. The thin blue line is the marginal ice zone from the beginning of August.

As it happened, we met insurmountable ice conditions at 89°54’ N, 38°32’ E, just about 12 km from the North Pole – reason enough to celebrate our farthest north.

Figure 3: Expedition picture at the North Pole. (Credit: SPRS)

Heading back south from there, with helicopter flights and good visibility, it took just over a day until we finally found ice conditions featuring multiple floes.

And here we are. After a week of intense mobilization on the floe, the four sites on the ice and the instrumentation on the ship are now in full operation and routine, if you stretch the meaning of the term a bit, has taken over. A normal day looks approximately like this:

7:45: breakfast, meteorological briefing, information about the plan of the day
8:30 – 9:00: heavy lifting of material from the ship to the ice floe with the crane
9:00 (or later): weather permitting, teams go to their sites; CTDs are cast from the ship if the aft is not covered by ice
11:45: lunch for all on board and picnic on the floe
17:30: end of activities on the ice; lifting of the gangway to prevent polar bear visits on the ship
17:45: dinner
Evening: science meetings, data crunching, lab work or recreation

Figure 4: Sites on the floe, nearby the ship. (Credit: Mario Hoppmann)

At the balloon site, about 200 m from the ship, a balloon and a heli-kite are lifted alternately to take profiles of radiation, basic meteorological variables and aerosol concentrations. Other instruments are lifted up to sit for hours in and above clouds to sample cloud water and ice nucleating particles, respectively. At the met alley, a 15 m tall mast carries radiation and flux instrumentation to characterize heat fluxes in the boundary layer. The red tent at the remotely operated vehicle (ROV) site houses a pool through which the ROV dives under the floe to measure physical properties of the water. The longest walk, about 20 minutes, is to the open lead site, where a catamaran takes sea surface microlayer samples, a floating platform observes aerosol production and cameras image underwater bubbles. The ice core drilling team visits different sites on the floe to take samples for microbial and halogen analyses.

Open Lead site. (Credit: Julia Schmale)

Importantly, all activities on the ice need to be accompanied by bear guards. Everybody carries a radio and needs to report when they go off the ship and come back. If the visibility decreases, all need to come in for safety reasons. Lab work and continuous measurements on the ship happen throughout the day and night. More details on the ship-based aerosol laboratory follow in the next contribution.

Edited by Dasaraden Mauree


Julia Schmale is an atmospheric scientist at the Paul Scherrer Institute in Switzerland. Her research focuses on aerosol-cloud interactions in extreme environments. She is a member of the Atmosphere Working Group of the International Arctic Science Committee and a member of the Arctic Monitoring and Assessment Programme Expert Group on Short-lived Climate Forcers.

Buckle up! It’s about to get bumpy on the plane.

Clear-Air Turbulence (CAT) is a major hazard to the aviation industry. If you have ever been on a plane, you have probably heard the pilots warn that clear-air turbulence could occur at any time, so always wear your seatbelt. Most people will have experienced it for themselves and wanted to grip their seat. However, severe turbulence capable of causing serious passenger injuries is rare. It is defined as vertical motion of the aircraft strong enough to lift anyone not wearing a seatbelt out of their seat, or off the floor if they are standing. In the United States alone, turbulence costs over 200 million US dollars in compensation for injuries, with people being hospitalised with broken bones and head injuries. Besides the passengers who suffer serious injuries, the cabin crew are the most vulnerable, as they spend most of their time on their feet serving customers. This results in an additional cost if they are injured and unable to work.

Clear-air turbulence is defined as high-altitude in-flight bumpiness away from thunderstorm activity. It can appear out of nowhere at any time and is particularly dangerous because pilots cannot see it or detect it using on-board instruments. Usually the first time a pilot is aware of the turbulence is when they are already flying through it. Because it is a major hazard, we need to know how it might change in the future, so that the industry can prepare if necessary. This could mean improving forecasts so that pilots can avoid regions likely to contain severe turbulence, or making sure aircraft can withstand more frequent and severe turbulence.

Our new paper published in Geophysical Research Letters, ‘Global Response of Clear-Air Turbulence to Climate Change’, aims at understanding how clear-air turbulence will change in the future around the world and throughout the year. Our study found that the busiest flight routes around the world would see the largest increase in turbulence. For example, the North Atlantic, North America, the North Pacific and Europe (see Figure 1) will see a significant increase in severe turbulence, which could cause more problems in the future. These regions see the largest increase because of the Jet Stream, a fast-flowing river of air found in the mid-latitudes. Clear-air turbulence is predominantly caused by the wind travelling at different speeds around the Jet Stream. Climate change is expected to increase the Jet Stream speed and therefore increase the vertical wind shear, causing more turbulence.
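To make the shear mechanism concrete, here is a minimal sketch of how vertical wind shear and the gradient Richardson number – a classic indicator of shear-driven instability, though not necessarily the diagnostic used in our paper – can be computed from winds and potential temperatures at two flight levels. All numerical values are illustrative assumptions.

```python
G = 9.81  # gravitational acceleration [m s^-2]

def richardson_number(theta1, theta2, u1, u2, z1, z2):
    """Gradient Richardson number Ri = N^2 / (du/dz)^2 between two levels.
    Low Ri (below roughly 0.25) indicates that the wind shear can overcome
    the stable stratification, favouring the instabilities behind CAT."""
    dz = z2 - z1
    n_squared = (G / (0.5 * (theta1 + theta2))) * (theta2 - theta1) / dz
    shear = (u2 - u1) / dz
    return n_squared / shear**2

# Illustrative (assumed) values near a jet stream: potential temperature [K]
# and wind speed [m/s] at 10 km and 11 km altitude.
ri = richardson_number(theta1=330.0, theta2=333.0,
                       u1=45.0, u2=60.0,
                       z1=10_000.0, z2=11_000.0)
print(f"Ri = {ri:.2f}")  # a stronger shear (larger u2 - u1) lowers Ri
```

A faster jet sharpens the wind difference across the same depth of atmosphere, so the denominator grows and Ri drops – which is, in essence, why a stronger jet stream favours more clear-air turbulence.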

To put these findings in context, severe turbulence in the future will be as frequent as moderate turbulence was historically. Anyone who is a frequent flyer will likely have experienced moderate turbulence at some point, but fewer people have experienced severe turbulence. This study suggests that will change in the future, with most frequent flyers experiencing severe turbulence on some flight routes, as well as even more moderate turbulence. Our study also found that moderate turbulence will become as frequent in summer as it historically has been in winter. This is significant because, although clear-air turbulence is more likely in winter, it will now become much more of a year-round phenomenon (see Figure 2).

Figure 2: Seasonal variation in turbulence intensity.

This increase in clear-air turbulence highlights the importance of improving turbulence forecasting. Current research has shown that using ensemble forecasts (many forecasts of the same event), and using more turbulence diagnostics than the one we used in this study, can improve forecast skill. With better forecasts, flights could consistently avoid areas of severe turbulence, or passengers and crew could be seat-belted before a turbulence event occurs. As these improvements are not yet fully operational, you can still reduce your own risk of injury by wearing your seat belt as much as possible so that, if the aircraft does hit unexpected turbulence, you avoid serious injury.


This blog has been prepared by Luke Storer (@LukeNStorer), Department of Meteorology, University of Reading, Reading, UK and edited by Dasaraden Mauree (@D_Mauree). 

How can we use meteorological models to improve building energy simulations?

Climate change calls for varied and multiple approaches to the adaptation of cities and the mitigation of the coming changes. Because buildings (residential and commercial) are responsible for about 40% of energy consumption, it is necessary to build more energy-efficient ones to decrease their contribution to greenhouse gas emissions.

But what is the relation with the atmosphere? It is twofold: firstly, in a previous post, I described the impact of buildings and other obstacles on the air flow and on the air temperature. Secondly, because the surrounding climate is modified, there is a significant change in the energy consumption of these buildings. Currently, building energy simulation tools use data usually gathered outside of the city, and hence not representative of the local context. It is thus crucial to have tools that capture both the dynamics of the atmosphere and those of a building, in order to design better and more sustainable urban areas.

In the present work, we have brought these two disciplines together by developing a multi-scale model. On the one hand, a meteorological model, the Canopy Interface Model (CIM), was developed to obtain high-resolution vertical profiles of wind speed, wind direction and air temperature. On the other hand, a building energy modelling tool, CitySim, is used to evaluate the energy use of the buildings as well as the irradiation reaching them.
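To illustrate the idea of the coupling, here is a schematic sketch of such an exchange loop. The class and method names are hypothetical stand-ins (the real CIM and CitySim interfaces differ), and the physics is reduced to toy expressions; only the flow of information between the two models is the point here.

```python
# Schematic sketch of a canopy-model / building-model coupling loop.
# CanopyModel and BuildingModel are hypothetical stand-ins for CIM and
# CitySim; the numbers below are toy values, not model parameters.

class CanopyModel:
    """Stand-in for CIM: refines coarse forcing into canopy-level weather."""
    def step(self, forcing_temp, anthropogenic_flux):
        # Toy urban effect: waste heat from buildings warms the canopy air,
        # while obstacles slow the wind relative to the rural forcing.
        local_temp = forcing_temp + 0.01 * anthropogenic_flux
        local_wind = max(0.5, 3.0 - 0.005 * anthropogenic_flux)
        return local_temp, local_wind

class BuildingModel:
    """Stand-in for CitySim: energy demand driven by local weather."""
    def step(self, local_temp, local_wind, setpoint=20.0):
        # Toy heating demand: losses grow with the indoor-outdoor gap and
        # with wind-driven infiltration; 30% of the demand is assumed to
        # end up as heat released back to the canopy.
        ua = 50.0 + 5.0 * local_wind
        demand = max(0.0, ua * (setpoint - local_temp))
        return demand, 0.3 * demand

def run_coupled(canopy, buildings, forcing_temps):
    """Exchange variables between the two models at every time step."""
    flux, demands = 0.0, []
    for temp in forcing_temps:                       # e.g. hourly forcing
        t_loc, v_loc = canopy.step(temp, flux)       # met model -> buildings
        demand, flux = buildings.step(t_loc, v_loc)  # buildings -> met model
        demands.append(demand)
    return demands

print(sum(run_coupled(CanopyModel(), BuildingModel(), [2.0, 1.0, 4.0])))
```

The key design point is the two-way exchange: the canopy model turns coarse forcing into local weather for the buildings, while the buildings return the heat they release, which in turn modifies the canopy climate at the next step.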

We applied this coupling methodology to the EPFL campus in Switzerland and compared the modelling results with data collected on the campus for the year 2015. The results show that the coupling leads to meteorological variables that are in very good agreement with the observations. However, we noted that the wind speed at 2 m is still somewhat underestimated. One reason for this is that the wind speed close to the ground is very low and there is a higher variability at this height.

Comparison of the wind speed (left) and air temperature (right) at 2m (top) and 12m (bottom).

We intend to improve this in the future by developing new parameterizations for the wind speed in an urban context, using data currently being acquired in the framework of the MoTUS project. One surprising result from this part of the study is the appearance, inside an urban setup, of a phenomenon called cold air pools, which is very typical of valleys. The reason for this is the lack of irradiation reaching the surface in dense urban areas.

Furthermore, we have seen some interesting behaviour on the campus for particular buildings such as the Rolex Learning Center: buildings with different forms and configurations reacted very differently to the local and standard datasets. We designed a series of additional simulations using multiple building configurations and conducted a sensitivity analysis to determine which of wind speed and air temperature has the more significant impact on heating demand (see Figure 1). We showed that the impact of a 1°C reduction in air temperature was more important than that of a 1 m s-1 reduction in wind speed.

Figure 1. Heating demand of the five selected urban configurations (black dots), as a function of the variation of the air temperature by +1°C (red dots) and -1°C (blue dots), and of the wind speed by +1.5 m s-1 (violet dots) and -1.5 m s-1 (orange dots).
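As a rough illustration of this kind of sensitivity experiment, the sketch below perturbs a synthetic year of weather by ±1°C and ±1.5 m s-1 and recomputes a toy heating demand. The demand model and its coefficients are illustrative assumptions, not CitySim parameters; they are merely chosen so that, as in our study, the temperature perturbation dominates.

```python
import numpy as np

def heating_demand(temps, winds, setpoint=20.0, ua=80.0, wind_coeff=3.0):
    """Toy annual heating demand: heat loss grows with the indoor-outdoor
    temperature gap, plus a wind-driven infiltration term.
    All coefficients are illustrative assumptions."""
    loss = (ua + wind_coeff * winds) * np.clip(setpoint - temps, 0.0, None)
    return loss.sum()

rng = np.random.default_rng(42)
temps = rng.normal(8.0, 6.0, size=8760)          # synthetic hourly temperature [degC]
winds = np.abs(rng.normal(2.0, 1.0, size=8760))  # synthetic hourly wind speed [m/s]

base = heating_demand(temps, winds)
for label, t, w in [("T +1.0 degC", temps + 1.0, winds),
                    ("T -1.0 degC", temps - 1.0, winds),
                    ("v +1.5 m/s ", temps, winds + 1.5),
                    ("v -1.5 m/s ", temps, np.clip(winds - 1.5, 0.0, None))]:
    change = heating_demand(t, w) / base - 1.0
    print(f"{label}: {change:+.1%} heating demand")
```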

Finally, we also analysed the energy consumption of the whole EPFL campus. When using standard data, the difference between the simulated and measured demand was around 15%; with localized weather data, the difference decreased to 8%. We have thus been able to reduce the uncertainty by roughly a factor of two. The use of local data can hence improve the estimation of building energy use, which will become ever more important as buildings become more and more efficient.

Reference / datasets

The paper (Mauree et al., Multi-scale modelling to evaluate building energy consumption at the neighbourhood scale, PLOS One, 2017) can be accessed here and data needed to reproduce the experiment are also available on Zenodo.

The art of turning climate change science into a crochet blanket

We welcome a new guest post from Prof. Ellie Highwood on why she made a global warming blanket and how you could do the same!

What do you get when you cross crochet and climate science?
A lot of attention on Twitter.
At the weekend I like to crochet. Last weekend I finished my latest project and posted the picture on Twitter. And then had to turn the notifications off because it all went a bit noisy. The picture of my “global warming blanket” rapidly became my top tweet ever, with more retweets and likes than anything else. Apparently I had found a creative way to visualise trends in global mean temperature. I particularly liked the “this is the most frightening knitwear I have seen all year” comment. Given the interest on Twitter I thought I had better answer a few of the questions in a blog post. Also, it would be great if global warming blankets appeared all over the world.

How did you get the idea?
The global warming blanket was based on “temperature” blankets made by crocheters around the world. Their blankets consist of one row, or square, of crochet each day, coloured according to the temperature at their location. They look amazing and show both the annual cycle and day-to-day variability. Other people make “sky” blankets where the colours are based on the sky colour of the day – this results in a more muted grey-blue-white colour palette.
I wondered what the global temperature series would look like as a blanket. Also, global warming is often explained as greenhouse gases acting like a blanket, trapping infrared radiation and keeping the Earth warm, so that seemed like an interesting link. I had also done several rainbow-themed blankets in the past and had a lot of leftover yarn that needed using.

Where did the data come from?

I used the annual global mean temperature anomaly relative to the 1900-2000 reference period, as available from NOAA. This is what the data look like when shown more conventionally.

Global temperature anomalies (source: NOAA)

I then devised a colour scale using 15 different colours, each representing a 0.1 °C interval; everything between 0 and 0.099 °C was one colour, for example. With a code for these colours, the time series can be rewritten as in the table below. It is up to the creator to choose the colours to match this scale, and indeed which years to include. I was making a baby-sized blanket, so I chose the last 100 years, 1916-2016.
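For anyone who would rather script this step than do it by hand, here is a minimal sketch of the binning, assuming the NOAA anomalies have been saved as a simple two-column file (the filename, column names and scale origin below are all hypothetical):

```python
# Map annual global-mean temperature anomalies to 15 colour bins of
# 0.1 degC each. Assumes a CSV with "year" and "anomaly" columns.
import csv

BIN_WIDTH = 0.1      # degC per colour
COLDEST_EDGE = -0.7  # anomaly mapped to colour 1 (assumed origin of the scale)
N_COLOURS = 15

def colour_code(anomaly):
    """Return a colour code 1..15; e.g. 0 to 0.099 degC all share one colour."""
    idx = int((anomaly - COLDEST_EDGE) // BIN_WIDTH)
    return min(max(idx, 0), N_COLOURS - 1) + 1   # clamp to the palette, 1-based

with open("noaa_annual_anomalies.csv") as f:     # hypothetical filename
    for row in csv.DictReader(f):
        print(row["year"], colour_code(float(row["anomaly"])))  # one crochet row per year
```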

Because of these choices, and the long reference period, much of the blanket has relatively muted colour differences, which tends to emphasise the last 20 years or so. There are other datasets available, and other reference periods, and it would be interesting to see what they would look like. The colours I used were determined mainly by what I had available; if I were to do another one, I might change a few around (dark pink looks too much like red in the photograph, and the coldest colour needed a darker blue instead of purple), or even use a completely different colour palette – especially as rainbow colour scales aren’t great: they can distort data and render it meaningless if you are colour blind. Ed Hawkins kindly provided me with a more user-friendly colour scale which I love and may well turn into a scarf for myself (much quicker than a blanket!).

#endrainbow colour scale (from E. Hawkins)

How can I recreate this?
If you want to create something similar, you will need 15 different colours if you want to do the whole 1850-2016 period. You will need relatively more yarn in colours 3-7 than in the others (if, like me, you are using your stash). You can use any stitch or pattern, but since you want the colour changes to be the focus of the blanket, I would choose something relatively simple. I used rows of treble crochet (UK terms) and my 100 years ended up being about 90 cm by 110 cm. You can of course choose any width you like for your blanket, or make a scarf by doing a much shorter foundation row. It goes without saying that it could also be knitted. Or painted. Or woven. Or whatever your particular craft is.

If you look closely (check out the arrows on the figure at the top) you can see the 1997-1998 El Niño (a relatively warm yellow stripe amongst the pink – in this photo the dark pink looks red – I might change this colour if I did it again) and the 1991/92 Pinatubo eruption (a relatively cool pink year), as well as the cool periods around 1929 and 1954-56 and the relatively warm 1940-46. Remember that these are global temperature anomalies and may not match your own personal experience at a given location!

Table with the colour codes used to make the global warming blanket

How long did it take?
I used a very simple stitch, so for a blanket this size it took a couple of months (note I only crochet for a couple of hours on 2 or 3 evenings a week, with more at some weekends). It helped that the Champions League was on during this time, as other members of the household were happy to sit around watching football whilst I crocheted. Weave the ends in as you go – there are a lot of them, and I had to do them all at the end. The time flies because….

Why do I crochet?
I like crochet because you can do simple projects whilst thinking about other things, watching TV or listening to podcasts, or, you can do more complicated things which require your full attention and divert your brain from all other things. There is also something meditative about crochet, as has been discussed here. I find it a good way to destress. Additionally, a lot of what I make is for gifts or for charities and that is a really good feeling.

What’s next?
Suggestions have come in for other time series blankets, e.g. greys for aerosol optical depth punctuated by red for volcanic eruptions, oranges and yellows punctuated by black for the solar cycle (black being high-sunspot years), or the Central England Temperature record. Blankets take time, but scarves could be quicker, so I might test a few of these ideas out over the next few months. I would love to hear and see more ideas, or perhaps we could organise a mass “global warming blanket” make-athon around the world and then donate the blankets to communities in need.

And finally.
More seriously, whilst lots of the initial comments on Twitter were from climate scientists, there were also a lot from a far more diverse set of folks. I think this is a good example of how, if we want to reach out, we need to explore different ways of doing so. There are only so many people who respond to graphs and charts. And if we can find something we are passionate about as a way of doing it, then all the better.

This post has also been published here.

Edited by Dasaraden Mauree


Ellie Highwood is Professor of Climate Physics in the Department of Meteorology at the University of Reading. She did a BSc in Physics at the University of Manchester before studying for a PhD at Reading, where she has been ever since! Her research interests concern the role of atmospheric particulates (aerosols) in climate and climate change. She has led two international aircraft campaigns to measure the properties of aerosols and has been involved in many others. Research projects have considered Saharan dust, volcanoes, and aerosols from human activities. She has over 40 publications in the peer-reviewed literature and a few media appearances. She also teaches introductory meteorology and climate change to undergraduates, and project management to PhD students. Previously she has been a member of the RMetS Council and Education Committee, and Editor of Society News. She also writes a regular “climate scientist” column for Weather magazine. She tweets as @EllieHighwood.

Science Communication – Brexit, Climate Change and the Bluedot Festival

Earlier this summer, journalists, broadcasters, writers and scientists gathered in Manchester, UK for the Third European Conference of Science Journalists (ECSJ), arranged by two prestigious organisations: the Association of British Science Writers (ABSW), which supports those who write about science and technology in the UK through debates, events and awards, and the European Union of Science Journalists’ Associations (EUSJA), which represents 2,500 science journalists from 23 national associations in 20 European countries. EUSJA promotes scientific and technical communication between the international scientific community and journalists, mainly by organising events and workshops and by working with the European Commission in the interest of science and society.

The pre-conference networking event began a towering twenty-three floors above the ground at the highest Champagne bar in Manchester. Here the delegates were introduced to the notion of Manchester as a hub for science with a fly-by video of the ‘science quarter’. This stems from the library at the heart of the city and fans outwards to the south like a segment in an orange. This segment engulfs an impressive two universities, several hospitals and a science park.

Looking towards the south west of the city from “Cloud 23”, the bar on the 23rd floor of the Beetham Tower, Manchester, UK. Image credit: David Dixon

The following morning the conference began at the Manchester Conference Complex in the city centre, focussing on contemporary issues in science journalism and skills for professional development. The panellists and chairs were a mixture of academics and journalists of a range of nationalities, all with experience in the field of science communication. Each panel discussed its topic amongst themselves before the conversation was opened to all-comers.

The opening plenary began a discussion on how independent Europe’s science news is and how it can become hijacked by vested interests. For example, if a person writes a scientific story for a newspaper, are they biased by being paid by an external company to write that story? There was a consensus for openness in the funding process behind news stories so that the merits of the story lie in how impartial the writer is perceived to be as well as the content itself.

Although the second session of the day focussed on the reporting of EU-funded science (an 80 billion euro question), it also gave delegates a chance to mention the elephant in the room – Brexit, the will of a spectrum of people representing the whole political horseshoe for the UK to leave the EU. There was a feeling from the panel that the EU funding structure has allowed science to work on projects that are not just commercially viable in the short term. The large benefit of cross-country collaboration in Europe was stressed repeatedly, alongside the general acknowledgement that in research the country of origin of specific researchers becomes irrelevant. These thoughts played into the much-discussed post-Brexit questions: will it become harder for the UK to access EU science funding, given its determination to curb net migration, and what exemptions (from the UK and the EU) can be made for experts in their field? It was also discussed how the UK’s pot of EU science funding may be divided up amongst other EU countries in the future. The discussion ended with a series of brief historical anecdotes of governments that favoured local policies and competition, which have had a tendency to derail international collaboration. The main point was that research continued, but the job was made harder.

The next two sessions focussed on starting a new publication and on pitching your idea to an existing one. Using case studies of a range of recent start-up publications, it was concluded that what matters most when creating a publication is focus, editorial quality, being online, design, collaboration and content.

The closing plenary was concerned with how to work for media that are sceptical about climate change – a place where a science communicator may be forced to go along with an editorial line against their own conviction, a conviction shared by the majority of scientists worldwide, who agree that climate change is happening and debate only the rate at which it is occurring. A good recommendation was not to preach but to state facts. The dangers of opening with a sceptical remark as a way into the topic were debated; it was thought this could backfire if the sceptical comment were later quoted as expert opinion. At the end of the session the delegates pondered that, if the building blocks of climate change science were first conceived in 1896, it is amazing that the topic is still controversial 120 years later!

For the final event of the day, the delegates made their way to the Bluedot festival at Jodrell Bank – a festival of music, science, arts, technology, culture, food and film in the shadow of the Lovell Telescope, which was illuminated by Brian Eno using large-scale projections to create a visual installation. Here the ABSW Science Writers’ Awards were hosted, and this blog was short-listed for an award. It provided another great opportunity to network at this thought-provoking conference.


An unlikely choice between a gasoline or diesel car…

I have recently been confronted with the choice of buying a “new” car, and this proved a very tedious task given the diversity of cars on the market today. One of my primary concerns was, of course, to find the least polluting car for my usage (roughly 15,000 km/year).

Pollution from cars (or I should say motor vehicles) is one of the major sources of air pollution (particulate matter, soot, NOx, …) in urban areas. In both winter and summer, it causes prolonged exposure to ozone or particulate matter, which can have significant effects on the health of urban populations. Besides, vehicles are also one of the most important sources of greenhouse gas emissions (around 30%). Extensive research in various areas (air pollution and monitoring in urban areas, efficiency of motor vehicles, mobility and public transportation, urban planning, …) is thus being conducted to help reduce exposure to dangerous pollutants and emissions of GHG.

Manufacturers have been increasingly constrained by new regulations to decrease pollutant emissions (with the EURO 6 norm now in force in the EU) and to increase the efficiency of motor vehicles. Governments around the world, and more particularly in Europe after the financial crisis of 2007/2008, have introduced subsidies to encourage people to buy new, more energy-efficient vehicles. One of the main issues here is that the more efficient vehicles are not necessarily the less polluting ones: policies have been based on GHG emissions from vehicle fuel consumption, without considering the full life cycle analysis or the emissions of other pollutants.

Take, for example, an electric car: the GHG emissions (and also other pollutants) are pretty low or close to zero, as none are released by the car itself. But we also need to evaluate the emissions from the power plant generating the electricity (most likely a centralized one based on either fossil fuel or nuclear energy). Furthermore, if the life cycle of the battery in such cars is taken into consideration, the picture is not so black and white anymore, as has been pointed out by numerous studies (ADEME – sorry for the French link!). Besides, electric vehicles remain quite expensive and are not really adapted to every usage.

If we compare diesel and gasoline cars, it becomes a bit more tedious. Diesel engines generally consume less fuel than gasoline ones. However, their PM emissions, for example, can be quite high, and hence they need really efficient filters to get rid of these pollutants. More stringent regulations have forced manufacturers to significantly improve the quality of the air coming out of their diesel engines, but diesel cars still remain, on average, more polluting than gasoline cars. Countries like France, which strongly subsidized the use of diesel in the past, are now finding it quite difficult to phase out these types of cars. And besides, diesels are more efficient and hence emit less GHG.
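As a back-of-the-envelope illustration of these trade-offs, the sketch below compares rough annual CO2 emissions for the three options at my mileage. Every figure is an indicative assumption (real values depend heavily on the specific car, driving style and electricity mix), and the battery’s life cycle is deliberately left out:

```python
# Back-of-the-envelope annual CO2 comparison for ~15,000 km/year.
# All figures below are rough illustrative assumptions, not measurements.

KM_PER_YEAR = 15_000

CO2_PER_L = {"gasoline": 2.3, "diesel": 2.7}    # kg CO2 per litre burned (approx.)
L_PER_100KM = {"gasoline": 6.5, "diesel": 5.0}  # assumed consumption: diesel burns less

EV_KWH_PER_KM = 0.18                            # assumed electric consumption
GRID_KG_CO2_PER_KWH = {"mostly renewable grid": 0.05,
                       "mostly fossil grid": 0.6}  # assumed carbon intensities

for fuel in ("gasoline", "diesel"):
    kg = KM_PER_YEAR / 100 * L_PER_100KM[fuel] * CO2_PER_L[fuel]
    print(f"{fuel}: {kg:.0f} kg CO2/year")

for grid, intensity in GRID_KG_CO2_PER_KWH.items():
    kg = KM_PER_YEAR * EV_KWH_PER_KM * intensity
    print(f"electric ({grid}): {kg:.0f} kg CO2/year, excluding the battery life cycle")
```

Under these assumed numbers the diesel emits somewhat less CO2 than the gasoline car despite its higher carbon content per litre, and the electric car’s footprint swings from tiny to substantial depending on the grid – which is precisely why the choice is not obvious.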

Coming back to my choice of car: in the end, the choice was between long-term and short-term benefits. Using a gasoline car, or an electric car in a country where the energy comes from renewables, would be more ecologically sound if you drive mostly in urban areas. However, if you are thinking about the long-term benefits (with climate change in mind), then you should probably opt for a more efficient diesel car.

All of this points out that research still needs to be conducted and new innovative ideas are really needed (like Elon Musk’s batteries, maybe?) to bridge the enormous gap between having an efficient car, its full life cycle, and living in a pollution-free urban environment. But of course…, the best solution is to use public transport or bikes… well, that is not always possible!