
The puzzle of high Arctic aerosols


Current Position: 86°24’ N, 13°29’E (17th September 2018)

The Arctic Ocean 2018 Expedition drifted for 33 days in the high Arctic and is now heading back south to Tromsø, Norway. With continuous aerosol observations, we hope to be able to add new pieces to the high Arctic aerosol puzzle to create a more complete picture that can help us to improve our understanding of the surface energy budget in the region.

Cruise track to the high Arctic with the 33-day drift period. (Credit: Ian Brooks)

In recent years, considerable efforts have been undertaken to study Arctic aerosol. However, Arctic aerosol has many facets, so different study designs are necessary to capture the full picture. To name just a few efforts: during the International Polar Year in 2008, flight campaigns over the North American and western European Arctic studied the northward transport of pollution plumes in spring and summertime [1,2,3]. More survey-oriented flights (PAMARCMIP) have been carried out over several years and seasons around the western Arctic coasts [4]. The NETCARE campaigns [5] have studied summertime Canadian Arctic aerosol in the marginal ice zone. And the Arctic Monitoring and Assessment Programme (AMAP) has issued reports on the radiative forcing of anthropogenic aerosol in the Arctic [6,7].

These and many other studies have advanced our understanding of Arctic aerosol substantially. Since the 1950s, we have been aware of the Arctic Haze phenomenon, which describes the accumulation of air pollution emitted from high-latitude sources during winter and early spring. In these seasons, the Arctic atmosphere is strongly stratified, air masses are trapped under the so-called polar dome, and atmospheric cleansing processes are minimal. In spring, when sunlight returns and the Arctic atmosphere becomes more dynamic, the Arctic Haze dissipates through air mass movement and precipitation. Then, long-range transport from the mid-latitudes can be a source of Arctic aerosol. This includes anthropogenic as well as forest fire emissions. The latest AMAP assessment report [6] estimates that the direct radiative forcing of current global black and organic carbon as well as sulfur emissions leads to a total Arctic equilibrium surface temperature response of 0.35 °C. While black carbon has a warming effect, organic carbon and particulate sulfate cool. Hence, over the past decades the reductions in sulfur emissions from Europe and North America have led to less cooling from air pollution in the Arctic [8]. Currently, much effort is invested in understanding new Arctic emission sources that might contribute to the black carbon burden in the future, for example oil and gas facilities or shipping [9, 10, 11].

These studies contribute to a more thorough understanding of the direct radiative effects of anthropogenic aerosol and fire emissions transported to the Arctic. However, neither long-range transported aerosol nor emissions within the lower Arctic contribute substantially to the aerosol found in the boundary layer of the high Arctic [12]. These particles are emitted at locations with warmer temperatures, and the air masses carrying them travel north along isentropes that rise in altitude the further north they go. The high Arctic boundary layer aerosol, however, is important because it modulates the radiative properties of the persistent Arctic low-level clouds that are decisive for the surface energy budget (see the first Arctic Ocean blog from August 2018).

Currently, knowledge about the sources and properties of high Arctic aerosol, as well as their interactions with clouds, is very limited, mainly because observations in the high Arctic are rare. In principle, there are four main processes that shape the aerosol population in the high north: a) primary sea spray aerosol production from open water areas, including open leads in the pack ice; b) new particle formation; c) horizontal and vertical transport of natural and anthropogenic particles; and d) resuspension of particles from the snow and ice surface (snowflakes, frost flowers etc.). From previous studies, especially in the marginal ice zone and at land-based Arctic observatories, we know that microbial emissions of dimethyl sulfide and volatile organic compounds are an important source of secondary aerosol species such as particulate sulfate or organics [13]. The marginal ice zone has also been identified as a potential source region for new particle formation [14]. What is not known is to what degree these particles are transported further north. Several scavenging processes can occur during transport. These include coagulation of smaller particles to form larger particles, loss of smaller particles during cloud processing, precipitation of particles that acted as cloud condensation nuclei or ice nucleating particles, and sedimentation of large particles to the surface.

Further north in the pack ice, the biological activity is thought to be different compared to the marginal ice zone, because it is limited by the availability of nutrients and light under the ice. Hence, local natural emissions in the high Arctic are expected to be lower. Similarly, since open water areas are smaller, the contribution of primary marine aerosol is expected to be lower. In addition, the sources of compounds for new particle formation that far north are not very well researched.

To understand some of these sources and their relevance to cloud properties, an international team is currently measuring the aerosol chemical and microphysical properties in detail during the Arctic Ocean 2018 expedition on board the Swedish icebreaker Oden. It is the fifth expedition in a series of high Arctic field campaigns on the same icebreaker. Previous campaigns took place in 1991, 1996, 2001 and 2008 (see refs [15, 16, 17, 18] and references therein).

The picture below shows the various types of air inlets and cloud probes that are used to sample ambient aerosol particles and cloud droplets or ice crystals. A large suite of instrumentation is used to determine in detail the number concentration and size distribution of particles with diameters between 2 nm and 20 µm. Several aerosol mass spectrometers help us to identify the chemical composition of particles between 15 nm and 1 µm as well as the clusters and ions that contribute to new particle formation. Filter samples of particles smaller than 10 µm will allow a detailed determination of the chemical components of coarse particles. They will also give a visual impression of the nature of the particles through electron microscopy. Filter samples are also used for the determination of ice nucleating particles at different temperatures. Cloud condensation nuclei counters provide information on the ability of particles to form cloud droplets. A multi-parameter bioaerosol spectrometer measures the number, shape and fluorescence of particles. Further instruments such as black carbon and gas monitors help us to distinguish pristine air masses from long-range pollution transport as well as from the influence of the ship exhaust. We can distinguish and characterize in detail the particle populations that do or do not influence low-level Arctic clouds and fogs by using three different inlets: i) a total inlet, which samples all aerosol particles and cloud droplets/ice crystals; ii) an interstitial inlet, which selectively samples particles that do not form droplets when we are situated inside fog or clouds; and iii) a counterflow virtual impactor (CVI) inlet, which samples only cloud droplets or ice crystals (excluding non-activated aerosol particles). The cloud droplets or ice crystals sampled by the CVI inlet are then dried, so that only the cloud residuals (or cloud condensation nuclei) are characterized in the laboratory situated below.
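As an illustration of how this inlet trio is used in the data analysis, the short Python sketch below (our own, with invented numbers, not the expedition's processing code) estimates the activated fraction of particles inside a fog from paired total and interstitial number concentrations.

```python
# Minimal sketch (not expedition code): estimating the fraction of particles
# that were activated into fog/cloud droplets from paired inlet measurements.
# n_total        : number concentration behind the total inlet [cm^-3]
# n_interstitial : number concentration behind the interstitial inlet [cm^-3]

def activated_fraction(n_total, n_interstitial):
    """Fraction of particles residing inside droplets or ice crystals."""
    if n_total <= 0:
        raise ValueError("total concentration must be positive")
    return max(0.0, (n_total - n_interstitial) / n_total)

# Hypothetical 5-minute averages during a fog event (values invented):
for n_tot, n_int in [(35.0, 12.0), (28.0, 9.5), (40.0, 15.0)]:
    print(f"N_total = {n_tot:5.1f} cm^-3 -> activated fraction = "
          f"{activated_fraction(n_tot, n_int):.2f}")
```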

Inlet and cloud probe set-up for aerosol and droplet measurements installed on the 4th deck on board the icebreaker Oden. From left to right: inlets for particulate matter smaller than 1 µm (PM1) and smaller than 10 µm (PM10); forward scattering spectrometer probe (FSSP) for droplet size distribution measurements; counterflow virtual impactor (CVI) inlet for sampling cloud droplets and ice crystals; total inlet for sampling of all aerosol particles and cloud droplets/ice crystals; interstitial inlet for sampling non-activated particles; particle volume monitor (PVM) for the determination of cloud liquid water content and effective droplet radius. Newly formed, very small particles are sampled with a different inlet (not shown in the picture) specifically designed to minimize diffusion losses. (Picture credit: Paul Zieger)

To gain more knowledge about the chemical composition and ice nucleating activity of particles in clouds, we also collect cloud and fog water on the uppermost deck of the ship and from clouds further aloft by using tethered balloon systems. During vertical profiles with the two tethered balloons, we also obtain particle number concentration and size distribution information to understand to what extent the boundary layer aerosol is mixed with the cloud-level aerosol. Furthermore, a floating aerosol chamber is operated at an open lead near the ship to measure the fluxes of particles from the water to the atmosphere. It is still unknown whether open leads are a significant source of particles. For more details on the general set-up of the expedition see the first two blogs of the Arctic Ocean Expedition (here and here).

After 33 days of continuous measurements while drifting with the ice floe and after having experienced the partial freeze-up of the melt ponds and open water areas, it is now time for the expedition to head back south. We will use two stations in the marginal ice zone during the transit into and out of the pack ice as benchmarks for Arctic aerosol characteristics south of our 5-week ice floe station.

As Oden works her way back through the ice and the expedition comes to an end, we recapitulate what we have measured in the past weeks. What was striking, especially for those who have already spent several summers in the pack ice, is that this time the weather was very variable. There were hardly two days in a row with stable conditions. Instead, one low pressure system after another passed over us, skies changed from bright blue to pale grey, calm winds to storms… On average, we experienced the same number of days with fog, clouds and sunshine as previous expeditions, but the rhythm was clearly different. From an aerosol perspective, these conditions meant that we were able to sample a wide variety of conditions, including new particle formation, the absence of cloud condensation nuclei with total number concentrations as low as 2 particles per cubic centimeter, coarse mode particles, and size distributions with a Hoppel minimum that is typical for cloud-processed particles.

Coming back home, we can hardly wait to fully exploit our recorded datasets. Stay tuned!

Do not hesitate to contact us with any questions regarding the expedition and measurements. Check out this blog for more details of life during the expedition and our project website, which is part of the Arctic Ocean 2018 expedition.

Changing Arctic landscapes. From top to bottom: upon arrival at the drift station there were many open leads. Storms pushed the floes together and partially closed the leads. Mild and misty weather. Cold days and sunshine led to freeze-up. (Credit: Julia Schmale)

Edited by Dasaraden Mauree


The authors from left to right: Andrea Baccarini, Julia Schmale, Paul Zieger

Julia Schmale is a scientist in the Laboratory of Atmospheric Chemistry at the Paul Scherrer Institute, Switzerland. She has been involved in Arctic aerosol research for the past 10 years.

Andrea Baccarini is doing his PhD in the Laboratory of Atmospheric Chemistry at the Paul Scherrer Institute, Switzerland. He specializes in new particle formation in polar regions.

Paul Zieger is an Assistant Professor in Atmospheric Sciences at Stockholm University, Sweden. He specializes in experimental techniques for studying atmospheric aerosols and clouds at high latitudes.

The perfect ice floe


Current position: 89°31.85 N, 62°0.45 E, drifting with a multi-year ice floe (24th August 2018)

A little more than three weeks into the Arctic Ocean 2018 Expedition, the team has found the right ice floe and settled into routine operations.

Finding the perfect ice floe for an interdisciplinary science cruise is not an easy task. The Arctic Ocean 2018 Expedition aims to understand the linkages between the sea, microbial life, the chemical composition of the lower atmosphere and clouds (see previous blog entry) in the high Arctic. This means that the “perfect floe” needs to serve a multitude of scientific activities that involve sampling from open water, drilling ice cores, setting up a meteorological tower, installing balloons, driving a remotely operated vehicle, measuring fluxes from open leads and sampling air uncontaminated by the expedition activities. The floe hence needs to be composed of multi-year ice, be thick enough to carry all installations but not too thick to drill through. There should also be an open lead large enough for floating platforms, and the shape of the floe needs to be such that the icebreaker can be moored against it on the port or starboard side facing any of the four cardinal directions, depending on where the wind is coming from.

The search for the ice floe actually turned out to be more challenging than expected. The tricky task was not only to find a floe that would satisfy all scientific needs; getting to it north of 89°N proved exceptionally difficult this year. After passing the marginal ice zone north of Svalbard (see the blue line on the track in Figure 2), progress through the first-year ice was relatively easy. Making roughly 6 knots, that is about 12 km/h, we advanced quickly. After a couple of days, however, the ice became unexpectedly thick, up to three meters. This made progress difficult and slow, even for Oden with her 24,500 horsepower. In such conditions the strategy is to send a helicopter ahead to scout for a convenient route through cracks and thinner ice. However, persistent fog kept the pilot from taking off, which meant the expedition had to sit and wait in the same spot. For us aerosol scientists looking at aerosol-cloud interactions this was a welcome occasion to get our hands on the first exciting data. In the meantime, strong winds from the east pushed the pack ice together even harder, producing ridges that are hard to overcome with the ship. But with a bit of patience and improved weather conditions, we progressed northwards, keeping our eyes open for the floe.

Figure 2: Cruise track with drift. The light red line indicates the track to the ice floe, the dark red line indicates the drift with the floe. The thin blue line is the marginal ice zone from the beginning of August.

As it happened, we met insurmountable ice conditions at 89°54’ N, 38°32’ E, just about 12 km from the North Pole – reason enough to celebrate our farthest north.

Figure 3: Expedition picture at the North Pole. (Credit: SPRS)

Going back south from there, with helicopter flights and good visibility, it took just over a day until we finally found ice conditions featuring multiple suitable floes.

And here we are. After a week of intense mobilization on the floe, the four sites on the ice and the instrumentation on the ship are now in full operation and routine, if you stretch the meaning of the term a bit, has taken over. A normal day looks approximately like this:

7:45: breakfast, meteorological briefing, information about the plan of the day
8:30 – 9:00: heavy lifting of material from the ship to the ice floe with the crane
9:00 (or later): weather permitting, teams go to their sites; CTDs are cast from the ship if the aft is not covered by ice
11:45: lunch for all on board and picnic on the floe
17:30: end of activities on the ice, lifting of the gangway to prevent polar bear visits on the ship
17:45: dinner
Evening: science meetings, data crunching, lab work or recreation

Figure 4: Sites on the floe, nearby the ship. (Credit: Mario Hoppmann)

At the balloon site, about 200 m from the ship, one balloon and one heli-kite are lifted alternately to take profiles of radiation, basic meteorological variables and aerosol concentrations. Other instruments are lifted to sit for hours in and above clouds to sample cloud water and ice nucleating particles, respectively. At the met alley, a 15 m tall mast carries radiation and flux instrumentation to characterize heat fluxes in the boundary layer. The red tent at the remotely operated vehicle (ROV) site houses a pool through which the ROV dives under the floe to measure physical properties of the water. The longest walk, about 20 minutes, is to the open lead site, where a catamaran takes sea surface microlayer samples, a floating platform observes aerosol production and cameras image underwater bubbles. The ice core drilling team visits different sites on the floe to take samples for microbial and halogen analyses.

Open Lead site. (Credit: Julia Schmale)

Importantly, all activities on the ice need to be accompanied by bear guards. Everybody carries a radio and needs to report when they go off the ship and come back. If the visibility decreases, all need to come in for safety reasons. Lab work and continuous measurements on the ship happen throughout the day and night. More details on the ship-based aerosol laboratory follow in the next contribution.

Edited by Dasaraden Mauree


Julia Schmale is an atmospheric scientist at the Paul Scherrer Institute in Switzerland. Her research focuses on aerosol-cloud interactions in extreme environments. She is a member of the Atmosphere Working Group of the International Arctic Science Committee and a member of the Arctic Monitoring and Assessment Programme Expert Group on Short-lived Climate Forcers.

Into the mist of studying the mystery of Arctic low level clouds


This post is the first of a “live” series of blog posts written by Julia Schmale while she is participating in the Arctic Ocean 2018 expedition.

Low-level Arctic clouds are still a mystery to the atmospheric science community. To understand their role in the present and future Arctic climate, the Arctic Ocean 2018 Expedition is currently under way with an international group of scientists to study the ocean, lower atmosphere, clouds and aerosols.

Low-level clouds in the high Arctic influence the energy budget of the region and hence play an important role for the Arctic climate. The Arctic is warming about twice as fast as the global average, a phenomenon called Arctic amplification. The role of clouds for climate is linked to their interaction with solar and terrestrial radiation. They reflect shortwave radiation, thereby sending energy back to space and cooling the surface. Clouds also absorb the longwave radiation emitted by the surface and re-emit part of it back down, a greenhouse effect that warms the surface. The cloud top cools during this process, which makes air parcels surrounding the top cool as well and sink towards the surface. These air masses are replaced by warmer surface air, which rises. This can create a well-mixed Arctic boundary layer. Most of the time, however, the cloud level is decoupled from the surface due to temperature inversions. This is possible when clouds are thin. In this case, clouds cannot feed on the water vapor from the surface and they might dissipate. In summer, the interaction of clouds with shortwave radiation is usually less important than their interaction with longwave radiation. This is because the cloud albedo is similar to the sea ice albedo, so clouds do not have a strong cooling effect. However, as summer sea ice retreats and the surface gets darker, clouds may contribute to surface cooling in the future.

The overall radiative properties of clouds are further influenced by the cloud phase. Arctic summer clouds are normally mixed-phase, that is, liquid droplets co-exist with ice crystals. Usually, ice and liquid water do not co-exist, because the ice crystals grow at the expense of liquid droplets, which evaporate (the saturation water vapor pressure is higher over liquid droplets than over ice crystals). However, in simple words, in the summertime Arctic, when mixing of air masses occurs, liquid droplets form in rising air parcels and sustain the liquid layer at the bottom of the cloud, which in turn feeds the ice crystal growth.
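The reason the ice grows at the droplets' expense is that, at the same sub-zero temperature, the saturation water vapor pressure over liquid water is higher than over ice, so air that is saturated with respect to liquid is supersaturated with respect to ice. The short Python sketch below illustrates the size of this difference using Magnus-type approximation formulas; it is a textbook-style illustration of ours, not part of the expedition's analysis.

```python
import math

def e_sat_water(t_c):
    """Saturation vapor pressure over liquid water [hPa], Magnus-type formula."""
    return 6.1094 * math.exp(17.625 * t_c / (243.04 + t_c))

def e_sat_ice(t_c):
    """Saturation vapor pressure over ice [hPa], Magnus-type formula."""
    return 6.1121 * math.exp(22.587 * t_c / (273.86 + t_c))

# In the mixed-phase temperature range, e_sat over water exceeds e_sat over ice,
# so air saturated with respect to liquid is supersaturated with respect to ice:
for t in (-2, -6, -10, -15):
    ew, ei = e_sat_water(t), e_sat_ice(t)
    print(f"T = {t:4d} °C: e_w = {ew:5.2f} hPa, e_i = {ei:5.2f} hPa, "
          f"supersaturation over ice = {100 * (ew / ei - 1):.1f} %")
```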

As cloud droplets and ice crystals only form on cloud condensation nuclei (CCN) and ice nucleating particles (INP), the whole complexity described above depends on the presence of aerosol particles. But the central Arctic Ocean has an extremely limited supply of CCN and INP. Potential sources include locally produced or long-range transported particles. Long-range transport of particles – or of precursor gases that form particles – to the high Arctic in the free troposphere can contribute to the number of CCN and INP. However, in the summer Arctic atmosphere, precipitation is frequent and particles can be washed out on their way north. Regional transport of trace gases such as dimethyl sulfide (DMS), which is emitted from phytoplankton blooms in the marginal ice zone, can contribute to the CCN after atmospheric oxidation. Local sources in the high Arctic are, however, extremely limited. Open leads, areas of open water that form as the sea ice moves, can produce particles through bubble bursting. These bursting bubbles expel material such as sea salt and organic particles contained in the surface water into the air, from where they might be transported to the cloud level. Another conceivable source of particles is new particle formation, meaning that particles are freshly formed purely from gases. This process, and the chemical nature and sources of the gases involved, are however poorly understood.

To shed light on how cloud formation works in the summertime high Arctic, and how this might change in the future with changing climatic conditions, the Arctic Ocean 2018 Expedition is designed to investigate physical, chemical and biological processes from the water column to the free troposphere. The graphic below provides a schematic of the planned activities.

Schematic of the Arctic Ocean expedition set-up. (Credit: Paul Zieger)

 

On 1 August, we left Longyearbyen. After a 24-hour station in the marginal ice zone, we are now heading towards the North Pole area, where we will look for a stable multi-year ice floe against which the ship will be moored for several weeks to drift along. This strategy will give us the opportunity for detailed process studies, several of which will be featured in the upcoming blog contributions.

Further links:
Expedition website:
https://polarforskningsportalen.se/en/arctic/expeditions/arctic-ocean-2018
Arctic ocean blog of the Paul Scherrer Institute:
https://www.psi.ch/lac/arctic-ocean-blog
Stockholm University Expedition Webpage:
https://www.aces.su.se/research/projects/microbiology-ocean-cloud-coupling-in-the-high-arctic-moccha/

Edited by Dasaraden Mauree


Julia Schmale is an atmospheric scientist at the Paul Scherrer Institute in Switzerland. Her research focuses on aerosol-cloud interactions in extreme environments. She is a member of the Atmosphere Working Group of the International Arctic Science Committee and a member of the Arctic Monitoring and Assessment Programme Expert Group on Short-lived Climate Forcers.

Buckle up! It's about to get bumpy on the plane.


Clear-air turbulence (CAT) is a major hazard to the aviation industry. If you have ever been on a plane, you have probably heard the pilots warn that clear-air turbulence could occur at any time, so always wear your seatbelt. Most people will have experienced it for themselves and wanted to grip their seat. However, severe turbulence capable of causing serious passenger injuries is rare. It is defined as vertical motion of the aircraft strong enough to lift anyone who is not belted out of their seat, or off the floor if they are standing. In the United States alone, it costs over 200 million US dollars in compensation for injuries, with people being hospitalised with broken bones and head injuries. Besides passengers suffering serious injuries, the cabin crew are most vulnerable, as they spend most of their time on their feet serving customers. This results in an additional cost if they are injured and unable to work.

Clear-air turbulence is defined as high-altitude in-flight bumpiness away from thunderstorm activity. It can appear out of nowhere at any time and is particularly dangerous because pilots cannot see it or detect it using on-board instruments. Usually, the first time a pilot is aware of the turbulence is when they are already flying through it. Because it is a major hazard, we need to know how it might change in the future, so that the industry can prepare if necessary. This could be done by trying to improve forecasts so that pilots can avoid regions likely to contain severe turbulence, or by making sure the aircraft can withstand more frequent and severe turbulence.

Our new paper published in Geophysical Research Letters, ‘Global Response of Clear-Air Turbulence to Climate Change’, aims at understanding how clear-air turbulence will change in the future around the world and throughout the year. Our study found that the busiest flight routes around the world will see the largest increase in turbulence. For example, the North Atlantic, North America, the North Pacific and Europe (see Figure 1) will see a significant increase in severe turbulence, which could cause more problems in the future. These regions see the largest increase because of the jet stream. The jet stream is a fast-flowing river of air found in the mid-latitudes. Clear-air turbulence is predominantly caused by the wind traveling at different speeds around the jet stream. Climate change is expected to increase the jet stream speed and therefore increase the vertical wind shear, causing more turbulence.
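To make 'vertical wind shear' more concrete, the small Python sketch below computes the shear between two model levels and the gradient Richardson number, a classic indicator of the dynamic instability that generates clear-air turbulence. It is our own illustration with invented numbers, not the turbulence diagnostic used in the paper.

```python
G = 9.81  # gravitational acceleration [m s^-2]

def richardson_number(theta1, theta2, u1, u2, v1, v2, dz):
    """Gradient Richardson number between two vertical levels.

    theta1, theta2 : potential temperature at the lower/upper level [K]
    u, v           : horizontal wind components at the two levels [m s^-1]
    dz             : layer thickness [m]
    """
    theta_mean = 0.5 * (theta1 + theta2)
    n2 = (G / theta_mean) * (theta2 - theta1) / dz          # static stability
    shear2 = ((u2 - u1) / dz) ** 2 + ((v2 - v1) / dz) ** 2  # (vertical shear)^2
    return n2 / shear2 if shear2 > 0 else float("inf")

# Hypothetical upper-tropospheric layer near a jet stream (invented numbers):
ri = richardson_number(theta1=320.0, theta2=322.0,
                       u1=35.0, u2=55.0, v1=5.0, v2=8.0, dz=1000.0)
print(f"Ri = {ri:.2f}  (values below ~0.25 favour shear instability and CAT)")
```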

To put these findings in context, severe turbulence in the future will be as frequent as moderate turbulence has been historically. Anyone who is a frequent flyer will likely have experienced moderate turbulence at some point, but fewer people have experienced severe turbulence. This study suggests that this will change in the future, with most frequent flyers experiencing severe turbulence on some flight routes, as well as even more moderate turbulence. Our study also found that moderate turbulence will become as frequent in summer as it has historically been in winter. This is significant because, although clear-air turbulence is currently more likely in winter, it will become much more of a year-round phenomenon (see Figure 2).

Figure 2: Seasonal variation in turbulence intensity.

 

This increase in clear-air turbulence highlights the importance of improving turbulence forecasting. Current research has shown that using ensemble forecasts (many forecasts of the same event) and more turbulence diagnostics than the one we used in this study can improve forecast skill. By improving the forecasts, we could more consistently avoid areas of severe turbulence or make sure passengers and crew are seat-belted before a turbulence event occurs. As these improvements are not yet fully operational, you can still reduce your own risk of injury by wearing your seat belt as much as possible so that, if the aircraft does hit unexpected turbulence, you avoid serious injury.


This blog has been prepared by Luke Storer (@LukeNStorer), Department of Meteorology, University of Reading, Reading, UK and edited by Dasaraden Mauree (@D_Mauree). 


Volcanic Ash Particles Hold Clues to Their History and Effects
Volcanic Ash as an Active Agent in the Earth System (VA3): Combining Models and Experiments; Hamburg, Germany, 12–13 September 2016

Volcanic ash is a spectacular companion of volcanic activity that carries valuable information about the subsurface processes. It also poses a range of severe hazards to public health, infrastructure, aviation, and agriculture, and it plays a significant role in biogeochemical cycles.

Scientists can examine ash particles from volcanic eruptions for clues to the history of their journey from the lithosphere (Earth’s crust and upper mantle) to the atmosphere, hydrosphere, and biosphere (Figure 1). These tephra particles are less than 2 millimeters in diameter, and they record most of this history on or near their surfaces. Understanding the physicochemical properties of the ash particle surfaces is essential to deciphering the underlying volcanic and atmospheric processes and to predicting the widespread effects and hazards posed by these small particles. These properties have been extensively investigated in recent years, but several fundamental questions remain open.

Figure 1: Particle surface properties strongly affect the life cycle and effects of volcanic ash particles within the Earth system (Credit: G. Hoshyaripour).

For example, ash surface generation and alteration through processes occurring during eruption (e.g., fragmentation and recycling) and after eruption (e.g., aggregation, cloud chemistry, and microphysics) are not yet quantitatively well understood and thus are not fully implemented in the models. Therefore, gaps remain in our understanding of the volcanic and atmospheric life cycle of the ash and how this life cycle is linked to the ash’s surface properties and environmental effects. This limitation hinders the reliable estimation of far-field airborne ash concentrations, a central factor in assessing the ash hazard for aviation.

Addressing the challenges in volcanic ash surface characterization requires close collaboration of experts in laboratory experiments, in situ measurements, space-based observations, and numerical modeling to co-develop reliable assessment tools for both fundamental research and operational purposes. These actions should involve specialists from geochemistry, geology, volcanology, and atmospheric sciences to combine advanced experimental and observational data on the rate parameters of physicochemical processes and ash surface characteristics with state-of-the-art atmospheric models that incorporate aerosol chemistry, microphysics, and interactions among ash particles, clouds, and solar radiation on local to global scales.

As a first step in this direction, a joint European Geosciences Union (EGU) and American Geophysical Union (AGU) session on volcanic ash, entitled “Volcanic Ash—Generation, Transport, Impacts, and Applications”, has been organized for the upcoming General Assembly and Fall Meeting. The next steps should include 1) initiation of a collaborative network with two working groups on the physical and geochemical life cycles of volcanic ash, and 2) development of an integrated modeling, observational, and experimental data compilation on mid- to large-intensity eruptions to assist with benchmark modeling.

These actions should be linked to the existing activities within the International Association of Volcanology and Chemistry of the Earth’s Interior, EGU, and AGU.

The workshop was supported by the excellence cluster CliSAP (DFG EXC 177).

This blog post was originally prepared as a meeting report on the workshop in Hamburg, Germany, sponsored by the excellence cluster CliSAP (DFG EXC 177).


This blog has been prepared by Ali Hoshyaripour (@Hoshyaripour – email: ali.hoshyaripour@kit.edu), Institute of Meteorology and Climate Research, Karlsruhe Institute of Technology, Germany and edited by Dasaraden Mauree (@D_Mauree). 

How can we use meteorological models to improve building energy simulations?


Climate change is calling for varied and multiple approaches to the adaptation of cities and the mitigation of the coming changes. Because buildings (residential and commercial) are responsible for about 40% of energy consumption, it is necessary to build more energy-efficient ones to decrease their contribution to greenhouse gas emissions.

But what is the relation with the atmosphere? It is twofold: firstly, in a previous post, I described the impact of buildings and other obstacles on the air flow and on the air temperature. Secondly, because the local climate and surrounding environment are modified, there will be a significant change in the energy consumption of these buildings. Currently, building energy simulation tools use weather data usually gathered outside the city, which is hence not representative of the local context. Thus it is crucial to have tools that capture both the dynamics of the atmosphere and those of a building in order to design better and more sustainable urban areas.

In the present work, we have brought these two disciplines together by developing a multi-scale model. On the one hand, a meteorological model, the Canopy Interface Model (CIM), was developed to obtain high-resolution vertical profiles of wind speed, wind direction and air temperature. On the other hand, an energy modelling tool, CitySim, is used to evaluate the energy use of the buildings as well as the irradiation reaching them.
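Conceptually, the coupling is an exchange loop between the two models at every time step: the meteorological model passes local wind and temperature to the building energy model, which returns the heat released by the buildings. The Python sketch below is only our illustration of that loop with toy physics and invented numbers; the function names and coefficients are placeholders, not the actual CIM or CitySim interfaces.

```python
# Conceptual sketch (ours, not the actual CIM/CitySim code) of a two-way
# coupling loop: a canopy meteorology model supplies local weather to a
# building energy model, which returns heat fluxes that feed back into
# the canopy at the next time step. All numbers below are invented.

def canopy_model_step(t_rural, building_heat_flux):
    """Toy stand-in for CIM: canopy temperature = rural value + urban excess."""
    return t_rural + 0.002 * building_heat_flux             # [deg C]

def building_model_step(t_local, t_setpoint=20.0, ua=250.0):
    """Toy stand-in for CitySim: heating demand and waste heat released locally."""
    heating_demand = max(0.0, ua * (t_setpoint - t_local))   # [W]
    return heating_demand, 0.3 * heating_demand              # demand, heat flux

rural_temperatures = [2.0 + 3.0 * (8 < h < 18) for h in range(24)]  # hourly, invented
flux, total_demand = 0.0, 0.0
for t_rural in rural_temperatures:
    t_local = canopy_model_step(t_rural, flux)   # local weather seen by the buildings
    demand, flux = building_model_step(t_local)  # demand now, flux used next hour
    total_demand += demand                       # W summed hourly -> Wh
print(f"Daily heating demand with local coupling: {total_demand / 1000:.1f} kWh")
```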

We applied this coupling methodology to the EPFL campus in Switzerland and compared the modelling results with data collected on the campus for the year 2015. The results show that the coupling leads to computed meteorological variables that are in very good agreement with the observations. However, we noted that the wind speed at 2 m is still somewhat underestimated. One of the reasons for this is that the wind speed close to the ground is very low and more variable at this height.

Comparison of the wind speed (left) and air temperature (right) at 2 m (top) and 12 m (bottom).

We intend to improve this in the future by developing a new parameterization for the wind speed in an urban context, using data currently being acquired in the framework of the MoTUS project. One surprising result from this part of the study is the appearance, inside an urban setting, of a phenomenon called cold air pools, which is typical of valleys. The reason for this is the lack of irradiation reaching the surface in dense urban areas.

Furthermore, we have seen some interesting behaviour on the campus for particular buildings such as the Rolex Learning Center. Buildings with different forms and configurations reacted very differently to the local and standard datasets. We designed a series of additional simulations using multiple building configurations and conducted a sensitivity analysis to determine which parameter, wind speed or air temperature, had the more significant impact on the heating demand (see Figure 1). We showed that the impact of a reduction of 1 °C in air temperature was more important than that of a reduction of 1 m s-1 in wind speed.

Figure 1: Heating demand of the five selected urban configurations (black dots), as a function of a variation of +1 °C (red dots) and -1 °C (blue dots) in air temperature, and of +1.5 m s-1 (violet dots) and -1.5 m s-1 (orange dots) in wind speed.

Finally, we also analysed the energy consumption of the whole EPFL campus. When using standard data, the difference between the simulated and measured demand was around 15%. When localized weather data were used, the difference decreased to 8%. We were thus able to reduce the uncertainty by a factor of about two. The use of local data can hence improve the estimation of building energy use, which will become increasingly important as buildings become more and more efficient.

Reference / datasets

The paper (Mauree et al., Multi-scale modelling to evaluate building energy consumption at the neighbourhood scale, PLOS One, 2017) can be accessed here and data needed to reproduce the experiment are also available on Zenodo.

The art of turning climate change science to a crochet blanket


We welcome a new guest post from Prof. Ellie Highwood on why she made a global warming blanket and how you could do the same!

What do you get when you cross crochet and climate science?
A lot of attention on Twitter.
At the weekend I like to crochet. Last weekend I finished my latest project and posted the picture on Twitter. And then had to turn the notifications off because it all went a bit noisy. The picture of my “global warming blanket” rapidly became my top tweet ever, with more retweets and likes than anything else. Apparently I had found a creative way to visualise trends in global mean temperature. I particularly liked the “this is the most frightening knitwear I have seen all year” comment. Given the interest on Twitter I thought I had better answer a few of the questions in a blog post. Also, it would be great if global warming blankets appeared all over the world.

How did you get the idea?
The global warming blanket was based on “temperature” blankets made by crocheters around the world. Their blankets consist of one row, or square, of crochet each day, coloured according to the temperature at their location. They look amazing and show both the annual cycle and day-to-day variability. Other people make “sky” blankets where the colours are based on the sky colour of the day – this results in a more muted grey-blue-white colour palette.
I wondered what the global temperature series would look like as a blanket. Also, global warming is often explained as greenhouse gases acting like a blanket, trapping infrared radiation and keeping the Earth warm, so that seemed like an interesting link. I had also done several rainbow-themed blankets in the past and had a lot of yarn left that needed using.

Where did the data come from?

I used the annual global mean temperature anomaly relative to the 1900-2000 reference period, as available from NOAA. This is what the data look like when shown more conventionally.

Global temperature anomalies (source: NOAA)

I then devised a colour scale using 15 different colours, each representing a 0.1 °C data interval; so everything between 0 and 0.099 °C was one colour, for example. Using a code for these colours, the time series can be rewritten as in the table below. It is up to the creator to choose the colours to match this scale, and indeed which years to include. I was making a baby-sized blanket, so I chose the last 100 years, 1916-2016.
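For anyone who wants to script the mapping before picking up a hook, here is a small Python sketch of the binning idea. The colour names, bin edges and example anomaly values are only illustrative; the real anomalies have to be taken from the NOAA dataset mentioned above.

```python
# Sketch: map annual global-mean temperature anomalies (deg C) onto 15 colour
# bins of 0.1 deg C width, as used for the blanket. The colour names, the bin
# edges and the example anomalies below are illustrative only.

COLOURS = ["darkest blue", "dark blue", "blue", "light blue", "pale blue",
           "teal", "green", "light green", "yellow", "light orange",
           "orange", "dark orange", "pink", "red", "dark red"]

BIN_WIDTH = 0.1      # deg C per colour
ANOMALY_MIN = -0.7   # lower edge of the coldest bin (deg C), assumed here

def colour_for(anomaly):
    """Return the colour bin for a given temperature anomaly."""
    index = int((anomaly - ANOMALY_MIN) // BIN_WIDTH)
    return COLOURS[max(0, min(index, len(COLOURS) - 1))]

for year, anomaly in [(1916, -0.32), (1998, 0.61), (2016, 0.95)]:
    print(year, f"{anomaly:+.2f} °C ->", colour_for(anomaly))
```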

Because of these choices, and the long reference period, much of the blanket has relatively muted colour differences that tend to emphasise the last 20 years or so. There are other data sets available, and other reference periods, and it would be interesting to see what they looked like. Also, the colours I used were determined mainly by what I had available; if I were to do another one, I might change a few around (dark pink looks too much like red in the photograph, and I needed a darker blue instead of purple for the coldest colour), or even use a completely different colour palette – especially as rainbow colour scales aren’t great: they can distort data and render it meaningless if you are colour blind. Ed Hawkins kindly provided me with a more user-friendly colour scale which I love and may well turn into a scarf for myself (much quicker than a blanket!).

#endrainbow colour scale (from E. Hawkins)

How can I recreate this?
If you want to create something similar, you will need 15 different colours if you want to do the whole 1850-2016 period. You will need relatively more yarn in colours 3-7 than in the other colours (if, like me, you are using your stash). You can use any stitch or pattern, but since you want the colour changes to be the focus of the blanket, I would choose something relatively simple. I used rows of treble crochet (UK terms) and my 100 years ended up being about 90 cm by 110 cm. You can of course choose any width you like for your blanket, or make a scarf by doing a much shorter foundation row. It goes without saying that it could also be knitted. Or painted. Or woven. Or, whatever your particular craft is.

If you look closely (check out the arrows on the figure at the top) you can see the 1997-1998 El Niño (a relatively warm yellow stripe amongst the pink – in this photo the dark pink looks red – I might change this colour if I did it again), the 1991/92 Pinatubo eruption (a relatively cool pink year), as well as the cool periods around 1929 and 1954-56 and the relatively warm 1940-46. Remember that these are global temperature anomalies and may not match your own personal experience at a given location!

Table with the colour codes used to make the global warming blanket

How long did it take?
I used a very simple stitch, so for a blanket this size it took a couple of months (note I only crochet in the evenings, 2 or 3 evenings a week for a couple of hours, with more at some weekends). It helped that the Champions League was on during this time, as other members of the household were happy to sit around watching football whilst I crocheted. Weave the ends in as you go: there are a lot of them, and I had to do them all at the end. The time flies because…

Why do I crochet?
I like crochet because you can do simple projects whilst thinking about other things, watching TV or listening to podcasts, or, you can do more complicated things which require your full attention and divert your brain from all other things. There is also something meditative about crochet, as has been discussed here. I find it a good way to destress. Additionally, a lot of what I make is for gifts or for charities and that is a really good feeling.

What’s next?
Suggestions have come in for other time series blankets e.g. greys for aerosol optical depth punctuated by red for volcanic eruptions, oranges and yellows punctuated by black for solar cycle (black being high sun spot years), a central England temperature record. Blankets take time, but scarves could be quicker so I might test a few of these ideas out over the next few months. Would love to hear and see more ideas, or perhaps we could organise a mass “global warming blanket” make-athon around the world and then donate them to communities in need.

And finally.
More seriously, whilst lots of the initial comments on Twitter were from climate scientists, there are also a lot from a far more diverse set of folks. I think this is a good example of how if we want to reach out, we need to explore different ways of doing so. There are only so many people who respond to graphs and charts. And if we can find something we are passionate about as a way of doing it, then all the better.

This post has also been published here.

Edited by Dasaraden Mauree


Ellie Highwood is Professor of Climate Physics in the Department of Meteorology at the University of Reading. She did a BSc in Physics at the University of Manchester before studying for a PhD at Reading, where she has been ever since! Her research interests concern the role of atmospheric particulates (aerosol) in climate and climate change. She has led two international aircraft campaigns to measure the properties of aerosol and has been involved in many others. Research projects have considered Saharan dust, volcanoes, and aerosols from human activities. She has over 40 publications in the peer-reviewed literature and a few media appearances. She also teaches introductory meteorology and climate change to undergraduates, and project management to PhD students. Previously she has been a member of RMetS Council and Education Committee, and Editor of Society News. She also writes a regular “climate scientist” column for Weather magazine. She tweets as @EllieHighwood.

What? Ice lollies falling from the sky?


You have most probably eaten many lollipops as a kid (and you might still enjoy them). The good thing is that you do not necessarily need to go to the candy shop to get them: you can simply wait for them to fall from the sky and eat them for free. Disclaimer: this kind of lolly might be slightly different from what you expect…


Are lollies really falling from the sky?

Eight years ago (in January 2009), a low-pressure weather system coming from the North Atlantic Ocean reached the UK and brought several rain events to the country. There is nothing really special about this phenomenon in Western Europe in winter. However, a research flight started sampling the clouds in the warm front (the transition zone where warm air replaces cold air) ahead of the low-pressure system and discovered hydrometeors (precipitation products, such as rain and snow) of an unusual kind. Researchers named them ‘ice lollies’ due to their characteristic shape – and maybe due to the researchers’ own gluttony. The microphysical probes on board the aircraft, combined with a radar system located in Southern England, allowed them to measure a wide range of hydrometeors, including these ice lollies, which were observed for the first time at such concentration levels.

How do ice lollies form?

A recent study (Keppas et al., 2017) explains that ice lollies form when water droplets (0.1 to 0.7 mm in size) collide with column-shaped ice crystals (0.25 to 1.4 mm in size) and freeze on top of them (see Fig. 2).

Fig 2: Formation of an ice lolly: water droplet (the circle) collides with an ice crystal (the column) [Credit: Fig. 1a from Keppas et al., (2017)].

Such ice lollies form in ‘mixed-phase clouds’, i.e. clouds made of water droplets and ice crystals and whose temperature is below the freezing point (0°C). At these temperatures, water droplets can be supercooled, meaning that they stay liquid below the freezing point.

Figure 3 below shows the processes and particles involved in the formation of ice lollies. Ice lollies are mainly found at temperatures between 0 and -6°C, in the vicinity of the warm conveyor belt, which represents the main source of warm moist air that feeds the low-pressure system. This warm conveyor belt brings water vapour that participates in the formation and growth of supercooled water droplets. Ice crystals formed near the cloud tops fall through the warm conveyor belt and collide with the water droplets to form ice lollies.

Fig 3: Processes involved with the formation of ice lollies, which mainly form under the warm conveyor belt [Credit: Fig 4 from Keppas et al., (2017)].

Are these ice lollies important?

Ice lollies were observed more recently (September 2016) during another aircraft mission over the northeast Atlantic Ocean, but no radar coverage supported the observations. At the time of writing, the lack of observations prevents us from determining the importance of these ice lollies in the climate system; future missions should provide more insight. In the meantime, we suggest you enjoy a lollipop such as the one shown in this week’s image 🙂

This is a joint post, published together with the Cryospheric division blog, given the interdisciplinarity of the topic.

Edited by Sophie Berger and Dasaraden Mauree

Reference/Further reading

Keppas, S. Ch., J. Crosier, T. W. Choularton, and K. N. Bower (2017), Ice lollies: An ice particle generated in supercooled conveyor belts, Geophys. Res. Lett., 44, doi:10.1002/2017GL073441

 


David Docquier is a post-doctoral researcher at the Earth and Life Institute of Université catholique de Louvain (UCL) in Belgium. He works on the development of process-based sea-ice metrics in order to improve the evaluation of global climate models (GCMs). His study is embedded within the EU Horizon 2020 PRIMAVERA project, which aims at developing a new generation of high-resolution GCMs to better represent the climate.

Black Carbon: the dark side of warming in the Arctic


When it comes to global warming, greenhouse gases – and more specifically CO2 – are most often pointed out. Fewer people know, however, that tiny atmospheric particles called ‘black carbon’ also contribute to the current warming. This post presents a paper my colleague and I recently published in Nature Communications. Our study sheds more light on the chemical make-up of black carbon passing through the Arctic.


Black Carbon warms the climate


Figure 1: Global radiative forcing of CO2 (green) compared to black carbon (blue). The colored bars show the mean change in radiative forcing due to the concentration of CO2 and BC in the atmosphere. The estimated range for the expected radiative forcing is everything between the white lines, which show the 90% confidence interval. (Data according to Boucher et al. 2013 (IPCC 5th AR) and Bond et al. 2013). [Credit: Patrik Winiger]

Black Carbon (BC) originates from incomplete combustion caused by either natural (e.g., wildfires) or human (e.g., diesel car emissions) activities. As the name suggests, BC is a dark particle which absorbs sunlight very efficiently. In scientific terms we call this a strong positive radiative forcing, which means that the presence of BC in the atmosphere helps to heat the planet. Some estimates put its radiative forcing in second place, only after CO2 (Figure 1). The significant thing about BC is that it has a short atmospheric lifetime (days to weeks), meaning we could quickly avoid some climate warming by getting rid of its emissions. Currently, global emissions are increasing year by year, and on snow and ice the dark particles have a longer-lasting effect due to the freeze and thaw cycle, in which BC can re-surface before it is washed away. It is important to note, however, that our main emission reduction focus should target (fossil-fuel) CO2 emissions, because they will affect the climate long after (several centuries) they have been emitted.

Arctic amplification: strongest warming at the North Pole

The Arctic is warming faster than the rest of our planet. Back in 1896, the Swedish scientist Arrhenius (better known for his work in chemistry) calculated that a change in atmospheric CO2 – which at that time was a good 100 ppm lower than today – would change the temperature at higher latitudes (towards the poles) more than at lower latitudes.


Figure 2: Surface temperature anomalies (in °C) for Jan-Mar (2016) with respect to a 1961-1990 baseline. [ Credit: NASA — GISTEMP (accessed 2016-10-15) and Hansen et al., 2010].

The problem with his calculations – as accurate and impressive as they might have been – was that he ignored the Earth’s geography and seemed unaware of the large heat capacity of the oceans. On the southern half of our planet there is a lot more water, which can take up more heat, compared to the northern half with its larger land surface. Thus, in reality, the latitudes of the southern hemisphere have not warmed as much as their northern counterparts, and this effect came to be known as Arctic amplification.

Dark particles on bright snow and ice


Figure 3: Ice covered in cryoconite, Greenland Ice Sheet, in August 2014 [Credit: Jason Box — Dark Snow project].

Greenhouse gases and BC are not the only reasons for the temperature increase and the earlier onset of the melting season in the Arctic. Besides BC, there are other ‘light absorbing impurities’ such as dust, microorganisms, or a mixture of all of the above, better known as cryoconite. They all absorb solar radiation and thus decrease the albedo – the fraction of solar energy reflected back to space – of the underlying white surface. This starts a vicious cycle by which these impurities melt the snow or ice and eventually uncover the usually much darker surface underneath (e.g., rock or open sea water), leading to more solar absorption, and the cycle continues. The effect and composition of these impurities are currently being intensively studied on the Greenland ice sheet (check out the Black and Bloom, as well as the Dark Snow projects).

Black Carbon's effect on climate is highly uncertain

One of the reasons for the high uncertainty of BC’s climate effects is the large range of effects it can have when it interacts with snow and ice, or with clouds and the atmosphere (see the white lines in Figure 1).

Another source of uncertainty is the large estimated range of global, and especially regional, BC emissions in the Arctic. For example, the emission inventory we work with (ECLIPSE) is based on international and national statistics that indicate how much of a certain fuel (diesel, coal, gas, wood, etc.) is used, and in which way it is used (vehicle sizes, machine type and age, operating conditions, etc.). These numbers can vary a lot. If we, for example, line up different inventories of man-made emissions (Figure 4) and compare the two fractions of BC (fossil fuels vs. biomass burning) at different latitudes, we see that the closer we get to the North Pole, the more these inventories disagree. And this is still ignoring atmospheric transport and emissions from natural sources, such as wildfires.

Computer models, necessary to calculate global climate change, are partly based on input from these emission inventories. Models used to calculate the transport of these tiny particles have vastly improved in recent years, but still struggle to accurately reproduce the seasonality or magnitude of the observed BC concentrations. To some extent this is due to the range of parametrizations in the models, mainly of the lifetime of BC, including its removal from the atmosphere by wet scavenging (e.g., rain). So, to better understand black carbon's effects on climate, more model calculations are necessary, for which the emission inventory estimates need to be verified by observations.


Figure 4: Fraction biomass burning of BC (fbb) at different latitudes North, from three different emission inventories. The green line shows the GAINS emission inventory, which was the precursor to the ECLIPSE inventory (Klimont et al. 2016) [Credit: Patrik Winiger]

How do we trace the origin of black carbon?

This is where the science of my colleagues and me comes in. By looking at BC’s stable carbon isotope ratio (12C/13C) and its radiocarbon (14C) content, we were able to deduce information about the combustion sources (Figure 5).

Plants (trees) take up contemporary radiocarbon, naturally present in the atmosphere, through photosynthesis of atmospheric CO2. All living organisms thus have more or less the same relative amount of radiocarbon atoms; we speak of a similar isotopic fingerprint. BC from biomass (wood) burning therefore has a contemporary radiocarbon fingerprint.

When they die, organisms stop incorporating contemporary carbon and the radiocarbon atoms are left to decay. Radiocarbon has a relatively short (at least on geological time scales) half-life of 5730 years, which means that fossils, and consequently BC from fossil fuels, are completely depleted of radiocarbon. This is how the measured radiocarbon content of a BC sample gives us information on the relative contributions of fossil fuels vs. biomass burning.
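The underlying arithmetic is a simple two-endmember mass balance. The Python sketch below illustrates it; the Δ14C endmember values are only indicative (fossil carbon sits at -1000 per mil by definition, contemporary biomass somewhat above 0 per mil), and the exact values used in the study may differ.

```python
# Two-endmember mass balance sketch; the endmember values below are indicative
# only, not necessarily those used in Winiger et al. (2016).
D14C_FOSSIL = -1000.0   # per mil: fossil fuels contain no radiocarbon
D14C_BIOMASS = 150.0    # per mil: contemporary biomass, value assumed here

def fraction_biomass(d14c_sample):
    """Fraction of BC from biomass burning inferred from a sample's Delta-14C."""
    f = (d14c_sample - D14C_FOSSIL) / (D14C_BIOMASS - D14C_FOSSIL)
    return min(1.0, max(0.0, f))   # clip to the physically meaningful range

# Hypothetical winter and summer samples (values invented for illustration):
for label, d14c in [("winter sample", -650.0), ("summer sample", -100.0)]:
    print(f"{label}: Delta-14C = {d14c:+7.1f} per mil -> "
          f"{100 * fraction_biomass(d14c):.0f} % biomass burning")
```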

The stable carbon isotope ratio gives information on the type of combustion source (liquid fossil fuels, coal, gas flaring or biomass burning). Depending on how a certain material is formed (e.g., the geological formation of coal), it has a specific isotopic ratio (of 12C/13C), like a fingerprint. Sometimes isotopic fingerprints can be altered during transport (because of chemical reactions or physical processes like condensation and evaporation). However, BC particles are very resistant to reactions and change only very little. Hence, we expect to see the same fingerprints at the observation site as at the source, except that the isotopic signal at the observation site will be a mixture of different source fingerprints.

Figure 5: Carbon isotopic signatures of different BC sources, summarized by E.N. Kirillova (2013). The isotopic fingerprints are given in delta notation (small delta for the stable carbon ratio 13C/12C, capital delta for radiocarbon, 14C). The values express how much a sample deviates, on a per mil scale, from an internationally agreed isotopic standard ratio for carbon isotopes. [Credit: Fig. 1 from Kirillova (2013)]
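For completeness, the small-delta notation used in Figure 5 is the relative deviation of the sample's isotope ratio from an international standard, expressed in per mil; Δ14C is defined analogously for radiocarbon, with additional standard corrections for fractionation and decay that are omitted here:

```latex
\delta^{13}\mathrm{C}
  = \left(\frac{R_{\mathrm{sample}}}{R_{\mathrm{standard}}} - 1\right)\times 1000\,\text{‰},
\qquad R = {}^{13}\mathrm{C}/{}^{12}\mathrm{C}.
```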

Where does the black carbon in the European Arctic come from?

In our study (Winiger et al., 2016), we measured the concentrations and isotopic sources of airborne BC in the European Arctic (Abisko, Sweden) for over a year, and then compared these observations to model results, using the freely available atmospheric transport model FLEXPART together with emission inventories for natural and man-made BC emissions.
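The comparison itself comes down to standard evaluation metrics on paired time series of observed and modelled BC concentrations. A minimal sketch with made-up monthly values (not the data from the study) could look like this:

```python
import numpy as np

# Hypothetical monthly-mean BC concentrations (ng per cubic metre) at the
# receptor site; the values are made up for illustration, not study data.
observed = np.array([80, 95, 60, 35, 20, 15, 12, 14, 25, 40, 65, 90], float)
modelled = np.array([70, 88, 70, 30, 25, 12, 10, 18, 22, 45, 55, 85], float)

# Pearson correlation: does the model capture the winter-high / summer-low
# seasonality of Arctic BC?
r = np.corrcoef(observed, modelled)[0, 1]

# Normalised mean bias: does the model get the overall magnitude right?
nmb = (modelled.sum() - observed.sum()) / observed.sum()

print(f"correlation r = {r:.2f}, normalised mean bias = {nmb:+.1%}")
```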

Looking at our results, we were first of all surprised at how well the model agreed with our observations. We saw a clear seasonality in the BC concentrations, as has been reported in the literature before, and the model was able to reproduce it. Elevated concentrations were found in winter, a phenomenon sometimes referred to as Arctic haze. The combustion sources showed a strong seasonality as well. The radiocarbon data showed that fossil fuel combustion dominated in winter and (wood) biomass burning dominated during the low BC-burden periods in summer. Combining the stable isotope fingerprints with Bayesian statistics, we further concluded that the major fossil fuel emissions came from liquid fossil fuels (most likely diesel). The model attributed the vast majority of these BC emissions to European sources. Hence, we concluded that the European emissions in the model had to be well constrained, and that the model's parametrization of BC lifetime and wet scavenging had to be fairly accurate for the observed region and period. Our hope is that our work will be implemented in future models of BC effects and taken into account in future BC mitigation scenarios.
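To give an idea of what combining the isotope fingerprints with statistics means in practice, here is a deliberately simplified Monte Carlo version of the isotope mass balance: endmember signatures are drawn from assumed distributions, the three-source system is solved, and only physically meaningful solutions (all fractions between 0 and 1) are kept. The endmember values and the observed signature are illustrative, and this is not the statistical scheme actually used in Winiger et al. (2016).

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical observed isotope signature of one BC sample (per mil);
# illustrative only, not a measurement from the study.
d13C_obs, D14C_obs = -26.0, -350.0

# Assumed endmember signatures (mean, standard deviation), loosely in the
# ranges summarized in Figure 5; again purely illustrative.
endmembers = {
    "biomass":       {"d13C": (-26.7, 0.9), "D14C": (225.0, 60.0)},
    "liquid_fossil": {"d13C": (-25.5, 1.3), "D14C": (-1000.0, 0.0)},
    "coal":          {"d13C": (-23.4, 1.3), "D14C": (-1000.0, 0.0)},
}

accepted = []
for _ in range(20000):
    # Draw one set of endmember signatures and solve the 3-source,
    # 2-isotope mass balance:
    #   sum(f) = 1,  sum(f * d13C) = d13C_obs,  sum(f * D14C) = D14C_obs
    A = np.ones((3, 3))
    for j, src in enumerate(endmembers.values()):
        A[1, j] = rng.normal(*src["d13C"])
        A[2, j] = rng.normal(*src["D14C"])
    b = np.array([1.0, d13C_obs, D14C_obs])
    try:
        f = np.linalg.solve(A, b)
    except np.linalg.LinAlgError:
        continue
    if np.all((f >= 0.0) & (f <= 1.0)):
        accepted.append(f)

accepted = np.array(accepted)
for name, mean, sd in zip(endmembers, accepted.mean(axis=0), accepted.std(axis=0)):
    print(f"{name:>13}: {mean:.2f} +/- {sd:.2f}")
```

The spread of the accepted solutions then serves as a simple uncertainty estimate for each source fraction.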

Figure 6: Example from the model calculations, showing where the (man-made) BC came from in January 2012. Abisko’s position is marked as a blue star. The darker (red) spots show sources of higher BC contribution. This winter example was among the three highest observed (in terms of BC concentration) and the sources were ~50% wood burning, ~20% liquid fossil fuels (diesel) and ~30% coal. Some of the darkest spots can clearly be attributed to European cities. [Credit: fig4b from Winiger et al (2016)]

References

  • Anderson, T. R., Hawkins, E., and Jones, P. D. (2016). CO2, the greenhouse effect and global warming: from the pioneering work of Arrhenius and Callendar to today’s Earth System Models. Endeavour, in press, doi:10.1016/j.endeavour.2016.07.002.
  • Arrhenius, S. (1896). On the influence of carbonic acid in the air upon the temperature of the ground. Philos. Mag. J. Sci., 41, 239–276, doi:10.1080/14786449608620846.
  • Hansen, J., Ruedy, R., Sato, M., and Lo, K. (2010). Global surface temperature change. Rev. Geophys., 48(4), RG4004, doi:10.1029/2010RG000345.
  • Klimont, Z., Kupiainen, K., Heyes, C., Purohit, P., Cofala, J., Rafaj, P., Borken-Kleefeld, J., and Schöpp, W. (2016). Global anthropogenic emissions of particulate matter including black carbon. Atmos. Chem. Phys. Discuss., doi:10.5194/acp-2016-880, in review.
  • Kirillova, E. N. (2013). Dual isotope (13C–14C) studies of water-soluble organic carbon (WSOC) aerosols in South and East Asia. ISBN 978-91-7447-696-5, pp. 1–37.
  • Winiger, P., Andersson, A., Eckhardt, S., Stohl, A., and Gustafsson, Ö. (2016). The sources of atmospheric black carbon at a European gateway to the Arctic. Nature Communications, 7.

Edited by Sophie Berger, Dasaraden Mauree and Emma Smith
This is a joint post with the Cryospheric Division, given the interdisciplinarity of the topic featured.


Patrik Winiger is a PhD student at the Department of Environmental Science and Analytical Chemistry and the Bolin Centre for Climate Research at Stockholm University. His research focuses on the impact and mitigation of short-lived climate pollutants and anthropogenic CO2 emissions. He currently investigates the sources of black carbon aerosols in the Arctic. He tweets as @PatrikWiniger


Science Communication – Brexit, Climate Change and the Bluedot Festival

Earlier this summer, journalists, broadcasters, writers and scientists gathered in Manchester, UK for the Third European Conference of Science Journalists (ECSJ), arranged by two prestigious organisations. The first is the Association of British Science Writers (ABSW), which supports those who write about science and technology in the UK through debates, events and awards. The second is the European Union of Science Journalists’ Associations (EUSJA), which represents 2,500 science journalists from 23 national associations in 20 European countries. EUSJA promotes scientific and technical communication between the international scientific community and journalists, mainly by organising events and workshops and by working with the European Commission in the interest of science and society.

The pre-conference networking event began a towering twenty-three floors above the ground at the highest Champagne bar in Manchester. Here the delegates were introduced to the notion of Manchester as a hub for science with a fly-by video of the ‘science quarter’. This fans outwards from the library at the heart of the city towards the south, like a segment of an orange, engulfing an impressive two universities, several hospitals and a science park.

Looking towards the south west of the city from “Cloud 23”, the bar on the 23rd floor of the Beetham Tower, Manchester, UK. Image credit: David Dixon

The following morning the conference began at the Manchester Conference Complex in the city centre, focusing on contemporary issues in science journalism and skills for professional development. The panellists and chairs were a mixture of academics and journalists of a range of nationalities, all with experience in the field of science communication. They first discussed each topic amongst themselves before the conversation was opened to all-comers.

The opening plenary began a discussion on how independent Europe’s science news is and how it can be hijacked by vested interests. For example, if a person writes a scientific story for a newspaper, are they biased by being paid by an external company to write that story? There was a consensus for openness about the funding behind news stories, so that the merits of a story lie in how impartial the writer is perceived to be as well as in the content itself.

Although the second session of the day focussed on the reporting of EU-funded science (an 80-billion-euro question), it also gave delegates a chance to mention the elephant in the room: Brexit, the will of a spectrum of people from across the political horseshoe for the UK to leave the EU. There was a feeling from the panel that the EU funding structure has allowed science to work on projects that are not just commercially viable in the short term. The large benefit of cross-country collaboration in Europe was stressed repeatedly, alongside the general acknowledgement that in research the country of origin of individual researchers becomes irrelevant. These thoughts played into the much-discussed post-Brexit questions: will it become harder for the UK to access EU science funding given its determination to curb net migration, and what exemptions (from the UK and the EU) can be made for experts in their field? It was also discussed how the UK’s share of EU science funding may be divided up amongst other EU countries in the future. The discussion ended with a series of brief historical anecdotes about governments that favoured local policies and competition, which have tended to derail international collaboration. The main point was that research continued, but the job was made harder.

The next two sessions focussed on starting a new publication and on pitching an idea to an existing publication – essentially a sales pitch. Using case studies of a range of recent start-up publications, it was concluded that what matters most when creating a publication is focus, editorial quality, being online, design, collaboration and content.

The closing plenary was concerned with how to work for media that are sceptical about climate change – a situation in which a science communicator may be forced to go along with an editorial line against their own conviction, a conviction shared by the majority of scientists worldwide, who agree that climate change is happening but debate the rate at which it is occurring. A good recommendation was not to preach but to state facts. The dangers of saying something sceptical as a way into the topic were debated; it was thought that this may backfire if the sceptical comment is later quoted as an expert opinion. At the end of the session the delegates reflected that, if the building blocks of climate change were first conceived in 1896, it is amazing to think that the topic is still controversial 120 years later!

For the final event of the day, the delegates made their way to the Bluedot festival at Jodrell Bank – a festival of music, science, arts, technology, culture, food and film in the shadow of the Lovell Telescope, which was illuminated by Brian Eno using large-scale projections to create a visual installation. Here the ABSW Science Writers’ Awards were hosted, and this blog was shortlisted for an award. It provided another great opportunity to network at this thought-provoking conference.
