AS
Atmospheric Sciences
Dasaraden Mauree

Guest

Dasaraden Mauree is a scientist at EPFL. He has a Ph.D. in Earth and Universe Sciences from the Université de Strasbourg. He uses computer models for his research, which is now focused on urban climate, building energy simulation, energy efficiency at the urban scale and urban energy systems. He still has a strong interest in climate modelling, particularly with respect to land-use changes. He strongly believes in open research and in supporting outreach programmes. Dasaraden tweets at @D_Mauree.

A simple model of convection to study the atmospheric surface layer


Since being immortalised in Hollywood film, “the butterfly effect” has become a commonplace concept, despite its obscure origins. Its name derives from an object known as the Lorenz attractor, which has the form of a pair of butterfly wings (Fig. 1). It is a portrait of chaos, the underlying principle hindering long-term weather prediction: just a small change in initial conditions leads to vastly different outcomes in the long run.


Figure 1: The Lorenz attractor.

The three-equation system that gives rise to the Lorenz attractor is often referred to as a simple model of atmospheric convection, yet amongst the atmospheric science community, attention is rarely paid to the original fluid flow that the Lorenz equations describe. Consisting of a fluid layer heated from below and cooled from above, Rayleigh-Bénard convection (Fig. 2) is a hallmark flow beloved by fluid dynamicists and mathematicians alike for its analytical tractability, yet rich behaviour. It is often cited as being of immediate relevance for many geophysical and astrophysical flows [1]. The success of turbulent Rayleigh-Bénard convection in leading to our understanding of chaos, as exemplified by the Lorenz attractor, suggests the enticing possibility of gaining other key conceptual insights into the behaviour of the Earth’s atmosphere through the use of this simple convective system.
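The three Lorenz equations themselves are compact enough to integrate in a few lines. The sketch below is a minimal illustration, not the setup of any particular study: it uses the standard parameter choices (σ = 10, ρ = 28, β = 8/3) and a basic Runge-Kutta integrator to show the hallmark of chaos, namely that two trajectories starting a hair's breadth apart end up in completely different places.

```python
import numpy as np

def lorenz(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    # Right-hand side of the Lorenz (1963) system.
    x, y, z = state
    return np.array([sigma * (y - x),
                     x * (rho - z) - y,
                     x * y - beta * z])

def rk4_step(f, state, dt):
    # One classical fourth-order Runge-Kutta step.
    k1 = f(state)
    k2 = f(state + 0.5 * dt * k1)
    k3 = f(state + 0.5 * dt * k2)
    k4 = f(state + dt * k3)
    return state + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

def trajectory(state0, n_steps=4000, dt=0.01):
    # Integrate and store the whole path (the "butterfly wings").
    path = np.empty((n_steps, 3))
    s = np.asarray(state0, dtype=float)
    for i in range(n_steps):
        s = rk4_step(lorenz, s, dt)
        path[i] = s
    return path

# Two runs whose starting points differ by one part in 10^8...
a = trajectory([1.0, 1.0, 1.0])
b = trajectory([1.0, 1.0, 1.0 + 1e-8])
# ...nevertheless diverge completely: the butterfly effect.
```

Plotting either trajectory in three dimensions reproduces the butterfly-wing portrait of Figure 1; comparing the two end points illustrates why a tiny error in the initial state ruins a long-range forecast.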

 

Figure 2: Schematic of Rayleigh-Bénard convection.

In a recent study [2] we explored this potential by investigating to what extent turbulent Rayleigh-Bénard convection serves as an analogue of the daytime atmospheric boundary layer, also known as the convective boundary layer (CBL). In particular, we investigated whether statistical properties in the surface layer develop with height in a similar way in both systems. The surface layer is typically just a few tens of metres thick, but due to the strong turbulent mixing that takes place there, it is of primary importance for the development of the boundary layer. The surface boundary conditions of Rayleigh-Bénard convection and the CBL are the same, which might lead one to think that surface-layer properties should behave similarly in both cases. However, differences in the upper boundary conditions between the two systems modify the large-scale circulations that appear in both systems and this may have an impact in the surface layer.

Indeed, despite the much-heralded relevance of Rayleigh-Bénard convection to geophysical flows, we find that its cooled upper plate modifies the large-scale structures in such a way that it substantially alters the behaviour of near-surface properties compared to the CBL. In particular, the downdrafts in Rayleigh-Bénard convection are considerably stronger than in the CBL and their impingement into the surface layer changes how velocity and temperature statistics develop with height.

However, we also find that just an incremental change to the upper boundary condition of Rayleigh-Bénard convection is needed to closely match surface-layer statistics in the CBL. If instead of being cooled, the upper plate is made adiabatic, i.e. no heat is allowed to escape (Fig. 3), the influence of the strong, cold downdrafts is removed, resulting in surface-layer similarity between this modified version of Rayleigh-Bénard convection and the CBL. Rayleigh-Bénard convection with an adiabatic top lid has the advantage that it is a simpler experimental set-up than the CBL and provides a longer statistically steady state, allowing for greater statistical convergence to be achieved through long-time averaging.

Figure 3: Schematic of the modified version of Rayleigh-Bénard convection with an adiabatic top lid.

In the long term, the classical Rayleigh-Bénard system will continue to serve as a paradigm for studies of natural convection, though we are increasingly beginning to see that its practical application to geophysical and astrophysical [3] flows may not be as straightforward as past literature seems to suggest.

 References

[1] A. Pandey, J. Scheel, and J. Schumacher. Turbulent superstructures in Rayleigh-Bénard convection. Nature Communications, 9:2118, 2018.

[2] K. Fodor, J. P. Mellado, and M. Wilczek. On the Role of Large-Scale Updrafts and Downdrafts in Deviations From Monin-Obukhov Similarity Theory in Free Convection. Boundary-Layer Meteorology, 2019.

[3] F. Wilczynski, D. Hughes, S. Van Loo, W. Arter, and F. Militello. Stability of scrape-off layer plasma: a modified Rayleigh-Bénard problem. Physics of Plasmas, 26:022510, 2019.

Edited by Dasaraden Mauree


Katherine Fodor is a Ph.D. candidate at the Max Planck Institute for Meteorology in Hamburg, Germany. She uses very high resolution computer simulations to study turbulence in the atmosphere. In particular, her research concerns interactions between large-scale structures and small-scale turbulence. You can find her on Twitter @FodorKatherine where, in addition to science, she also tweets about cycling.

 

 

A brighter future for the Arctic


This is a follow-up to a previous publication. Recently, a new analysis of the impact of black carbon in the Arctic was conducted within a European Union Action.

“Difficulty in evaluating, or even discerning, a particular landscape is related to the distance a culture has traveled from its own ancestral landscape. As temperate-zone people, we have long been ill-disposed toward deserts and expanses of tundra and ice. They have been wastelands for us; historically we have not cared at all what happened in them or to them. I am inclined to think, however, that this landscape is able to expose in startling ways the complacency of our thoughts about land in general. Its unfamiliar rhythms point up the narrow impetuosity of Western schedules, by simply changing the basis of the length of the day. And the periodically frozen Arctic Ocean is at present an insurmountable impediment to timely shipping. This land, for some, is irritatingly and uncharacteristically uncooperative.”

            -Barry Lopez, Arctic Dreams, 1986


Study

Back in the 1980s, the Arctic was a different place. It is one of the fastest-changing regions of our planet, and since then Arctic sea ice volume has more than halved (Figure 1). Our study took place in the 2010s, when the Arctic moved into a new regime and sea ice volume showed unprecedented lows. In the three years since our study ended, this decline has continued.

Figure 1: Satellite-era (1979-2018) Arctic ice data. Left: Arctic sea ice volume, in 1000 km³. Right: Normalized Arctic sea ice extent. (Credit: Zachary Labe, Department of Earth System Science, University of California, Irvine, @ZLabe)

Over four years, we collected small airborne particles at five different sites around the Arctic, for one to two years per site. Later, we measured the concentrations and isotopic sources of black carbon (BC) aerosols, a product of incomplete combustion of biomass and fossil fuels, and a subfraction of the total collected aerosol.

All living organisms have more or less the same relative amount of radiocarbon atoms; we call this a similar 'isotopic fingerprint'. Through photosynthesis, plants take up CO2. About 1 in 10¹² CO2 molecules contains the naturally occurring (but unstable) radiocarbon atom (14C), which is formed high up in the atmosphere through solar radiation. Black carbon from biomass burning thereby has a contemporary radiocarbon fingerprint. When a plant dies, its radiocarbon atoms are left to decay and no new radiocarbon is built in. Radiocarbon's half-life is 5730 years, which means that fossils, and consequently soot from fossil fuels, are completely depleted of radiocarbon.
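The idea behind this kind of source apportionment can be written as a simple two-endmember mass balance: fossil black carbon carries no 14C at all, while biomass-burning black carbon carries a modern 14C signature. The sketch below is illustrative only; in particular, the biomass endmember of 112 pMC (percent modern carbon) is an assumed round value, not the calibrated endmember used in the study.

```python
def radiocarbon_remaining(age_years, half_life=5730.0):
    # Fraction of the original 14C left after age_years of decay.
    return 0.5 ** (age_years / half_life)

def biomass_fraction(pmc_sample, pmc_biomass=112.0):
    # Two-endmember isotope mass balance for black carbon:
    # fossil BC contains no 14C (0 pMC), while contemporary
    # biomass-burning BC carries a modern signature.  The endmember
    # of 112 pMC (slightly above 100 because of bomb-peak carbon
    # stored in wood) is an illustrative assumption.
    f_bb = pmc_sample / pmc_biomass
    return min(max(f_bb, 0.0), 1.0)  # clamp to the physical range

# Example: a sample measured at 28 pMC would imply ~25% biomass
# burning, i.e. ~75% fossil sources -- the kind of winter value
# reported later in this post.
f_bb = biomass_fraction(28.0)
fossil = 1.0 - f_bb
```

The decay function also shows why fossil fuels are 14C-free: after a million years, 0.5^(1e6/5730) leaves essentially nothing of the original radiocarbon.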

For the same periods and sites of our observations (see Figure 2), we also simulated black carbon concentration and sources.

Figure 2: Observational sites of our study. Clockwise from top: Utqiaġvik (formerly known as Barrow, Alaska), Tiksi Observatory (Siberia), Zeppelin Observatory (Svalbard), Abisko (Sweden), and Alert (Canada).

This was done with an atmospheric transport model (FLEXPART), using emission inventory data for fossil and biofuels (ECLIPSE) and for biomass burning (GFED) (see Figure 3). Emission inventories like ECLIPSE calculate emissions of air pollutants and greenhouse gases in a consistent framework. They rely on international and national statistics of the amount of energy sources consumed by, e.g., energy use, industrial production, and agricultural activities. GFED uses MODIS satellite measurements of daily burnt area. This is used – together with 'emission factors' (i.e. the amount of emitted species per unit of consumed energy source) – to calculate emissions of several different gas and particle species. The details and methodologies have also been described in a previous EGU ASxCR blog post.

Figure 3: Model setup used in our study. Anthropogenic emissions of BC are from IIASA’s ECLIPSE emission inventory and biomass burning (wild fire and agricultural fires) are from the Global Fire Emission Database (GFED). The atmospheric transport model itself is FLEXPART, developed by NILU in Norway.

Black carbon, a short-lived climate pollutant (SLCP), is the second or, more likely, third largest warming agent in the atmosphere after the greenhouse gases carbon dioxide and methane. Unlike for these two gases, it is less clear how big the net warming effect of BC is. Several open questions lead to the current uncertainty: 1. How much BC is actually put into the atmosphere? 2. How long does it stay in the air and where is it located? 3. Where from and where to is it transported, and where and when is it deposited? 4. How does it affect the Earth's radiative balance by darkening snow and ice, and, most importantly of all, how does it interact with clouds? We have a fair understanding of all these processes, but relatively large uncertainties remain to be resolved. Depending on how much BC is in the air and where it is located in the atmosphere, it can have different effects (e.g., strong warming, warming, or even cooling). And all these things need to be measured, and simulated correctly by computer models.

Current multi-model best estimates by the Arctic Monitoring and Assessment Programme say that BC leads to increases of Arctic surface temperature of 0.6°C (0.4°C from BC in the atmosphere and 0.2°C from BC in snow) based on their radiative forcing (see Figure 4).

Figure 4: Radiative forcing for all greenhouse gases (GHG), carbon dioxide (CO2), methane (CH4), and black carbon (BC). All numbers are global estimates, except the last bar to the right, which is for the Arctic only. Data according to the IPCC 5th Assessment Report (2013), an extensive review on black carbon (Bond et al. 2013), and best estimates by the Arctic Monitoring and Assessment Programme (AMAP 2015). Ranges of uncertainty (where available) are shown as white vertical lines.

It is important to note, however, that our main focus in emission reduction should be (fossil-fuel) CO2 emissions, because they will affect the climate long after (for several centuries) they have been emitted. Reducing these sources reduces soot as well, since soot is also a combustion product. Reductions that target soot specifically can be achieved by installing particulate filters (retrofitting old engines and stringent standards for new vehicles), shifting to cleaner fuels and burning techniques, or introducing and enforcing inspection and maintenance programs to ensure compliance with already existing legislation.

It is recognized internationally that for effective implementation of the Paris Agreement (the mitigation effort to hold the average global temperature well below 2°C above preindustrial levels), mitigation measures for short-lived climate pollutants (such as BC and methane) need to be considered. As the Arctic environment is more sensitive to climate change, knowing exactly which origins (source types and regions) contribute to black carbon in this part of the world is important for effective mitigation measures.

Source attributions of black carbon depend on the altitude at which the aerosol is located at the time of measurement or modelling. Wildfires are known to contribute more at higher elevations during the fire seasons (Paris et al., 2009) than at the Arctic surface. Several global chemical models have already approximately predicted the proportions of source influence, but their accuracy depends on the emission input and the performance of the model. Part of the problem is that these models get their input from emission inventories. These inventories tell the model where, when and how much black carbon is emitted, rather like instructions from a cookbook.

But the different cookbooks don't agree on the amount of black carbon that goes into our annual black carbon cake. Additionally, all the different cookbooks have different recipes for different years. If we take a best estimate of global black carbon emissions, our annual cake has about the size (and weight, because of the similar densities of limestone/granite and soot) of the Great Pyramid of Giza (7500 gigagrams). But the estimates vary immensely (2000-29000 gigagrams) (see Figure 5). And these numbers are only for man-made emissions (fossil fuels and biofuels), i.e. excluding wildfires and natural biomass burning. A recent multi-model analysis puts global annual BC fire emissions between 1000 and 6000 gigagrams. To correct these models and the emission inventories, we rely on observational data to validate the model results.

Figure 5: Uncertainty in global annual BC emissions ranges from 2000 to 29000 Gg (according to Bond et al. 2013).

The model set-up we used did very well in simulating soot concentrations, and somewhat less well in simulating fuel types (sources) – better for fossil fuels than for biofuels and biomass burning. The model simulated that 90% of BC emissions (by mass) reaching surface level in the Arctic originated from countries north of 42°N.

In our isotope measurements, we found that black carbon sources had a strong seasonality, with a high contribution of fossil fuels to black carbon in winter (75%) and a moderate one (60%) in summer. Black carbon concentrations were roughly four times higher in winter than in summer. Concentrations of black carbon also differed considerably between stations. These surface-level (<500 m above sea level) pan-Arctic results, based on our 14C method, were not very surprising. Results from a few of the individual locations used in our latest study have previously been published, with similar sources (e.g., Barrett et al., 2015; Winiger et al., 2015, 2016). However, the sources in our study were relatively uniform across all stations and almost in seasonal sync with each other (high fossil contribution in winter, low in summer). This could have important implications for policy-related questions.

Uniform sources could mean that mitigation measures could have a stronger impact, if the right sources are tackled at the right time, to keep the Arctic from becoming a small ice floe, not large enough to stand on. There could be brighter days ahead of us.

Edited by Dasaraden Mauree


Patrik Winiger is Research Manager at ETH Zürich and a guest researcher at the Department of Earth Sciences, Vrije Universiteit Amsterdam. His research focuses on the sources and impact of natural and anthropogenic short-lived climate pollutants and greenhouse gases. He tweets as @PatrikWiniger.

 

 

The puzzle of high Arctic aerosols


Current Position: 86°24’ N, 13°29’E (17th September 2018)

The Arctic Ocean 2018 Expedition drifted for 33 days in the high Arctic and is now heading back south to Tromsø, Norway. With continuous aerosol observations, we hope to be able to add new pieces to the high Arctic aerosol puzzle to create a more complete picture that can help us to improve our understanding of the surface energy budget in the region.

Cruise track to the high Arctic with the 33 day drift period. (Credits: Ian Brooks)

In recent years, considerable efforts have been undertaken to study Arctic aerosol. However, there are many facets to Arctic aerosol so that different kinds of study designs are necessary to capture the full picture. Just to name a few efforts, during the International Polar Year in 2008, flight campaigns over the North American and western European Arctic studied the northward transport of pollution plumes in spring and summer time [1,2,3]. More survey-oriented flights (PAMARCMIP) have been carried out over several years and seasons [4] around the western Arctic coasts. The NETCARE campaigns [5] have studied summertime Canadian Arctic aerosol in the marginal ice zone. And the Arctic Monitoring and Assessment Programme (AMAP) has issued reports on the radiative forcing of anthropogenic aerosol in the Arctic [6,7].

These and many other studies have advanced our understanding of Arctic aerosol substantially. Since the 1950s, we have been aware of the Arctic Haze phenomenon: the accumulation of air pollution, emitted from high-latitude sources, in the Arctic during winter and early spring. In these seasons, the Arctic atmosphere is very stratified, air masses are trapped under the so-called polar dome and atmospheric cleansing processes are minimal. In springtime, with sunlight, when the Arctic atmosphere becomes more dynamic, the Arctic Haze dissolves through air mass movement and precipitation. Then, long-range transport from the mid-latitudes can be a source of Arctic aerosol. This includes anthropogenic as well as forest fire emissions. The latest AMAP assessment report [6] has estimated that the direct radiative forcing of current global black and organic carbon as well as sulfur emissions leads to a total Arctic equilibrium surface temperature response of 0.35 °C. While black carbon has a warming effect, organic carbon and particulate sulfate cool. Hence, over the past decades, the reductions in sulfur emissions from Europe and North America have led to less cooling from air pollution in the Arctic [8]. Currently, much effort is invested in understanding new Arctic emission sources that might contribute to the black carbon burden in the future, for example oil and gas facilities or shipping [9, 10, 11].

These studies contribute to a more thorough understanding of direct radiative effects from anthropogenic aerosol and fire emissions transported to the Arctic. However, neither long-range transported aerosol nor emissions within the lower Arctic contribute substantially to the aerosol found in the boundary layer of the high Arctic [12]. These particles are emitted in locations with warmer temperatures and these air masses travel north along isentropes that rise in altitude the further north they go. The high Arctic boundary layer aerosol, however, is important because it modulates the radiative properties of the persistent Arctic low-level clouds that are decisive for the surface energy budget (see first Arctic Ocean blog in August 2018).

Currently, knowledge about sources and properties of high Arctic aerosol as well as their interactions with clouds is very limited, mainly because observations in the high Arctic are very rare. In principle, there are four main processes that shape the aerosol population in the high north: a) primary sea spray aerosol production from open water areas including open leads in the pack ice area, b) new particle formation, c) horizontal and vertical transport of natural and anthropogenic particles, and d) resuspension of particles from the snow and ice surface (snowflakes, frost flowers etc.). From previous studies, especially in the marginal ice zone and land-based Arctic observatories, we know that microbial emissions of dimethyl sulfide and volatile organic compounds are an important source of secondary aerosol species such as particulate sulfate or organics [13]. The marginal ice zone has also been identified as potential source region for new particle formation [14]. What is not known is to which degree these particles are transported further north. Several scavenging processes can occur during transport. These include coagulation of smaller particles to form larger particles, loss of smaller particles during cloud processing, precipitation of particles that acted as cloud condensation nuclei or ice nucleating particles, or sedimentation of large particles to the surface.

Further north in the pack ice, the biological activity is thought to be different compared to the marginal ice zone, because it is limited by the availability of nutrients and light under the ice. Hence, local natural emissions in the high Arctic are expected to be lower. Similarly, since open water areas are smaller, the contribution of primary marine aerosol is expected to be lower. In addition, the sources of compounds for new particle formation that far north are not very well researched.

To understand some of these sources and their relevance to cloud properties, an international team is currently measuring the aerosol chemical and microphysical properties in detail during the Arctic Ocean 2018 expedition on board the Swedish icebreaker Oden. It is the fifth expedition in a series of high Arctic field campaigns on the same icebreaker. Previous campaigns took place in 1991, 1996, 2001 and 2008 (see refs [15, 16, 17, 18] and references therein).

The picture below describes the various types of air inlets and cloud probes that are used to sample ambient aerosol particles and cloud droplets or ice crystals. A large suite of instrumentation is used to determine in high detail the particle number concentrations and size distribution of particles in the diameter range between 2 nm and 20 µm. Several aerosol mass spectrometers help us to identify the chemical composition of particles between 15 nm and 1 µm as well as the clusters and ions that contribute to new particle formation. Filter samples of particles smaller than 10 µm will allow a detailed determination of chemical components of coarse particles. They will also give a visual impression of the nature of particles through electron microscopy. Filter samples are also used for the determination of ice nucleation particles at different temperatures. Cloud condensation nuclei counters provide information on the ability of particles to form cloud droplets. A multi-parameter bioaerosol spectrometer measures the number, shape and fluorescence of particles. Further instruments such as black carbon and gas monitors help us to distinguish pristine air masses from long-range pollution transport as well as from the influence of the ship exhaust. We can distinguish and characterize the particle populations that do or do not influence low-level Arctic clouds and fogs in detail by using three different inlets: i) a total inlet, which samples all aerosol particles and cloud droplets/ice crystals, ii) an interstitial inlet, which selectively samples particles that do not form droplets when we are situated inside fog or clouds, and iii) a counterflow virtual impactor inlet (CVI), which samples only cloud droplets or ice crystals (neglecting non-activated aerosol particles). The cloud droplets or ice crystals sampled by the CVI inlet are then dried and thus only the cloud residuals (or cloud condensation nuclei) are characterized in the laboratory situated below.

Inlet and cloud probe set-up for aerosol and droplet measurements installed on the 4th deck on board the icebreaker Oden. From left to right: inlets for particulate matter smaller than 1 µm (PM1) and smaller than 10 µm (PM10); forward scattering spectrometer probe (FSSP) for droplet size distribution measurements; counterflow virtual impactor inlet (CVI) for sampling cloud droplets and ice crystals; total inlet for sampling of all aerosol particles and cloud droplets/ice crystals; interstitial inlet for sampling non-activated particles; particle volume monitor (PVM) for the determination of cloud liquid water content and effective droplet radius. Newly formed, very small particles are sampled with a different inlet (not shown in the picture) specifically designed to minimize diffusion losses. (Picture credit: Paul Zieger)

To gain more knowledge about the chemical composition and ice nucleating activity of particles in clouds, we also collect cloud and fog water on the uppermost deck of the ship and from clouds further aloft by using tethered balloon systems. When doing vertical profiles with two tethered balloons, particle number concentration and size distribution information is also obtained, to understand to what extent the boundary-layer aerosol is mixed with the cloud-level aerosol. Furthermore, a floating aerosol chamber is operated at an open lead near the ship to measure the fluxes of particles from the water to the atmosphere. It is still unknown whether open leads are a significant source of particles. For more details on the general set-up of the expedition see the first two blogs of the Arctic Ocean Expedition (here and here).

After 33 days of continuous measurements while drifting with the ice floe and after having experienced the partial freeze-up of the melt ponds and open water areas, it is now time for the expedition to head back south. We will use two stations in the marginal ice zone during the transit into and out of the pack ice as benchmarks for Arctic aerosol characteristics south of our 5-week ice floe station.

As Oden is working her way back through the ice and the expedition comes to an end, we recapitulate what we have measured in the past weeks. What was striking, especially for those who have already spent several summers in the pack ice, is that this time the weather was very variable. There were hardly two days in a row with stable conditions. Instead, one low pressure system after the other passed over us, skies changed from bright blue to pale grey, calm winds to storms… On average, we have experienced the same number of days with fog, clouds and sunshine as previous expeditions, but the rhythm was clearly different. From an aerosol perspective, these conditions meant that we were able to sample a wide variety of characteristics, including new particle formation, an absence of cloud condensation nuclei with total number concentrations as low as 2 particles per cubic centimetre, coarse-mode particles, and size distributions with a Hoppel minimum that is typical of cloud-processed particles.

Coming back home, we can hardly wait to fully exploit our recorded datasets. Stay tuned!

Do not hesitate to contact us with any questions regarding the expedition and measurements. Check out this blog for more details of life during the expedition, and our project website, which is part of the Arctic Ocean 2018 expedition.

Changing Arctic landscapes. From top to bottom: Upon arrival at the drift station there were many open leads. Storms pushed the floes together and partially closed leads. Mild and misty weather. Cold days and sunshine lead to freeze-up. (Credit: Julia Schmale)

Edited by Dasaraden Mauree


The authors from left to right: Andrea Baccarini, Julia Schmale, Paul Zieger

Julia Schmale is a scientist in the Laboratory of Atmospheric Chemistry at the Paul Scherrer Institute, Switzerland. She has been involved in Arctic aerosol research for the past 10 years.

Andrea Baccarini is doing his Ph.D. in the Laboratory of Atmospheric Chemistry at the Paul Scherrer Institute, Switzerland. He specializes in new particle formation in polar regions.

Paul Zieger is an Assistant Professor in Atmospheric Sciences at Stockholm University, Sweden. He specializes in experimental techniques for studying atmospheric aerosols and clouds at high latitudes.

The perfect ice floe


Current position: 89°31.85 N, 62°0.45 E, drifting with a multi-year ice floe (24th August 2018)

A little more than three weeks into the Arctic Ocean 2018 Expedition, the team has found the right ice floe and settled down to routine operations.

Finding the perfect ice floe for an interdisciplinary science cruise is not an easy task. The Arctic Ocean 2018 Expedition aims to understand the linkages between the sea, microbial life, the chemical composition of the lower atmosphere and clouds (see previous blog entry) in the high Arctic. This means that the "perfect floe" needs to serve a multitude of scientific activities that involve sampling from open water, drilling ice cores, setting up a meteorological tower, installing balloons, driving a remotely operated vehicle, measuring fluxes from open leads and sampling air uncontaminated by the expedition activities. The floe hence needs to be composed of multi-year ice, be thick enough to carry all installations but not so thick as to prevent drilling through it. There should also be an open lead large enough for floating platforms, and the shape of the floe needs to be such that the icebreaker can be moored against it on the port or starboard side facing any of the four cardinal directions, depending on where the wind is coming from.

The search for the ice floe turned out to be more challenging than expected. The tricky task was not only to find a floe that would satisfy all scientific needs; getting to it north of 89°N proved exceptionally difficult this year. After passing the marginal ice zone north of Svalbard (see the blue line on the track, Figure 2), progress through the first-year ice was relatively easy: advancing at roughly 6 knots, that is about 12 km/h, we made quick headway. After a couple of days, however, the ice became unexpectedly thick, up to three meters, which made progress difficult and slow, even for Oden with her 24,500 horsepower. In such conditions the strategy is to send a helicopter ahead to scout for a convenient route through cracks and thinner ice. However, persistent fog kept the pilot from taking off, which meant the expedition had to sit and wait in the same spot. For us aerosol scientists looking at aerosol-cloud interactions, this was a welcome occasion to get our hands on the first exciting data. In the meantime, strong winds from the east pushed the pack ice together even harder, producing ridges that are hard to overcome with the ship. But with a bit of patience and improved weather conditions, we progressed northwards, keeping our eyes open for the floe.

Figure 2: Cruise track with drift. The light red line indicates the track to the ice floe, the dark red line indicates the drift with the floe. The thin blue line is the marginal ice zone from the beginning of August.

As it happened, we met insurmountable ice conditions at 89°54’ N, 38°32’ E, just about 12 km from the North Pole – reason enough to celebrate our farthest north.

Figure 3: Expedition picture at the North Pole. (Credit: SPRS)

Going back south from there, it took just a bit more than a day, with helicopter flights and good visibility, until we finally found ice conditions featuring multiple suitable floes.

And here we are. After a week of intense mobilization on the floe, the four sites on the ice and the instrumentation on the ship are now in full operation and routine, if you stretch the meaning of the term a bit, has taken over. A normal day looks approximately like this:

7:45: breakfast, meteorological briefing, information about the plan of the day
8:30–9:00: heavy lifting of material from the ship to the ice floe with the crane
9:00 (or later): weather permitting, teams go to their sites; CTDs are cast from the ship if the aft is not covered by ice
11:45: lunch for all on board and picnic on the floe
17:30: end of the day's activities on the ice; lifting of the gangway to prevent polar bear visits on the ship
17:45: dinner
Evening: science meetings, data crunching, lab work or recreation

Figure 4: Sites on the floe, nearby the ship. (Credit: Mario Hoppmann)

At the balloon site, about 200 m from the ship, one balloon and one heli-kite are lifted alternately to take profiles of radiation, basic meteorological variables and aerosol concentrations. Other instruments are lifted up to sit for hours in and above clouds to sample cloud water and ice-nucleating particles, respectively. At the met alley, a 15 m tall mast carries radiation and flux instrumentation to characterize heat fluxes in the boundary layer. The red tent at the remotely operated vehicle (ROV) site houses a pool through which the ROV dives under the floe to measure physical properties of the water. The longest walk, about 20 minutes, is to the open lead site, where a catamaran takes sea surface microlayer samples, a floating platform observes aerosol production and cameras image underwater bubbles. The ice core drilling team visits different sites on the floe to take samples for microbial and halogen analyses.

Open Lead site. (Credit: Julia Schmale)

Importantly, all activities on the ice need to be accompanied by bear guards. Everybody carries a radio and needs to report when they go off the ship and come back. If the visibility decreases, all need to come in for safety reasons. Lab work and continuous measurements on the ship happen throughout the day and night. More details on the ship-based aerosol laboratory follow in the next contribution.

Edited by Dasaraden Mauree


Julia Schmale is an atmospheric scientist at the Paul Scherrer Institute in Switzerland. Her research focuses on aerosol-cloud interactions in extreme environments. She is a member of the Atmosphere Working Group of the International Arctic Science Committee and a member of the Arctic Monitoring and Assessment Programme Expert Group on Short-lived Climate Forcers.

How can we use meteorological models to improve building energy simulations?


Climate change calls for varied and multiple approaches to adapting cities and mitigating the coming changes. Because buildings (residential and commercial) are responsible for about 40% of energy consumption, it is necessary to build more energy-efficient ones to decrease their contribution to greenhouse gas emissions.

But what is the relation with the atmosphere? It is twofold: firstly, in a previous post, I have already described the impact of buildings and obstacles on the air flow and on the air temperature. Secondly, because the climate and surrounding environment are modified, there will be a significant change in the energy consumption of these buildings. Currently, building energy simulation tools use data usually gathered outside of the city and hence not representative of the local context. It is thus crucial to have tools that capture both the dynamics of the atmosphere and those of a building, in order to design better and more sustainable urban areas.

In the present work, we have brought these two disciplines together by developing a multi-scale model. On the one hand, a meteorological model, the Canopy Interface Model (CIM), was developed to obtain high-resolution vertical profiles of wind speed, direction and air temperature. On the other hand, an energy modelling tool, CitySim, is used to evaluate the energy use of the buildings as well as the irradiation reaching them.
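To give an idea of what such a vertical profile looks like, here is a minimal sketch of the classic neutral-stability logarithmic wind profile, a standard starting point for near-surface wind in meteorological models. This is an illustration only, not CIM's actual parameterization; the friction velocity and roughness length values are assumptions chosen for a rough, urban-like surface.

```python
import math

# Neutral-stability logarithmic wind profile (illustrative sketch).
# u_star and z0 below are assumed values, not taken from the study.
KAPPA = 0.4  # von Karman constant

def log_wind(z, u_star=0.3, z0=1.0):
    """Mean wind speed (m/s) at height z (m), given friction velocity
    u_star (m/s) and aerodynamic roughness length z0 (m)."""
    return (u_star / KAPPA) * math.log(z / z0)

# Wind speed grows only logarithmically with height near the surface,
# which is one reason small absolute errors loom large at 2 m.
for z in (2.0, 12.0, 30.0):
    print(f"{z:5.1f} m: {log_wind(z):.2f} m/s")
```

Because the profile flattens near the ground, the absolute wind speeds at 2 m are small, so even modest model errors there translate into large relative errors.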

We then applied this coupling methodology to the EPFL campus, in Switzerland, and compared the modelling results with data collected on the campus for the year 2015. The results show that the coupled computation of the meteorological variables is in very good agreement with the measurements. However, we noted that the wind speed at 2 m is still somewhat underestimated. One reason for this is that the wind speed close to the ground is very low and shows higher variability at this height.

Comparison of the wind speed (left) and air temperature (right) at 2m (top) and 12m (bottom).

We intend to improve this in the future by developing new parameterizations for the wind speed in an urban context, using data currently being acquired in the framework of the MoTUS project. One surprising result from this part of the study is the appearance, inside an urban setup, of a phenomenon called cold-air pools, which is very typical of valleys. The reason for this is the lack of irradiation reaching the surface in dense urban areas.

Furthermore, we have seen some interesting behaviour on the campus for particular buildings such as the Rolex Learning Center: buildings with different forms and configurations reacted very differently to the local and standard datasets. We therefore designed a series of additional simulations using multiple building configurations and conducted a sensitivity analysis to determine which of the two parameters, wind speed or air temperature, had the more significant impact on the heating demand (see Figure 1). We showed that the impact of a reduction of 1°C was more important than that of a reduction of 1 m s-1.
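As a rough illustration of this kind of sensitivity test, a toy steady-state heat-loss model can already reproduce the qualitative result. All parameter values below are hypothetical and not taken from the study, and the model is far simpler than CitySim's.

```python
# Toy sensitivity of heating demand to air temperature vs. wind speed,
# using a simple steady-state heat-loss balance. UA, C_INF and T_SET
# are hypothetical values chosen for illustration only.
UA = 300.0    # envelope conductance, W/K
C_INF = 10.0  # extra infiltration loss per unit wind speed, W/(K m/s)
T_SET = 20.0  # indoor set-point temperature, deg C

def heating_power(t_out, wind):
    """Heating power (W) needed to hold the indoor set-point."""
    loss = (UA + C_INF * wind) * (T_SET - t_out)
    return max(loss, 0.0)

base = heating_power(5.0, 2.0)              # reference conditions
print(heating_power(4.0, 2.0) - base)       # -1 deg C outdoors: 320.0 W
print(heating_power(5.0, 3.5) - base)       # +1.5 m/s wind: 225.0 W
```

With these made-up numbers, a 1°C drop in outdoor temperature increases the demand more than a 1.5 m s-1 increase in wind speed does, consistent with the qualitative finding above; the real ranking of course depends on the building's envelope and airtightness.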

Figure 1. Heating demand of the five selected urban configurations (black dots), as function of the variation by +1°C (red dots) and -1°C (blue dots) of the air temperature, and by +1.5 m s-1 (violet dots) and -1.5 m s-1 (orange dots).

Finally, we also analysed the energy consumption of the whole EPFL campus. When using standard data, the difference between the simulated and measured demand was around 15%; with localized weather data, it decreased to 8%. We were thus able to reduce the error by roughly a factor of two. The use of local data can hence improve the estimation of building energy use, which will become increasingly important as buildings become more and more efficient.

Reference / datasets

The paper (Mauree et al., Multi-scale modelling to evaluate building energy consumption at the neighbourhood scale, PLOS One, 2017) can be accessed here and data needed to reproduce the experiment are also available on Zenodo.

Do you want to establish a career in the atmospheric sciences? Interview with the Presidents of the AMS and the EGU-AS Division.

Image: ESA (http://www.esa.int/spaceinimages/Images/2015/01/Ecosystem_Earth)

Establishing a career in the atmospheric sciences can be challenging. There are many paths to take and open questions. Fortunately, those paths and questions have been thoroughly explored by members of our community, and their experiences can provide guidance. In light of this, in September 2016 Ali Hoshyaripour [Early Career Scientists (ECS) representative of the European Geoscience Union’s Atmospheric Sciences division (EGU-AS)] and Monique Kuglitsch [Senior International Outreach/Communications Specialist at the American Meteorological Society (AMS)] collaborated on a virtual interview of the presidents of our organizations: Annica Ekman (AE) and Fred Carr (FC), respectively, with questions provided by early career scientists. You can find below a summary of the different questions and answers.

On education

Red Latinoamericana de estudiantes en Ciencias Atmosféricas y Meteorología (RedLAtM) opened the interview series by asking AE and FC, “did something important mark your life?”

AE: I can’t think of any specific event that changed my life, but spending some time abroad as a Post-Doc and visiting scientist has been very important to me. Both from a professional point of view, to learn from a new environment, but also on a personal level.

FC: I didn’t have a major life-changing moment but several very important ones. In 1969, after I received my B.S. degree, 100% of college graduates were drafted into the military (this was at the height of the Vietnam War). However, because I had a slight hearing deficit, I was ineligible for military service, and I was able to begin graduate school. Had I served in the military, my career path would have been much different. Later, after receiving my PhD and while working as a post-doc for Dr. Lance Bosart at SUNY-Albany, I began applying for faculty positions. I eventually had to decide between an offer from the University of Oklahoma (OU) and waiting for another university to decide among several candidates. I chose the “bird-in-the-hand” option, and joined the School of Meteorology at OU, which at that time had only 6 full-time faculty and was housed in the oldest building on campus. Now we have over 20 faculty and are housed in the magnificent National Weather Center, so that was a fortunate decision. And, of course, getting married to my wonderful wife Meg in 1972, and the birth of my son Brett in 1985 were very positive milestones in my life.

RedLAtM followed-up by asking AE and FC, “why did you both decide to pursue a career in atmospheric sciences?”

AE: My path into science in general, and atmospheric science in particular, was not straight-forward. I’ve always liked math and started my undergraduate education in math and physics. I soon found myself a bit “lost in theory” and wanted concrete problems where I could apply the knowledge. That was how I was drawn towards a Master’s program in meteorology. I was sure my future career would be as a weather forecaster until I started working on my degree project. I really enjoyed this first experience working as a scientist; learning about the problem, developing a tool to study the problem (in my case a numerical model), analyzing the results and then summarizing and presenting the results. So after my degree project I applied for a PhD student position. Then one thing just followed after another… I really enjoy working as scientist, I like the challenges it brings and also that I constantly learn new things from students and colleagues.

FC: I developed a strong interest in atmospheric science because of my love of skiing (which I started at age 5) and subsequent love of snowstorms. I grew up on the Massachusetts coast just northeast of Boston (Beverly) which meant that we were near the rain–snow line of almost every winter storm that passed over the northeast U.S. Thus I followed the weather forecasts and snow observations extremely closely, and decided I wanted to be a meteorologist. My interest in atmospheric science has increased ever since and today, at age 69, I am still skiing and still vicariously following closely all U.S. snowstorms and snow amount reports!

RedLAtM also wanted to know from AE and FC, “how should a young person guide his/her path as student and scientist in order to reach those institutions like the ones you lead?”

AE: A solid PhD education followed by a Postdoctoral position in a good lab is of course important. But I also think it’s essential to be in an environment where you personally and professionally feel appreciated and get good support, otherwise it’s easy to lose the enthusiasm when things turn difficult—which they will at some point. After a Post-Doc, other characteristics than your scientific skills will also become more and more important; project management skills, time management skills, leadership skills, administration skills, etc. If there are opportunities to learn these skills on the way, even at a small scale, it’s a good idea to take them.

FC: My advice will be targeted toward becoming involved in the American Meteorological Society (AMS). One can begin as an undergraduate student by attending scientific meetings, especially the AMS Annual Meeting that has so many activities (Career Fair, Student Conference, Exhibit Hall, etc.) from which they can benefit. As a graduate student, one can begin giving oral and poster presentations at the many conferences/symposia the AMS sponsors every year. The AMS also has meetings that are attractive to the private sector as well (Broadcast; Weather and Forecasting; Washington Forum, etc.), so you can remain involved in the AMS no matter what your career path is. The AMS has over 100 Boards and Committees addressing a wide spectrum of issues, many of which are listed here: https://www.ametsoc.org/ams/index.cfm/about-ams/ams-commissions-boards-and-committees/complete-list-of-commissions-boards-and-committees/ and nearly all of them desire to have 1–2 student members. For early career scientists seeking more involvement in the AMS, I recommend joining one of the 30 Scientific and Technological Activities Commission (STAC) committees in the discipline that matches your interest. Also note the “Student Opportunities” link on the STAC website (https://www.ametsoc.org/stac/). The Commissions on Professional Affairs and on the Weather, Water and Climate Enterprise have many volunteer opportunities for those in the private and public sectors (see first URL above). As your career proceeds, you can become more involved in the leadership of the various commissions, boards and committees, and eventually to major leadership positions in the Society.

RedLAtM posed their next question to FC, “which universities are the best for doing postgraduate studies in Tropical/Subtropical Dynamics applied to numerical weather prediction for tropical cyclone forecasting? I’m from Mexico and we have systems coming from two basins: the Atlantic and the Northeast Pacific.”

FC: By “postgraduate studies”, I will assume that you mean research opportunities as a recent PhD recipient. First, your PhD advisor may know of post-doctoral opportunities in his/her own research group or in tropical research groups at other institutions such as the Universities of Miami or Hawaii. In the U.S., almost every doctoral program in atmospheric sciences has 1–2 experts in tropical meteorology, so one could look over these programs for research topics in your areas of interest (the following web site provides a list of these doctoral institutions: http://ametsoc.org/amsucar_curricula/index.cfm). The National Research Council has postdoctoral research opportunities at several organizations such as NOAA, Naval Research Lab, etc. that perform tropical research; the list of these organizations is at http://nrc58.nas.edu/RAPLab10/Opportunity/Programs.aspx. Finally, I will mention NCAR’s Advanced Study Program Postdoctoral Program (at http://www.asp.ucar.edu/pdfp/pd_announcement.php) in which accepted candidates can work with any NCAR scientist they wish, a few of which do study tropical cyclones.

On career

Under the topic of career, Anonymous asked AE and FC “have you experienced ‘impostor syndrome’ and do you have advice for early career researchers who have it?”

AE: I have often felt that I’m not “good enough” and that everybody else is so much smarter and better than I. Personally, what I tended to do was to put all the good characteristics of a number of other people into one ideal person, and then I compared myself with that ideal person, which of course never was a very favorable comparison… So I’ve stopped doing that! From a general perspective, I think a good mentor can be very helpful.

FC: I believe I may have experienced a mild form of this syndrome when I first became a professor and I didn’t see myself at the same level as the distinguished professors at Florida State University where I was a student. However, I was getting papers published and receiving research grants, so I must have been doing something right. Eventually I learned that all scientists, being human, have their strengths and weaknesses, and that one can become a colleague of your distinguished peers by recognizing your own strengths and making contributions in these areas. My advice for those who may have impostor syndrome is to talk with people who you admire (one should always try to find mentors wherever you work), as they may have also experienced similar feelings. Also, develop a strong support system among your friends and colleagues, and look at your resume every once in a while to see all the wonderful things you have accomplished!

RedLAtM wanted to know from AE, “do you consider you had to sacrifice more than your male colleagues in order to achieve what have you done as a scientist in atmospheric sciences?”

AE: I don’t feel that I have made specific sacrifices to be a scientist. It’s been challenging sometimes, like many others I do have problems balancing work and private life. But I don’t think it’s a problem that’s unique for academia. It’s unfortunate that the role model of a successful scientist tends to be a person that works day and night, I wish we could change this image. Personally, I would like to see more research groups that do not rely upon, or are focused around, one single person.

RedLAtM wanted to know about FC’s experiences at the National Centers for Environmental Prediction (NCEP) and his experiences with models, asking “what has been your experience at [NCEP]? I mean, what are the challenges that models have nowadays?” and “what are the biggest challenges in modeling?”

FC: Early in my career, I realized that my modeling research efforts would be more worthwhile if the models I worked on were the U.S. operational models used by NCEP. I became one of the few university scientists who spent a sabbatical at NCEP (as well as many visits later on) and was fortunate to be able to make some major improvements to the precipitation forecasts in the NAM and GFS models. It was a wonderful experience working with the NCEP scientists and I encourage all NWP experts to spend time there at some point in their careers. The NWP models today still have room for improvement, which is a good thing since it means that forecasts are still going to get better in the future as we address the current challenges. Some of the major challenges include improving data assimilation (i.e., the way we use data from new observing systems such as dual-pol radar, current and new satellite sensors, pressure data from cell phones, etc.), improving the physics in the models, and how best to design an ensemble of convection-resolving models.

Some of the biggest challenges [in modeling] include: (1) Predictability—at both climate and convective time and space scales. That is, we need to know what the theoretical predictability of the climate system is to know how much room for improvement there is in our climate projections. We need to understand convective predictability to know how long a high-resolution forecast (e.g., 1 km) will successfully evolve convective phenomena. (2) Observations: We now have even operational models forecasting at resolutions much finer than the observational network that provides their initial conditions. We need measurements that, in general, provide higher time and space resolution than we have now, and also those that eliminate the gaps we have such as lower-tropospheric thermodynamic profiling. (3) Data Assimilation: This was mentioned above, and despite the sophistication of today’s assimilation methods, there are still many issues that need research, such as assimilation for convection-resolving models. (4) Physics: We still need to improve our representation of microphysical processes (which are rarely verified), boundary layer turbulence, and surface processes under different wind, stability and vegetation regimes, to name a few. (5) Coupled modeling: Climate models and also medium-range forecasting models need to be coupled with, e.g., ocean models, sea ice models, and land-surface and hydrologic models. So, lots of research opportunities for everyone!

Nadine Borduas asked AE and FC, “is it better to focus on one aspect of [atmospheric sciences] or do a little of all three?”

AE: Difficult question, but I think we need both types in science. We need people that dig into the nitty-gritty details but also the ones that perhaps have a bit more superficial knowledge but are able to connect different subfields. In the beginning of a career, I would say that a relatively narrow focus is better so that you really become a specialist in a topic, method etc. But thereafter, I think it’s good to branch out more and more. Many of the interesting new discoveries today occur in the intersections between different disciplines or sub-disciplines.

FC: If “all three” means physical, dynamic and synoptic (observational) meteorology, then I would recommend that all atmospheric scientists have some knowledge of all three. Dynamicists and modelers still need to know what the observed structure of the atmosphere is, while observationalists (and modelers) need to know how to physically interpret observed (or modelled) behavior of the atmosphere. However, in order to do original research, one must focus on just 1–2 sub-specialty topics within these broad areas in order to achieve the depth one needs to advance the science.

Nadine Borduas also sought advice on developing a research group from AE and FC, asking “when establishing a research group in [atmospheric sciences], how much field/lab/model work do you incorporate in your proposals?”

AE: I assume you mean how much of each one of these components I would incorporate in a proposal? I’m a modeler myself, but collaborate strongly with people doing lab or field work. In most proposals, I therefore either have an experimentalist as a co-applicant and/or refer to specific people that can provide the necessary complementary data and expertise.

FC: If one were to form a new atmospheric research group, it probably would have to concentrate primarily on one of two areas: climate and/or mesoscale modeling, or research using new observational tools. It would be difficult to have major expertise in both areas unless you had a very large group (such as at a national laboratory). A few research groups might be formed to do theoretical studies or pure experimental labs (e.g., wind tunnels, fluids laboratory) but these are not as well-funded these days. For a modeling research group, no field or lab work would be needed in the proposals. If you are designing/testing new radars, UAS or thermodynamic profiling systems, considerable field work is required, as well as some laboratory work to refine the instruments. If you were concentrating on measurements from space, then you have no local field or lab work to do (except perhaps some ground-truth validation studies), and your efforts would be concentrated on data processing, data analysis, and product development. Thus the answer depends on the primary purpose of the research group.

RedLAtM wanted to know from FC, “what inspired the foundation of the COMET program?”

FC: The National Weather Service, as it was implementing the “modernization” effort in radars, satellites and workstations in the 1990’s, realized that most forecasters at the time were not well-trained in interpretation of the observations from these systems, nor in convective and mesoscale observations and dynamics, nor in new data analysis software. They wanted a training program much more rigorous than typically given by NWS training courses and thus asked UCAR for assistance. Since each of the NWS Forecast Offices (FO) had a new position known as a Science Operations Officer (SOO), the idea was to train the SOOs at COMET with graduate-level course material, and the SOOs would then be the training focal point at each of the 120 NWS FOs. The instructors for each SOO course would be both university professors and veteran NWS forecasters. I was one of the first COMET instructors of the SOO course, and it was one of the most intense and rewarding teaching experiences I have ever had.

On the role as president of EGU-AS (AE) or AMS (FC)

In regards to her role as President of EGU AS, RedLAtM asked AE, “from your point of view, what is the biggest challenge for women in atmospheric sciences? What do we have to face in order to lead organizations like EGU AS? Do we have the same opportunities as men have?”

AE: I think that on paper, women and men have the same opportunities to pursue a career in atmospheric sciences. Still, it’s a fact that more women than men leave science after their PhD or Post-Doc, and you don’t see them as often in leadership positions as men. This problem needs to be fixed, but there is no single and easy solution. I think supportive networks and role models are important; we need to see as big a diversity in this respect among women as among men.

RedLAtM also asked AE “what are the youth programs you have supported since you’ve been elected president of EGU AS?”

AE: During the last year we were looking for a new ECS representative for the AS Division and eventually Ali Hoshyaripour was elected for the position. But we had such a large number of really good candidates, so together with Ali, we decided to form an AS Division ECS group that together could come up with activities and help spread information relevant for the AS ECS community. I’m really happy about this initiative, and think it will be a great resource for the AS Division as a whole. In addition, I have of course participated in activities organized by the EGU for early career scientists to meet Division Presidents and other more senior people.

From FC, RedLAtM wanted to know “how we can set up an AMS student chapter in Latin-America?”

FC: I will keep this answer short. The following website explains exactly what is needed to start a Local or Student Chapter of the AMS: https://www.ametsoc.org/ams/index.cfm/about-ams/ams-local-chapters/how-to-start-or-reactivate-a-local-chapter/. I hope you will do so!

Oghenechovwen Christopher Oghenekevwe asked FC about Memoranda of Understanding. Specifically, “does an MoU exist between AMS and any Pan-African institution that allows undergraduate students in Meteorology and Climate Science to further their careers with access to mentorship?”

FC: I don’t believe any MOU currently exists between the AMS and any African countries or organizations. The AMS does have agreements with the meteorological societies in Canada, Australia, India and China, and we will be glad to consider others. Since you mentioned students, you might investigate possible collaborations with the AMS Education Program (https://www.ametsoc.org/ams/index.cfm/education-careers/) and see if some of the teacher training programs could be adapted to student mentoring programs. The head of the Education Program is Wendy Abshire (abshire@ucar.edu) and I recommend that you contact her.

Finally, Chrysanctus Onyeanusi asked FC, “does this society have any scholarship scheme for young meteorology students like me?”

FC: The AMS has Freshman and Minority Scholarships for college freshman and sophomores, and Named Scholarships for seniors. Information about them is at https://www.ametsoc.org/ams/index.cfm/information-for/students/ams-scholarships-and-fellowships/. However, I note that one must be a U.S. citizen or have permanent resident status to be eligible, so if you are an international student, we do not yet have a program for such students. However, it seems to me that we should have one, so I will pass this suggestion to our Centennial Committee, which is thinking about new activities the AMS can engage in as we celebrate our 100th Anniversary in 2019.


Annica Ekman is the President of the Atmospheric Sciences Division (AS) of the European Geosciences Union (EGU) and a Professor of Atmospheric Sciences at Stockholm University, Sweden.

She obtained her PhD in Atmospheric Sciences from Stockholm University in 2001, and was a PostDoc at the Massachusetts Institute of Technology (MIT) in Cambridge, Massachusetts. Afterward, she returned to Stockholm University as a research scientist, becoming an Associate Professor in Atmospheric Sciences in 2009 and a Full Professor in 2015. Ekman’s research interests include cloud–aerosol interactions and the various ways that aerosol particles influence weather, atmospheric circulation, and the climate system. She is also the editor of Tellus B and co-leads the research areas of aerosols, clouds, turbulence and climate at the Bolin Centre for Climate Research, Stockholm.

Since her appointment as the President of EGU AS in 2015, she has strongly supported new ideas to stimulate inter- and cross-disciplinary collaborations.

 


American Meteorological Society (AMS) President and Professor of Meteorology Fred Carr hails from Beverly, Massachusetts. Carr studied meteorology at Florida State University in Tallahassee, Florida, and was a PostDoc at SUNY-Albany in Albany, New York.

Since 1979, he has been a faculty member at the University of Oklahoma in Norman, Oklahoma. From 1996 to 2010, he was the Director of the School of Meteorology, playing a key role in the expansion of the School and the creation of international exchange programs. Alongside his activities at the University of Oklahoma, he helped found the COMET Program at the University Corporation for Atmospheric Research (UCAR) and has been involved in many professional committees. He recently completed two terms on the UCAR Board of Trustees and is co-Chair of the UCAR Community Modeling Advisory Committee for the National Centers for Environmental Prediction (NCEP). Carr’s research topics include synoptic-, tropical, and mesoscale meteorology; numerical weather prediction; data assimilation; and observational systems.

Throughout his career, Carr has contributed to the AMS in multiple ways: as a longtime member and Fellow of the AMS; member of the AMS Council; Chair of the AMS Board on Higher Education; editor of Monthly Weather Review; associate editor of Weather and Forecasting; and member of the Editorial Board of the Bulletin of the American Meteorological Society (BAMS).

When cooling causes heating



Following the Montreal Protocol in the late 1980s, CFCs (chlorofluorocarbons) were replaced by hydrofluorocarbons (HFCs) as refrigerants. Unfortunately, HFCs have a global warming potential (GWP) far greater than that of the well-known greenhouse gas (GHG) carbon dioxide. This was not known until the mid-1990s, and, in any case, climate change due to GHGs was not considered an emergency back then.

Rapid and urgent action had to be undertaken to decrease the risk of temperature rise due to GHGs. The COP21 agreement reached in Paris last year, and its ratification by a majority of countries (and GHG emitters), has now been followed by a landmark deal to phase out HFCs, concluded in mid-October in Kigali, where 197 states met for the 28th Meeting of the Parties to the Montreal Protocol.

But why do GHG emissions matter, anyway? GHGs warm the Earth by absorbing energy and reducing the amount that escapes to space. This is more commonly known as the greenhouse effect. Thanks to the naturally present GHGs, the Earth’s air temperature is on average around 15°C. However, the burning of fossil fuels, as well as the emission of other GHGs from human activities, is amplifying this effect and causing a significant increase in temperature (see Figure 1).
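The greenhouse effect can be sketched with the classic zero-dimensional energy-balance model, in which a single grey atmospheric layer partially traps outgoing longwave radiation. The solar constant, albedo and emissivity values below are standard textbook numbers used for illustration, not a climate projection.

```python
# Zero-dimensional energy-balance model of the greenhouse effect.
# Illustrative textbook values: solar constant S, planetary albedo,
# and a one-layer grey atmosphere with longwave emissivity EPS.
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m-2 K-4
S = 1361.0       # solar constant, W m-2
ALBEDO = 0.3     # planetary albedo
EPS = 0.78       # effective longwave emissivity of the atmosphere

def surface_temperature(eps=EPS):
    """Surface temperature (K): balance the mean absorbed solar flux
    against outgoing longwave radiation through a one-layer atmosphere."""
    absorbed = S * (1 - ALBEDO) / 4         # mean absorbed solar flux
    t_eff = (absorbed / SIGMA) ** 0.25      # effective temperature, ~255 K
    return t_eff * (2 / (2 - eps)) ** 0.25  # greenhouse-warmed surface

print(f"No greenhouse:   {surface_temperature(0) - 273.15:.1f} C")  # ~ -19 C
print(f"With greenhouse: {surface_temperature() - 273.15:.1f} C")   # ~ +15 C
```

Without the atmospheric layer the balance gives roughly -19°C; with it, about +15°C, matching the observed mean. Raising the emissivity, which is what adding GHGs effectively does, pushes the surface temperature higher still.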

Figure 1: Left: Global CO2 emissions (US DoE) and  Right: Global land and ocean temperature anomalies (NOAA)


Going back to the HFCs, scientists have argued that at the current rate of installation of air conditioning systems, some 1.6 billion units will be installed by 2050. With their booming economies, developing countries like China and India have in recent years experienced a dramatic increase in such installations.

Under the deal struck last week, developed countries will gradually decrease their HFC emissions from 2019, while developing countries will start to decrease theirs by 2024 (2028 for others). It has been argued that these emissions would have been responsible for a 0.5°C rise in air temperature by the end of 2100 (Xu et al., 2013), a significant share of the 1.5°C target set at COP21.

A conversation with Didier Hauglustaine from the LSCE, France, however, highlighted the fact that there is no ideal replacement. One of the most promising candidates is the HFO (hydrofluoroolefin). It looks like a variety of solutions will have to be developed depending on the application. Giving companies and states time to adapt and to develop new technologies that could replace HFCs is hence a necessity.

All of this finally raises the question of the impact of humans and new technologies on the atmosphere. It seems that we have created yet another problem while trying to heal the ozone hole by replacing CFCs with HFCs. More importantly, the increase in air temperature, rapid urbanization, and the higher probability of heat waves in the future call for a better understanding of the urban environment. Human comfort (indoor and outdoor) in these areas should be assessed carefully at the design stage in order to develop new urban paradigms that could limit the use of air-conditioning units.

Why should we care about a building’s energy consumption?


From the 9th to the 11th of September, the Solar Energy and Building Physics Laboratory is hosting the CISBAT conference. This international meeting is seen as a leading platform for interdisciplinary dialogue on sustainability in the built environment. More than 250 scientists and industry representatives will be at EPFL in Lausanne to discuss topics ranging from solar nanotechnologies to the metabolism of urban districts.

But how is this related to atmospheric science, really? As you might know, the United Nations Climate Change Conference (COP21) will be held in Paris this year, with the focus on trying to limit the global temperature increase to 2°C. Switzerland has already submitted its proposal regarding its future emissions and aims to reduce them by 50% by 2030. Many other countries have done the same, but NGOs are saying that the most important greenhouse gas (GHG) emitters are not going far enough in their proposals to limit the temperature increase.

So in which sectors can we decrease our energy consumption, improve efficiency and cut our GHG emissions? If we look at France, for example, the building sector consumes about 44% of final energy use, and of that 44%, around 70% goes to the thermal comfort of the occupants alone.

Energy use for each sector (top) and inside buildings (bottom), adapted from the French Environment and Energy Management Agency (Image credit: D. Mauree, 2014)
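Combining the two French percentages quoted above gives a quick back-of-the-envelope sense of the stakes (this is simple arithmetic on the figures in the text, not an additional data source):

```python
# Back-of-the-envelope estimate from the French figures quoted above.
building_share = 0.44         # buildings' share of final energy use
thermal_comfort_share = 0.70  # share of building energy used for thermal comfort

national_share = building_share * thermal_comfort_share
print(f"Thermal comfort in buildings: {national_share:.0%} of final energy use")
```

Roughly 31% of the country's final energy use, in other words, goes just to keeping building occupants comfortable, which is why the sector is such a prime target for efficiency gains.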

This conference is also an official presentation platform for the Swiss Competence Center for Energy Research “Future Energy Efficient Buildings & Districts” (FEEB&D). This project aims to reduce the end-energy demand of the Swiss building stock by a factor of five over the next decades thanks to efficient, intelligent and interlinked buildings.

From the figures above, we can see that there is huge potential to decrease our energy consumption. First, we can improve the insulation of buildings to enhance their efficiency. Several countries have implemented financial incentives to encourage renovation, and have also introduced tighter thermal regulations to decrease energy use in new and refurbished buildings.

At this conference, we will also be talking about the integration of renewable energies (RE) (solar PV, thermal, algae, wind, …). The idea, of course, is to increase the penetration of RE in urban areas so as to decrease our dependency on fossil fuels and hence reduce our GHG emissions. To reach this objective, it is necessary to optimize their installation and to see how much autonomy can be achieved by combining RE with storage solutions.

Among the other subjects to be presented at this meeting are model predictive control, daylighting and electric lighting. We will also address issues related to urban simulation (you can have a look at a former blog post on this subject), ecology and metabolism. Have a look at the CISBAT website and follow us on Twitter. I will also try to live-tweet the conference with #cisbat15.

Urban Climate


The 9th International Conference on Urban Climate and the 12th Urban Environment Symposium are taking place this week in the “Pink City”, Toulouse. With the 21st Conference of the Parties (COP21) to be held in December in Paris, the obvious focus topic for the urban climate conference is mitigation of and adaptation to climate change in the urban environment.

But first of all, why should we even care about the urban climate and environment? One of the most important phenomena related to the urban climate, first described by Luke Howard (1833), is the urban heat island. This effect is caused by the accumulation of heat in the various construction materials (asphalt, tiles, bricks, …) used in urban areas. At night, urban areas hence cool less than a natural environment, and this leads to higher temperatures than in the surrounding rural areas (see Figure). The difference in temperature can be up to 8 degrees for some cities during particular periods of the year. Besides, the presence of buildings also modifies the wind patterns in cities.

One “simple” example of the significant impact of these effects is the change in heating/cooling energy use in cities. Building energy demand is directly correlated with the outside air temperature, but also with the wind speed. Thus, as the temperature is higher in urban areas, there is a greater cooling demand in temperate or arid climates. Even in moderate climates, during long heat waves (and these are expected to become more frequent with climate change!), the cooling energy demand can be expected to increase.
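The link between outside air temperature and cooling demand is often quantified with cooling degree days (a standard proxy, not a method from this post). A minimal sketch, with purely illustrative temperatures and an assumed 2°C urban heat island:

```python
# Cooling degree days (CDD): a standard proxy linking outside air
# temperature to cooling demand. All temperatures here are illustrative.
BASE_TEMP = 18.0  # degrees C, a commonly used base temperature

def cooling_degree_days(daily_mean_temps):
    """Sum of daily exceedances of the base temperature."""
    return sum(max(t - BASE_TEMP, 0.0) for t in daily_mean_temps)

# Hypothetical week of daily means, with the urban site assumed 2 C warmer:
rural = [22, 24, 25, 23, 21, 20, 22]
urban = [t + 2 for t in rural]

print(cooling_degree_days(rural))  # 31.0
print(cooling_degree_days(urban))  # 45.0
```

Even a modest heat-island increment raises the weekly CDD total by almost half in this toy example, which is why the urban heat island matters for cooling energy demand.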

Since 2010, over 50% of the world population has lived in urban areas, and this figure is expected to rise to 75% by 2050 (UN, 2012)1. As more people live in cities, cities need to grow to accommodate the additional population. Urban expansion and densification, as well as the decrease in agricultural land, are crucial development issues that need to be addressed.

This means that we have to understand the various processes influencing the meteorological parameters in these areas. Models and simulation tools are ways to understand these complex problems, and they can be very useful for decision makers as they offer an easier way to analyse different planning scenarios. Many scientists are also working on monitoring and measuring various meteorological parameters with traditional equipment, but also with newer methods using mobile phones and other sensors.

The combined effects mentioned above can raise a number of questions:

  1. With climate change in mind, how do we build more buildings that consume less energy to accommodate the increasing population?
  2. We have seen, for example in the summer of 2003, that a long heat wave can significantly increase the number of deaths among the vulnerable urban population (elderly and young people, as well as people with respiratory problems). How do we then make sure that the thermal comfort of the inhabitants of cities is satisfied?
  3. Finally, how do we make sure that we build, design and plan more sustainable cities, to decrease the impact of air pollution and to integrate more green and vegetated spaces?

All of these questions are very difficult to answer, as they combine scientific research questions with policy and planning decisions. Scientists and planners should hence work together to build more sustainable cities and to provide meaningful implementations of the different research solutions.

1UN. World Urbanization Prospects: The 2011 Revision, CD-ROM Edition. Technical report, Department of Economic and Social Affairs, Population Division, 2012.

An unlikely choice between a gasoline or diesel car…


I have recently been confronted with the choice of buying a “new” car, and this has proved to be a very tedious task given the diversity of cars on the market today. One of my primary concerns was, of course, to find the least polluting car for my usage (roughly 15,000 km/year).

Motor vehicle pollution is one of the major sources of air pollution (particulate matter, soot, NOx, …) in urban areas. In both winter and summer, it often causes long, prolonged exposure to ozone or PM, which can have significant effects on the health of the urban population. Besides, vehicles are also one of the most important sources of greenhouse gas emissions (around 30%). Extensive research in various areas (air pollution monitoring in urban areas, efficiency of motor vehicles, mobility and public transportation, urban planning, …) is thus being conducted to help reduce exposure to dangerous pollutants and emissions of GHGs.

Manufacturers have been increasingly constrained by new regulations to decrease pollutant emissions (with the EURO 6 norm now in force in the EU) and to increase the efficiency of motor vehicles. Governments around the world, and more particularly in Europe after the financial crisis of 2007/2008, have introduced subsidies to encourage people to buy new, more energy-efficient vehicles. One of the main issues here is that the more efficient vehicles are often not necessarily the less polluting ones. Policies have been based on GHG emissions from vehicle fuel consumption, without consideration of the full life cycle analysis or of other pollutant emissions.

Take, for example, an electric car: the GHG emissions (and also other pollutants) are pretty low or close to zero, as none are released by the car itself. But we also need to evaluate the emissions from the electricity power plant (most likely a centralized one based on either fossil fuel or nuclear energy). Furthermore, if the life cycle cost of the battery in such cars is taken into consideration, the picture is not so black and white anymore, as has been pointed out by numerous studies (ADEME – sorry for the French link!). Besides, electric vehicles remain quite expensive and are not really suited to every usage.

If we compare diesel and gasoline cars, it becomes a bit more tedious. Diesel engines generally consume less than gasoline ones. However, their PM emissions, for example, can be quite high, and they need really efficient filters to get rid of these pollutants. More stringent regulations have forced manufacturers to significantly improve the quality of the air coming out of their diesel engines, but on average these still remain more polluting than gasoline cars. Countries like France that strongly subsidized the use of diesel in the past are now finding it quite difficult to phase out these types of cars. And besides, they are more efficient and hence emit less GHG.
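The efficiency-versus-fuel trade-off can be made concrete with a rough CO2-per-km comparison. The consumption figures below are assumptions for two hypothetical comparable cars; the emission factors are the commonly used values of roughly 2.31 kg CO2 per litre of gasoline and 2.64 kg CO2 per litre of diesel:

```python
# Illustrative tailpipe CO2-per-km comparison. Consumption figures are
# assumptions; emission factors are commonly used approximate values.
EMISSION_FACTOR = {"gasoline": 2.31, "diesel": 2.64}  # kg CO2 per litre

def co2_g_per_km(fuel, litres_per_100km):
    """Tailpipe CO2 in grams per kilometre for a given fuel consumption."""
    return EMISSION_FACTOR[fuel] * litres_per_100km * 1000 / 100

# Hypothetical comparable cars: the diesel consumes less per 100 km.
print(round(co2_g_per_km("gasoline", 6.5)))  # 150 g/km
print(round(co2_g_per_km("diesel", 5.0)))    # 132 g/km
```

Even though diesel emits more CO2 per litre burned, its lower consumption per kilometre leaves it ahead on GHG, which is exactly why the choice is not straightforward once local pollutants enter the picture.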

Coming back to my choice of car, then… In the end, the choice was between long-term and short-term benefits. Using a gasoline car, or an electric car (in a country where the electricity comes from renewables!), would be more ecologically sound if you drive mostly in urban areas. However, if you are thinking about the long-term benefits (with climate change), then you should probably opt for a more efficient diesel car.

All of this points out that research still needs to be conducted and that new, innovative ideas are really needed (like Elon Musk's battery, maybe?) to bridge the enormous gap between having an efficient car, its life cycle analysis, and living in a pollution-free urban environment. But of course… the best solution is to use public transport or a bike. Well, that is not always possible!