What we probably missed the most last year was the possibility of networking with colleagues and friends, and this is especially true for Early Career Scientists (ECS). They would particularly benefit not only from scientific sessions and from presenting their own work at conferences and schools, but also from opportunities to network with other ECS and with established, senior researchers.

For this reason, on **Thursday 22 April 18:00 CEST** we are organizing our NP ECS event (currently scheduled as NET35). It will be an online networking event aimed at meeting new people and exchanging information about, for example, career paths, challenges faced in our field, or tips for working in times of restricted travel options. It is designed to bring together both ECS and senior scientists, so as to be as inclusive and fruitful as possible. Moreover, NP will also host its division-wide networking event on **Wednesday 28 April 17:30 CEST** (currently scheduled as NET6: NP event for all division members).

Both events will be hosted on Gather.town, a platform that lets us move freely around a virtual space and interact with people. Participants will find a virtual environment consisting of a lobby, a main hall, six discussion rooms, an auditorium, a lounge, and a garden. Take a look at the pictures below to get an idea of what you can expect!

We really hope to see all of you enjoying these (unfortunately virtual) networking events together!

**Outside the virtual space**

**Inside the virtual space**

**A private space for discussions**

**The lounge**

Understanding the transport of tracers and particulates is an important topic in oceanography and in fluid dynamics in general. The trajectory of an individual fluid parcel will in many cases strongly depend on its initial condition, i.e. the flow is chaotic. At the same time, on a more macroscopic level, many flows possess some form of structure that is less sensitive to the initial conditions of the individual parcels. This structure is determined by the collective behaviour of groups of parcels for intermediate or long times.

An example of such macroscopic structure in geophysical flows is the eddy. In the ocean, mesoscale eddies (on the order of 10–100 km) are well known for capturing water masses while being transported by a background flow. For describing the pathway of a fluid parcel captured in an eddy, what really matters is the motion of the entire eddy in the background flow, and not so much where exactly that parcel sits within the eddy. We can simplify the problem by saying that all parcels in the eddy follow approximately the same pathway, i.e. the parcels stay approximately coherent over a certain time interval. Such sets of fluid parcels (or fluid volumes) have therefore been termed “finite-time coherent sets” or “Lagrangian coherent structures” in the fluid dynamics community.

In our article, we explore a density-based clustering technique, the so-called OPTICS algorithm (Ordering Points To Identify the Clustering Structure), published by Ankerst et al. in 1999, for the detection of such finite-time coherent sets. The goal of density-based clustering is simple: find groups of points that are densely distributed, i.e. points that are all close to each other. We take modelled trajectories of fluid parcels and represent them as points in a high-dimensional Euclidean space. In this way, two points in that space that are very close in terms of their Euclidean distance correspond to parcels that stay close to each other along their entire trajectories. Once this is done, OPTICS does the rest. In the form we propose, the method does not need any sophisticated pre-processing of the trajectory data. What’s also nice about OPTICS is that it is available in the scikit-learn library of Python, so it is quite straightforward to use.
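To make the representation concrete, here is a minimal sketch (not the authors' actual code) of how modelled trajectories can be flattened into points of a high-dimensional Euclidean space and handed to scikit-learn's OPTICS; the synthetic "bundles" of coherent parcels and all parameter values below are our own illustrative choices:

```python
import numpy as np
from sklearn.cluster import OPTICS

rng = np.random.default_rng(0)
n_steps = 50
t = np.linspace(0.0, 1.0, n_steps)

def bundle(x0, y0, n, spread=0.05):
    """n synthetic trajectories drifting together from near (x0, y0)."""
    trajs = []
    for _ in range(n):
        dx, dy = rng.normal(0.0, spread, 2)
        trajs.append(np.column_stack([x0 + t + dx, y0 + 0.5 * t + dy]))
    return trajs

# Two coherent bundles of 20 parcels each, plus 5 scattered parcels
trajs = bundle(0.0, 0.0, 20) + bundle(3.0, 3.0, 20)
trajs += [np.column_stack([rng.uniform(-5, 5) + t, rng.uniform(-5, 5) + t])
          for _ in range(5)]

# Each trajectory becomes ONE point in a 2*n_steps-dimensional space:
# small Euclidean distance <=> parcels stay close along the whole path
X = np.array([tr.ravel() for tr in trajs])

labels = OPTICS(min_samples=5).fit_predict(X)  # label -1 marks noise
```

No pre-processing beyond the flattening is needed, which is the point the article makes: the coherent bundles come out as clusters and the scattered parcels are left unlabelled.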

OPTICS takes the data and creates a reachability plot. This is a quite condensed visualization of how similar the fluid trajectories are – condensed because it is a one-dimensional graph defined on the trajectories. OPTICS creates an ordered list of the trajectories such that densely populated regions are close to each other in this list. Finite-time coherent sets can then simply be identified by examining the “topography” of this plot, i.e. its troughs and crests. An example of a reachability plot for a model flow containing an atmospheric jet and vortices, the Bickley Jet model flow, can be seen in the first column of the figure above. One can obtain clustering results by thresholding the reachability value (the y-axis of that plot) at a specific value ε, and then identifying connected regions below the line as coherent sets. This method is also known as DBSCAN clustering, but what is special about OPTICS is that multiple DBSCAN clustering results (i.e. for different horizontal lines) can be obtained from one reachability plot.
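This multiple-cuts property can be sketched with scikit-learn's `cluster_optics_dbscan`, which re-thresholds a single reachability plot at different ε values without re-running OPTICS (the two-blob toy data and all values here are our own, not from the paper):

```python
import numpy as np
from sklearn.cluster import OPTICS, cluster_optics_dbscan

rng = np.random.default_rng(1)
# Two dense blobs plus a sparse background
X = np.vstack([
    rng.normal(0.0, 0.2, size=(50, 2)),
    rng.normal(5.0, 0.2, size=(50, 2)),
    rng.uniform(-3.0, 8.0, size=(10, 2)),
])

opt = OPTICS(min_samples=5).fit(X)

# The reachability plot: reachability values in the OPTICS ordering
reachability_plot = opt.reachability_[opt.ordering_]

# Two horizontal cuts of the SAME plot give two DBSCAN clusterings
labels_tight = cluster_optics_dbscan(
    reachability=opt.reachability_, core_distances=opt.core_distances_,
    ordering=opt.ordering_, eps=0.5)
labels_loose = cluster_optics_dbscan(
    reachability=opt.reachability_, core_distances=opt.core_distances_,
    ordering=opt.ordering_, eps=2.0)
```

Running plain DBSCAN at several ε values would require one full pass per value; here both clusterings are read off the same reachability information.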

Two things about OPTICS make it particularly well suited to such situations in fluid dynamics. First, it has an intrinsic notion of coherence hierarchies. We can see this by looking at the different rows of the figure, where the clustering results for different choices of ε are shown. For a large ε (first row in the figure), we really only see the very large-scale structure of the jet separating the northern and southern parts of the fluid. Decreasing ε is then similar to using a magnifying glass: if we look closer, we identify smaller individual eddies in the northern and southern parts of the flow. The second useful property of OPTICS is that not every point has to be part of a cluster. In fact, in the second and third rows of the figure, the grey points are identified as noise, i.e. they do not belong to any coherent set. This is different from many recent approaches that rely on graph partitioning algorithms for cluster detection. There, every point has to be part of a coherent set, which strongly limits the applicability to realistic geophysical flows. In our article, we also apply OPTICS to modelled trajectories in the Agulhas region, and find, as expected, Agulhas rings.

We show in our paper that a 20-year-old algorithm can be very successful at detecting finite-time coherent sets, even in a purely data-driven form, i.e. with very little additional heuristics or pre-processing of the data. It may well be that even better algorithms exist for research questions in fluid dynamics. The Lagrangian fluid dynamics community should therefore explore more of the existing methods and algorithms from data science, as these have the potential to greatly improve our understanding of fluid flows.

In geophysics, forecasting is based on solving the equations of physics with the help of a computer. To calculate a forecast we need an initial condition. Estimating this condition is difficult, however, because the available observations are generally few and heterogeneously distributed in space and time. A reference algorithm for this estimation problem is the Kalman filter. It alternates between a prediction stage, in which the state and its error covariance are propagated in time, and an analysis stage, in which the prediction and analysis error covariance matrices are updated.

While the formalism of the Kalman filter is based on simple formulas of linear algebra, its practical implementation faces two pitfalls:

First, in the systems of interest to us, these matrices are very large and it is impossible to compute their temporal propagation during forecasting. The ensemble Kalman filter, by approximating the covariance matrices with an ensemble estimate, offers a way to propagate the covariances through the forecast of each member of the ensemble.
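The ensemble idea can be sketched in a few lines (illustrative toy model of our own, far smaller than the systems discussed here): each member is forecast individually, and the forecast-error covariance matrix is then estimated as the sample covariance of the ensemble, so the large matrix itself is never propagated explicitly:

```python
import numpy as np

rng = np.random.default_rng(0)
n, N = 4, 100  # state dimension, ensemble size

def model(x):
    # Toy nonlinear forecast model (a stand-in for the real dynamics)
    return x + 0.1 * np.sin(x)

# Ensemble of perturbed initial conditions
ensemble = rng.normal(0.0, 1.0, size=(N, n))

# Forecast step: propagate each member individually
forecast = np.array([model(x) for x in ensemble])

# Ensemble estimate of the forecast-error covariance matrix:
# sample covariance of the members around the ensemble mean
mean = forecast.mean(axis=0)
anom = forecast - mean
P_f = anom.T @ anom / (N - 1)  # (n, n) covariance estimate
```

In realistic systems n is many orders of magnitude larger than N, which is exactly why the ensemble estimate, rather than explicit covariance propagation, is the practical option.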

Secondly, while part of the forecast error statistics can be explained by the propagation of initial uncertainties – the predictability error – another contribution, linked to deficiencies in the numerical model – the model error – is much more difficult to characterize. To go further in understanding model error, the working hypothesis that the predictability error and the model error are decorrelated is often introduced, which leads to decomposing the forecast error covariance matrix as the sum of the predictability-error and model-error covariance matrices. Even with this assumption it is very difficult to characterize the model error statistics, and moreover the predictability error is never truly decorrelated from the model error.
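In symbols (our own notation, not necessarily that of the paper): writing the forecast error as the sum of the predictability error $e^p$ and the model error $e^m$, the decorrelation hypothesis makes the cross terms vanish, which gives the stated sum of covariance matrices:

```latex
% Forecast error split into predictability error and model error:
%   e^f = e^p + e^m
% Working hypothesis: E[e^p (e^m)^T] = 0 (decorrelation), hence
P^f = \mathbb{E}\!\left[(e^p + e^m)(e^p + e^m)^T\right]
    = \underbrace{\mathbb{E}\!\left[e^p (e^p)^T\right]}_{P^{\mathrm{pred}}}
    + \underbrace{\mathbb{E}\!\left[e^m (e^m)^T\right]}_{Q}
```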

The objective of the paper is to characterize the model-error covariances related to the discretization of the physics equations, i.e. the errors that emerge during the transition from the mathematical formalism of the equations to their implementation on a computer and their numerical resolution. To achieve this, we relied on a new method: the parametric Kalman filter (PKF). The PKF is an implementation of the Kalman filter in which the covariance matrices are approximated by a covariance model characterized by a set of parameters (Pannekoucke et al., 2016; Pannekoucke et al., 2018). Thus, by describing how the parameters evolve over time, we can describe, in an approximate way, the evolution of the full covariance matrix.

By revisiting the formalism of the model-error covariance matrix and using the PKF, we have characterised one of the defects observed in predictions of the chemical composition of the atmosphere: the variances of the forecast error estimated by an ensemble decrease abnormally with time, a phenomenon called the loss of variance. We have shown that this loss of variance is related to a diffusive effect, whose origin appears when determining the modified equation, i.e. the differential equation whose solution is the numerical prediction, as shown in Figure 1. Panel (a) shows the true evolution of the concentration of a chemical compound transported by a heterogeneous wind, and panel (b) the numerical prediction calculated with a simple numerical scheme: the intensity of the numerical solution decreases abnormally compared to the theoretical solution. With the PKF equations, we have characterized the evolution of the model-error variance (panel c), which we have shown to be coupled with the evolution of the anisotropy of the correlation functions (characterized by the significant correlation length scale, panel d). This is the first time that the properties of the model-error covariance matrix linked to the defects of the numerical resolution scheme have been characterized in this way.
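The diffusive effect hidden in the modified equation can be demonstrated with a toy example (our own setup, not the paper's configuration): a first-order upwind discretization of the transport equation u_t + c u_x = 0 has the modified equation u_t + c u_x = (c Δx/2)(1 − ν) u_xx, so the scheme silently adds diffusion and the numerical solution decays while the exact solution merely translates:

```python
import numpy as np

# Periodic 1D advection u_t + c u_x = 0, first-order upwind scheme
nx, c = 200, 1.0
dx = 1.0 / nx
nu = 0.4                      # Courant number c*dt/dx < 1: stable
dt = nu * dx / c
x = np.arange(nx) * dx

u = np.exp(-200.0 * (x - 0.3) ** 2)   # initial Gaussian pulse
u0_max, mass0 = u.max(), u.sum()

for _ in range(400):
    u = u - nu * (u - np.roll(u, 1))  # upwind step (periodic domain)

# The modified equation's diffusion term D = c*dx/2*(1 - nu) damps the
# pulse: the amplitude drops even though total mass is conserved
print(u.max() / u0_max)
```

The exact solution would keep the maximum at its initial value forever; the drop measured here is entirely numerical, the same mechanism the paper identifies behind the loss of variance.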

This article has not only enabled us to understand a behaviour observed in practice (here the loss of variance), but above all it has opened up a new theoretical avenue of exploration for the characterization of model error covariances.

**References**

O. Pannekoucke, S. Ricci, S. Barthelemy, R. Ménard, and O. Thual, “Parametric Kalman Filter for chemical transport model,” Tellus, vol. 68, p. 31547, 2016, doi: 10.3402/tellusa.v68.31547.

O. Pannekoucke, M. Bocquet, and R. Ménard, “Parametric covariance dynamics for the nonlinear diffusive Burgers’ equation,” Nonlinear Processes in Geophysics, pp. 1–21, 2018, doi: 10.5194/npg-2018-10.

The reality is that climate science can answer such criticisms and explain why, in a context of established global warming, cold waves can still be observed. Before explaining the mechanisms that still trigger snowy and cold events in a warmer climate, it is important to understand that scientists agree that these events will become less frequent in the future than they are now. Several recent studies have shown that we can already detect a strong signal in the decrease of average snowfall over most of Europe, as well as an increase in minimum temperatures. Furthermore, despite the cold spells that affected large regions of the Northern Hemisphere during the boreal winter, the January 2021 monthly global mean temperature anomaly with respect to the 1981–2010 period is positive over the majority of the globe. A new study suggests that the fingerprint of climate change can be detected in any single day of the observed global record since early 2012. Yet there are at least four reasons why cold waves can still occur at mid-latitudes.

1. The variability and chaotic nature of atmospheric motions can entrain cold air from polar latitudes to mid-latitudes via the disruption of the so-called polar vortex. The polar vortex is a compact area of low pressure and cold temperatures located at northern latitudes. It is particularly strong in the boreal winter because of the large temperature gradient from the North Pole to the equator in that season: the pole receives little radiation in winter because of the polar darkness. Chaotic fluctuations in atmospheric motions can break the vortex and bring polar air to lower latitudes, as just happened in winter 2020/2021.

2. The polar night will always exist, and so the polar vortex will remain capable of advecting relatively cold air from northern to southern latitudes even in warmer future climates.

3. Climate change may have secondary effects on the polar vortex, for example inducing wavier behaviour that would favour cold waves at southern latitudes. This hypothesis is highly debated in the climate science community and no consensus has been reached so far.

4. Ground-to-mid-troposphere temperature gradients may enhance snowfall, especially in the proximity of warmer seas. This could happen in the future if land and sea warm up faster than the atmosphere, and the effect might be locally important where seas are warming faster than the rest of the globe, as for example in the Mediterranean.

Besides the previous points, one has to keep in mind that any statement about climate change based on a single event is meaningless, as climate is constructed from the statistics of many events: one swallow doesn’t make a summer.

Upwelling regions are being affected by climate change in several ways. One of the most important is the change in the wind regime that controls the upwelling dynamics. Another relevant effect is the modification of phytoplankton communities, which constitute the base of the food chain that sustains higher trophic levels, including zooplankton and fish stocks. To find out how these consequences of climate change could affect oxygen concentrations in the Iberian Peninsula Upwelling System, we performed numerical simulations to study the response of oxygen concentration levels to changes in wind patterns and phytoplankton species.

In coastal upwelling systems, winds blowing equatorward cause surface waters to move offshore, and they are replaced by colder, nutrient-rich deep waters. These nutrient-rich waters promote the growth of phytoplankton that supports higher trophic levels. When the cold waters reach the surface, a temperature front forms a few tens of kilometers off the coast. This front is unstable, and wind fluctuations quickly cause the formation of vortices and filaments that can extend hundreds of kilometers offshore. Our study focused on this dynamical environment, where three elements control the oxygen concentrations: the atmosphere, the biological activity and the internal upwelling dynamics. The air–sea oxygen fluxes are determined by the wind speed and the oxygen concentrations at the interface; living organisms produce oxygen through photosynthesis and consume it by respiration; and the upwelling dynamics can either bring waters depleted or enriched in oxygen up from the subsurface, or transport oxygen-rich or oxygen-poor waters offshore through the frontal instabilities (Figure 1a). It is the balance between all these factors that ultimately determines the oxygen concentration levels (Figure 1b).

We developed a coupled physical–biogeochemical model that accounts for all these elements and used it in simulations of an idealized Iberian Peninsula Upwelling System in which the upwelling winds and the phytoplankton growth rate were modified, reflecting observed and predicted changes in these two factors. Our results (Figure 1b) suggest that oxygen levels in the Iberian Peninsula Upwelling would decrease with winds blowing for long periods of time, because the sustained upwelling dynamics would carry oxygen-rich waters offshore, or with phytoplankton communities dominated by slow-growing species, since the photosynthetic production would be lower.

As of now, the Iberian Peninsula Upwelling System is relatively well oxygenated, but our results demonstrate that the expected trends in wind regime and phytoplankton community composition may, if confirmed, be damaging for the coastal ecosystem of the Iberian Peninsula and for the socio-economic activities that depend on it.

Unless otherwise specified, all seminars will be hosted on the Zoom platform at 2:30 pm Central European Time.

Access details are provided upon free registration here.

Website: https://sites.google.com/view/perspectivesonclimate/home-page.

**Programme:**

**20.01.2021 → Brian Hoskins (Imperial)**: “Potential Vorticity”

**27.01.2021 → Klaus Hasselmann (MPI-M)**: “Klaus Hasselmann’s perspectives on climate: an interview”

**03.02.2021 → Susan Solomon (MIT)**: “The Scientific And Policy Challenges Of The Antarctic Ozone Hole: A Global Success Story”

**10.02.2021 → Kerry Emanuel (MIT)**: “History of the Scientific Understanding of Hurricanes”

**24.02.2021 → Eugenia Kalnay (UMD)**: “It’s not just Climate Change”

**03.03.2021 → David Ruelle (IHES)**: “Chaos Theory: The Multidisciplinary Origins”

**17.03.2021 → Pascale Braconnot (LSCE-IPSL)**: “Paleoclimate modeling to test climate feedbacks and variability”

**24.03.2021 → Berengere Dubrulle (LSCE-IPSL)**: “On the concept of energy cascades in turbulence: from Richardson/Kolmogorov picture to multifractal and beyond”

**31.03.2021 → Giovanni Jona-Lasinio (Sapienza)**: TBA

Everyone is welcome to attend, and we hope this kind of activity will be of benefit to all of you!

The organizing committee

Tommaso Alberti (INAF-IAPS, Italy)

Lesley De Cruz (RMI, Belgium)

Christian Franzke (Jacobs University, Bremen, Germany)

Vera Melinda Galfi (Uppsala University, Sweden)

Valerio Lembo (ISAC-CNR, Italy)

Ensemble forecasting arose from the understanding that the predictability of weather is limited. In a perfect ensemble system, the obtained ensemble of forecasts expresses the distribution of possible weather scenarios to be expected. However, operational forecasts of near-surface weather elements are often underdispersive and underestimate forecast errors.

At Deutscher Wetterdienst (DWD), a model output statistics (MOS) system has been developed that corrects for systematic errors of the numerical ensemble systems ECMWF-ENS and COSMO-D2-EPS. It calibrates probability forecasts to observed relative frequencies, with a focus on severe weather. The calibrated event probabilities can be used for qualified decisions in terms of cost–loss evaluations that relate forecasts of harmful weather to economic value.

The basic concept of the MOS system presented in the paper is to use the ensemble mean and spread as predictors in multiple linear and logistic regressions. Using ensemble products as predictors, instead of processing each ensemble member individually, prevents difficulties with underdispersive statistical results and underestimated errors, especially for longer forecast horizons. During the multiple regressions, the system selects the most relevant predictors based on statistical tests and is therefore able to correct even for conditional biases. It is possible to use the latest available observations as predictors for short-term forecasts, or to use the previous statistical forecasts as predictors for the next time step in order to exploit the persistence of the weather.
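The regression idea can be sketched as follows (synthetic data and all numbers are our own, and only a single linear regression is shown, not DWD's full predictor-selection machinery): the ensemble mean and spread serve as predictors for the observed value, which removes the systematic bias of the raw ensemble mean:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Synthetic "truth" and a biased 20-member ensemble
n_cases, n_members = 500, 20
truth = rng.normal(15.0, 5.0, n_cases)            # observed temperature
bias = 2.0
members = truth[:, None] + bias + rng.normal(0.0, 1.5, (n_cases, n_members))

# MOS predictors: ensemble mean and ensemble spread
ens_mean = members.mean(axis=1)
ens_spread = members.std(axis=1)
X = np.column_stack([ens_mean, ens_spread])

mos = LinearRegression().fit(X, truth)
corrected = mos.predict(X)

raw_rmse = np.sqrt(np.mean((ens_mean - truth) ** 2))
mos_rmse = np.sqrt(np.mean((corrected - truth) ** 2))
print(raw_rmse, mos_rmse)  # the regression removes the systematic bias
```

In the operational system the same idea is extended to many candidate predictors, logistic regressions for event probabilities, and statistical tests for predictor selection.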

Extreme events are most relevant for meteorological warnings. They are (fortunately) rare, however, and long time series are required to capture a sufficiently large number of observed events in order to derive statistically significant estimates. Currently, time series of 8 years of ensemble and observation data are used for training. Model changes have been found to be less harmful than insufficient data. Extreme events with 40 mm or 50 mm of precipitation per hour rarely occur at any given station, which may result in statistical models that permanently predict a probability of 0%. Clusters of stations are therefore formed for a combined modelling of rare events. The clusters are defined according to the climatology of the stations. In order to compute statistical forecasts on a regular grid, the MOS equations of the clusters are evaluated at locations away from the training and observation sites. Figure 1 shows resulting gridded forecasts of wind gust probabilities as an example. According to this forecast, strong gusts are most probable over the sea and at higher elevations.
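The pooling of stations can be sketched like this (entirely hypothetical climatological features and event counts of our own; the actual cluster definitions at DWD are not given here): stations with similar climatology are grouped, and the rare-event samples are pooled per cluster so that a combined model sees enough events to be trained:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n_stations = 30

# Hypothetical station climatologies used as clustering features
climatology = np.column_stack([
    rng.uniform(500.0, 1500.0, n_stations),  # mean annual precip (mm)
    rng.uniform(10.0, 60.0, n_stations),     # extreme hourly precip (mm/h)
])

# Group climatologically similar stations
km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(climatology)
station_cluster = km.labels_

# Pool the (hypothetical) rare-event counts over each cluster
events_per_station = rng.poisson(0.5, n_stations)
pooled_events = np.bincount(station_cluster, weights=events_per_station,
                            minlength=4)
```

A cluster of stations then shares one set of MOS equations, which can also be evaluated at grid points away from any observation site.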

Dylan is a postdoctoral fellow within the Oceans and Atmosphere business unit of CSIRO (Australia). His current research focuses on methods for learning reduced-order models from data, and their applications in studying causal relationships in the climate system.

Terry is leader of the climate forecasting team at CSIRO. Along with Adam Scaife (UK Met Office), he is a current co-chair of the WCRP Grand Challenge in Near Term Climate Prediction. His current interests and research are in coupled data assimilation and ensemble prediction, climate dynamics and causality, and application of statistical dynamics to geophysical fluids.

A familiar challenge in climate science is the need to extract information from very high-dimensional datasets. To do so, the first step is usually the application of a method to reduce the dimension of the data down to a much smaller number of features – that is, combinations of the original variables – that are more amenable to study. The importance of identifying a small set of features that best capture the salient information in the data was recognized early on by Lorenz, among others, whose work on the use of so-called empirical orthogonal functions (EOFs) in statistical weather prediction provided the impetus for widespread adoption of the technique among meteorologists and climate scientists. Nowadays, EOF analysis is one of the most frequently used exploratory tools in the climate scientist’s toolbox.

In the years since Lorenz considered the problem, an extensive literature has developed on a wide range of dimension reduction methods, in which some additional pre-filtering of the data is typically applied before targeting features relevant to the chosen spatio-temporal scales. Examples from this diverse set of methods include vector quantization, based on clustering methods such as k-means, which encodes a given datapoint by assigning it the label of the closest member of a small set of prototypical observations. EOFs represent the data in terms of linear combinations of orthogonal basis vectors, while archetypal analysis (AA) uses basis vectors chosen to lie on the convex hull – the observed “extremes” – of the data. Although conceptually these methods are quite different, they can all be formulated as finding a factorization of the observed design matrix into lower-rank factors that optimizes a particular objective function, subject to different constraints on the optimal factors.
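The factorization view can be made concrete with a small sketch (synthetic "maps" of our own construction): EOF analysis via PCA yields an orthogonal basis and expansion coefficients, while k-means vector quantization encodes each sample by its nearest prototype; both amount to low-rank factorizations of the same design matrix, just under different constraints:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Synthetic design matrix: n_samples "maps" of n_grid points, built
# from two spatial modes plus a little noise
n_samples, n_grid, k = 300, 50, 2
modes = rng.normal(size=(k, n_grid))
amplitudes = rng.normal(size=(n_samples, k))
X = amplitudes @ modes + 0.1 * rng.normal(size=(n_samples, n_grid))
X -= X.mean(axis=0)              # work with anomalies

# EOFs: X ~ pcs @ eofs, with orthogonal basis vectors (rows of eofs)
pca = PCA(n_components=k).fit(X)
eofs = pca.components_           # (k, n_grid) basis vectors
pcs = pca.transform(X)           # (n_samples, k) expansion coefficients

# Vector quantization: X ~ Z @ centroids, where Z one-hot-encodes the
# nearest of k prototype maps (the cluster centroids)
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
centroids = km.cluster_centers_  # (k, n_grid) prototypes

print(pca.explained_variance_ratio_.sum())
```

With the signal planted in two modes, the two EOFs capture nearly all of the variance; the k-means factorization instead trades orthogonality for interpretable prototype maps.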

An important consideration in choosing among such matrix factorizations is how meaningful the resulting representation will be in context; for instance, can the results be directly mapped to distinct physical modes of variability? In “Applications of matrix factorization methods to climate data”, we highlight this through a set of case studies in which the relevant features manifest in dramatically different ways, meaning that certain methods tend to be more useful than others. In sea surface temperature (SST) data, key modes such as El Niño correspond to large temperature anomalies. As a result, describing an SST map in terms of a basis of extreme points, as provided by AA or by related convex codings, is effective at extracting recognizable physical modes. This is not the case when the features of interest do not lie on the boundaries of the observed data, as in the example of quasi-stationary weather patterns. Since these structures are characterized by their recurrent and persistent dynamics, vector quantization methods are more easily interpreted. The accompanying figure shows a four-dimensional basis that results from using k-means (first column), AA (second column), and two different convex codings (third and fourth columns) on Northern Hemisphere geopotential height anomalies. While a prototypical blocking pattern is clearly evident (cluster 3) in the k-means basis, only by imposing some level of regularization (as in the fourth column) do the methods based on convex encodings yield a similarly direct identification of blocking events.

With the development of many alternative dimension reduction techniques, the climate scientist’s toolbox is increasingly well equipped for extracting and summarizing complex structural information from large, high-dimensional datasets. We emphasize that it is essential when selecting a method to take into account the nature of the representation that results, and how well this aligns with the features of interest. Or, in other words, to choose the right tool for the job.

This is a hard scientific challenge that requires state-of-the-art climate models capable of resolving the convective phenomena at the heart of the storms. Unfortunately, current global climate models have a horizontal resolution of 50 to 100 km, meaning that hurricanes and tropical storms are not adequately resolved. Given the nonlinear nature of hurricanes, parameterizations struggle to reproduce intense storms and cause a large underestimation of their winds and precipitation. A workaround consists in performing targeted high-resolution simulations using regional climate models. However, while these are adequate for reproducing the atmospheric physics of hurricanes, they are blind to planetary-scale phenomena and to the ocean dynamics, whose role in the development of hurricanes is fundamental. Indeed, tropical cyclones are influenced by nonlinear oscillations in the dynamics of the ocean, such as El Niño.

Although the confidence in future climate change projections for hurricanes is lower than for other phenomena, new pathways of research may help to improve our understanding of tropical cyclones. The mismatch between global and regional climate models could be addressed using machine learning methods capable of learning scaling relationships for convective phenomena and improving parameterizations. Another pathway is to reconstruct past storms in detail to improve the statistics of observed events. This is a particularly hard challenge which requires some climate archeology (also called paleotempestology), given the difficulty of detecting storms in the pre-satellite era.

From a scientific perspective, studying extreme weather events is a multifaceted endeavour. First of all, extremes are by definition rare, meaning that our understanding of them relies on a relatively small number of observations. There is no easy solution to this: no matter how much data we have, extremes will always only make up a small part of it… otherwise they would not be extremes to start with! This in turn makes it difficult to predict how the occurrence and characteristics of extremes may change in the future. From a practical point of view, the impacts of extreme weather events are multifarious and can only be understood from a multidisciplinary perspective. For example, a heatwave may have major impacts on public health (e.g. deaths of people already in precarious health conditions), on vegetation (e.g. browning or die-back), on livestock (e.g. lowered milk production, lower birth rates, etc.), on energy production (e.g. difficulties in cooling power plants) and on industrial output (e.g. factory closures due to excessive temperatures). No single discipline can cover all of these aspects. A better understanding of extreme weather and its impacts can therefore only be achieved through a multidisciplinary, collaborative effort.

The European research community has risen to this challenge thanks to centralised European Union research funding agencies under the European Commission, such as the Research Executive Agency and the European Research Council. The support from these funding agencies provides opportunities to organise large international consortia and see high-risk, high-gain projects come to fruition. One such consortium recently received funding to conduct research on extreme weather events in Europe. The *european weather Extremes: DrIvers, Predictability and Impacts* (EDIPI) consortium comprises nine universities/research centres and 11 partner organisations, ranging from insurance companies to operational forecasting centres. EDIPI is funded under the Marie Skłodowska-Curie programme and will hire 14 Ph.D. students across several countries, starting in 2021. The project aims both to advance our scientific understanding of extreme weather and to train a cohort of young researchers who can ensure continued efforts in this field in the coming years.

The project mainly addresses temperature, precipitation and surface wind extremes over Europe and the Mediterranean, and will tackle three overarching scientific questions: 1) Why does a specific type of weather extreme occur? 2) How can we use this knowledge to better predict when it will occur? And 3) what are the likely impacts once it does occur? To answer these critical questions, EDIPI will combine climate science, statistical mechanics, dynamical systems theory, risk management, agricultural science, epidemiology and more. One of the Ph.D. projects seeks to leverage concepts from statistical mechanics to simulate a large number of winter storms (such as the one shown in Fig. 1) in a computationally efficient way with a numerical climate model. Another project will use data from social networks to forecast temperature-attributable mortality. A third project will use numerical models from the insurance sector to better quantify future losses from European windstorms. On the training front, EDIPI will ensure an active exchange between the academic, public and private sectors. A key aim of the project is to prepare the Ph.D. students for a broad range of career options in the field of weather extremes, both within and beyond academia.

EDIPI will be advertising doctoral positions in early 2021. More information on the individual Ph.D. projects, participating institutions and supervisors can be found at www.edipi-itn.eu.

**This post is a co-blog by the EGU divisions of Atmospheric Science, Climate: Past, Present & Future, and Nonlinear Processes in Geosciences, and has been edited by the editorial boards.**
