Abhirup is pursuing a doctoral degree in Theoretical Physics at the University of Potsdam. He is working at the Potsdam Institute for Climate Impact Research as a guest researcher within the DFG-funded NatRiskChange project. In this project, he uses recurrence analysis to compare the recurrence properties of different potential drivers underlying the temporal changes of flood hazards. He is also working on further methodological developments of this technique to extend its capabilities to event-like data (extreme events), data with uncertainties, and spatio-temporal recurrences. Abhirup holds a Master’s degree in Physics from Bharathidasan University, Trichy, India.

Extreme events attract considerable attention in the scientific community across different disciplines due to their significant impact on the economy and human lives. Floods are examples of natural extreme events that cause substantial losses of economic assets and lives. Although extreme events seem stochastic, they emerge from complex dynamical systems and often exhibit an inherent recurring behavior.

When studying such complex systems, linear methods seldom capture the whole picture, as the system itself is nonlinear. Recurrence plot analysis is a robust nonlinear framework that helps us visualize and quantify a system’s underlying dynamics.

The underlying dynamics of a system can be qualitatively assessed by looking at the patterns of a recurrence plot. For a quantitative understanding, the line structures of the recurrence plot are used.
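As a minimal illustration (not the method used in the study), a recurrence plot for a scalar time series can be computed in a few lines; the threshold `eps` and the sine-wave input below are illustrative choices:

```python
import numpy as np

def recurrence_matrix(x, eps):
    """Binary recurrence matrix: R[i, j] = 1 when states i and j
    are closer than the threshold eps."""
    d = np.abs(x[:, None] - x[None, :])  # pairwise distances of a scalar series
    return (d < eps).astype(int)

# A periodic signal produces the characteristic diagonal line structures.
t = np.linspace(0, 4 * np.pi, 200)
R = recurrence_matrix(np.sin(t), eps=0.1)
```

Plotting `R` as an image reveals the diagonal lines whose lengths feed the quantitative measures mentioned above.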

Conventional recurrence plot analysis applies the Euclidean (or another) norm in the system’s phase space to identify recurrences. However, it is not suitable for analyzing extreme event-like data, because the rarity of such events leads to significant gaps in the data. The edit distance method is better suited to identifying recurrences in such data. It was originally proposed, in the context of neuroscience, as a metric for quantifying the similarity between spike trains. In this method, the time series, or the series of events, is divided into small time segments. The similarity between a pair of time segments is then calculated using three elementary operations – shifting in time, deletion, and insertion of events.
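A minimal sketch of such an edit distance between two event sequences, in the spirit of the Victor–Purpura spike metric (the unit insertion/deletion costs and the shift-cost parameter `q` are illustrative assumptions, not the exact costs used in the study):

```python
import numpy as np

def edit_distance(a, b, q=1.0):
    """Edit distance between two sorted event-time sequences a and b.
    Deleting or inserting an event costs 1; shifting an event by dt costs q*|dt|."""
    n, m = len(a), len(b)
    D = np.zeros((n + 1, m + 1))
    D[:, 0] = np.arange(n + 1)  # delete all events of a
    D[0, :] = np.arange(m + 1)  # insert all events of b
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i, j] = min(
                D[i - 1, j] + 1,                               # delete a[i-1]
                D[i, j - 1] + 1,                               # insert b[j-1]
                D[i - 1, j - 1] + q * abs(a[i - 1] - b[j - 1]) # shift in time
            )
    return D[n, m]

d = edit_distance([10.0, 20.0, 35.0], [10.5, 20.0], q=0.5)
```

Segments whose distance falls below a threshold are then marked as recurrent, yielding a recurrence plot for event-like data.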

In this new study, the existing edit distance method is improved for extreme event-like data by introducing a nonlinear function that incorporates a temporal tolerance to deal with the quasi-periodic nature of real-world extreme events.

The proposed modified measure is demonstrated on prototypical examples that mimic certain behaviors of extreme natural events. Finally, it is applied to flood events of the Mississippi River and reveals a significant serial dependency of those events. This finding has critical implications, such as the quasi-periodic occurrence of flood events due to the nonlinear interplay between their drivers.

Several studies confirm that anthropogenic emissions result in a significant increase in the loss of ice from the global ice sheets, although we cannot link every single iceberg formation to climate change. “Periodic calving of large chunks of ice shelves is part of a natural cycle, and breaking off of A-76 is not linked to climate change”, Ted Scambos, a research glaciologist at the University of Colorado in Boulder, said, according to Reuters. “Because the ice was already floating in the sea before dislodging from the coast, its break-away does not raise ocean levels”, Scambos continued.

The reason why a single iceberg formation cannot be attributed to climate change probably lies in the difficulty of studying polar regions: the monitoring and tracking of polar icebergs has become possible only thanks to the recent development of satellites. In the last 40 years, we have detected only about five icebergs of the same size as A-76. The variability of polar weather makes it difficult to discern trends in iceberg dynamics from fluctuations due to natural phenomena.

Nonetheless, worrying results come from the scientific community: mathematicians at the University of Sydney have shown that icebergs are melting faster than current models describe. Their findings are based on a new model that more accurately represents the melt speed of icebergs in the ocean. Lead author and PhD student Eric Hester said: “While icebergs are only one part of the global climate system, our improved model provides us with a dial that we can tune to better capture the reality of Earth’s changing climate. In icebergs moving in oceans, the melting on the base can be up to 30 percent faster than in old models.”

A misrepresentation of some elements of the cryosphere, such as icebergs, could hide tipping elements of the climate system. A tipping point in the climate system is a threshold that, when exceeded, can lead to great changes in the state of the climate. Two global tipping elements concern the cryosphere: the irreversible melting of the ice sheets of Greenland and Antarctica. In Greenland, a positive feedback cycle exists between melt and surface elevation: at lower altitudes, temperatures are higher, resulting in additional melting. This feedback loop could become strong enough to cause irreversible melting. In West Antarctica, a tipping point could be triggered by the instability of the marine ice sheet. Either way, this would accelerate the rise in sea levels. Due to their importance for the climate system and their effect on ecosystems and society, international research projects such as TiPES are trying to understand climate tipping points.

What we probably missed the most last year was the possibility of networking with colleagues and friends, and this is especially true for Early Career Scientists (ECS). They would particularly benefit not only from scientific sessions and presenting their own work during conferences and schools, but also from networking opportunities with other ECS or with established and senior researchers.

For this reason, on **Thursday 22 April 18:00 CEST** we are organizing our NP ECS event (currently scheduled as NET35). It will be an online networking event aimed at meeting new people and exchanging information about, for example, career paths, challenges faced in our field, or tips for working in times of restricted travel options. It is intended to bring together both ECS and senior scientists, in order to be as inclusive and fruitful as possible. Moreover, NP will also host its division-wide networking event on **Wednesday 28 April 17:30 CEST** (currently scheduled as NET6: NP event for all division members).

Both events will be held on Gather.town, an interesting platform that allows us to move freely in a virtual space and interact with people. Participants will find a virtual environment consisting of a lobby, a main hall, six discussion rooms, an auditorium, a lounge, and a garden. Have a look at the pictures below to get an idea of what you can expect!

We really hope to see all of you together at these (unfortunately virtual) networking events!

**Outside the virtual space**

**Inside the virtual space**

**A private space for discussions**

**The lounge**

Understanding the transport of tracers and particulates is an important topic in oceanography and in fluid dynamics in general. The trajectory of an individual fluid parcel will in many cases strongly depend on its initial condition, i.e. the flow is chaotic. At the same time, on a more macroscopic level, many flows possess some form of structure that is less sensitive to the initial conditions of the individual parcels. This structure is determined by the collective behaviour of groups of parcels for intermediate or long times.

An example of such macroscopic structures in geophysical flows are eddies. In the ocean, mesoscale eddies (on the order of 10-100 km) are well known for capturing water masses while being transported by a background flow. For describing the pathway of a fluid parcel captured in an eddy, what really matters is the motion of the entire eddy in the background flow, and not so much where exactly that parcel is within the eddy. We can simplify the problem by saying that all parcels in the eddy follow approximately the same pathway, i.e. the parcels stay approximately coherent over a certain time interval. Such sets of fluid parcels (or fluid volumes) have therefore been termed “finite-time coherent sets” or “Lagrangian coherent structures” in the fluid dynamics community.

In our article, we explore a density-based clustering technique, the so-called OPTICS algorithm (Ordering Points To Identify the Clustering Structure), published by Ankerst et al. in 1999, for the detection of such finite-time coherent sets. The goal of density-based clustering is simple: find groups of points that are densely distributed, i.e. points that are all close to each other. We take modelled trajectories of fluid parcels and represent them as points in a high-dimensional Euclidean space. In this way, two points in that space that are very close in terms of their Euclidean distance correspond to parcels that stay close to each other along their entire trajectories. Once this is done, OPTICS does the rest. In the form we propose, the method does not need any sophisticated pre-processing of the trajectory data. What’s also nice about OPTICS is that it is available in the scikit-learn library of Python, so it is quite straightforward to use.

What OPTICS does is take the data and create a reachability plot. This is a quite condensed visualization of how similar the fluid trajectories are – condensed because it is a one-dimensional graph defined on the trajectories. OPTICS creates an ordered list of the trajectories in such a way that densely populated regions are close to each other in this list. Finite-time coherent sets can then simply be identified by examining the “topography” of this plot, i.e. its troughs and crests. An example of a reachability plot for a model flow containing an atmospheric jet and vortices, the Bickley jet model flow, can be seen in the first column of the figure above. One can obtain clustering results by thresholding the reachability value (the y-axis of that plot) at a specific level and then identifying connected regions below that line as coherent sets. This procedure is also known as DBSCAN clustering, but what is special about OPTICS is that multiple DBSCAN clustering results (i.e. for different horizontal lines) can be obtained from one reachability plot.
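The workflow can be sketched with scikit-learn on toy data (the Gaussian “trajectory clusters”, the dimension 10, `min_samples=5`, and `eps=2.0` are all illustrative stand-ins, not the settings of the article):

```python
import numpy as np
from sklearn.cluster import OPTICS, cluster_optics_dbscan

# Toy stand-in for trajectory data: each "trajectory" is flattened into one
# point in a high-dimensional Euclidean space; two dense groups plus sparse noise.
rng = np.random.default_rng(0)
X = np.vstack([
    rng.normal(0.0, 0.3, (50, 10)),    # coherent set 1
    rng.normal(5.0, 0.3, (50, 10)),    # coherent set 2
    rng.uniform(-5.0, 10.0, (10, 10)), # incoherent "noise" parcels
])

opt = OPTICS(min_samples=5).fit(X)

# Reachability plot: reachability values in the cluster ordering.
reach = opt.reachability_[opt.ordering_]

# One reachability analysis yields many DBSCAN-style clusterings by thresholding:
labels = cluster_optics_dbscan(reachability=opt.reachability_,
                               core_distances=opt.core_distances_,
                               ordering=opt.ordering_, eps=2.0)
```

Calling `cluster_optics_dbscan` again with a different `eps` gives another horizontal cut through the same reachability plot, without re-running the clustering.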

Two things make OPTICS specifically usable for such situations in fluid dynamics. First, it has an intrinsic notion of coherence hierarchies. We can see this by looking at the different rows in the figure, where the clustering results for different choices of the reachability threshold are shown. For a large threshold (first row in the figure), we really only see the very large-scale structure of the jet separating the northern and southern parts of the fluid. Decreasing the threshold is then similar to using a magnifying glass: if we look closer, we identify smaller individual eddies in the northern and southern parts of the flow. The second useful property of OPTICS is that not every point has to be part of a cluster. In fact, in the second and third rows of the figure, the grey points are identified as noise, i.e. they do not belong to any coherent set. This is different from many recent approaches that rely on graph partitioning algorithms for cluster detection. There, every point has to be part of a coherent set, which strongly limits the applicability to realistic geophysical flows. In our article, we also apply OPTICS to modelled trajectories in the Agulhas region and find, as expected, Agulhas rings.

We show in our paper that a 20-year-old algorithm can be very successful in detecting finite-time coherent sets, even in a purely data-driven form, i.e. with very little additional heuristics or pre-processing of the data. It may well be that even better algorithms exist that are suited to research questions in fluid dynamics. The Lagrangian fluid dynamics community should therefore explore more of the existing methods and algorithms from data science, as these have the potential to greatly improve our understanding of fluid flows.

In geophysics, forecasting is based on solving the equations of physics with the help of a computer. To calculate a forecast we need an initial condition. Estimating this condition is difficult, however, because in general the available observations are few or heterogeneously distributed in space and time. A reference algorithm for this estimation problem is the Kalman filter. It is based on the temporal propagation of the state during the prediction stage, and on the updating of the prediction- and analysis-error covariance matrices during the analysis stage.

While the formalism of the Kalman filter is based on simple formulas of linear algebra, its practical implementation faces two pitfalls:

First, in the systems of interest to us, these matrices are very large, and it is impossible to compute their temporal propagation explicitly during forecasting. The ensemble Kalman filter, which approximates the covariance matrices by an ensemble estimate, offers a way to propagate the covariances through the forecast of each member of the ensemble.
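The ensemble idea can be sketched in a few lines (the state dimension, ensemble size, and random "forecasts" below are illustrative placeholders): instead of propagating the full n × n matrix, one propagates N members and estimates the covariance from their spread.

```python
import numpy as np

rng = np.random.default_rng(1)
n, N = 100, 20                      # state dimension, ensemble size (N << n)
ensemble = rng.normal(size=(N, n))  # stand-in for N forecasts of the state

# Sample estimate of the forecast-error covariance from the ensemble anomalies.
anomalies = ensemble - ensemble.mean(axis=0)
P_f = anomalies.T @ anomalies / (N - 1)  # rank <= N-1 approximation of the n x n matrix
```

The estimate has rank at most N-1, which is precisely why a small ensemble can stand in for a covariance matrix that would otherwise be far too large to propagate.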

Secondly, while part of the forecast-error statistics can be explained by the propagation of initial uncertainties – the predictability error – another contribution, linked to defects in the numerical model – the model error – is much more difficult to characterize. To go further in understanding model error, the working hypothesis of a decorrelation between the predictability error and the model error is often introduced, which leads to decomposing the prediction-error covariance matrix as the sum of the predictability-error and model-error covariance matrices. Even with this assumption it is very difficult to characterize the model-error statistics; moreover, in reality the predictability error is never decorrelated from the model error.
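Under the stated decorrelation hypothesis, the decomposition can be written compactly (with $e^f$, $e^p$, $e^m$ denoting the forecast, predictability, and model errors, and $P$ the corresponding covariance matrices; the notation here is ours, not the paper’s):

```latex
% Forecast error = predictability error + model error: e^f = e^p + e^m.
% If e^p and e^m are assumed uncorrelated, the cross-covariance terms vanish:
P^f = \mathbb{E}\!\left[ (e^p + e^m)(e^p + e^m)^T \right] = P^p + P^m .
```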

The objective of the paper is to characterize the model-error covariances related to the discretization of the physics equations, i.e. the errors that emerge during the transition from the mathematical formalism of the physics equations to their implementation on a computer and their numerical resolution. To achieve this, we relied on a new method: the parametric Kalman filter (PKF). The PKF is an implementation of the Kalman filter in which the covariance matrices are approximated by a covariance model characterized by a set of parameters (Pannekoucke et al., 2016, 2018). Thus, by describing how the parameters evolve over time, we can describe, in an approximate way, the evolution of the full covariance matrix.

By revisiting the formalism of the model-error covariance matrix and using the PKF, we have characterized one of the defects observed in predictions of the chemical composition of the atmosphere: the variances of the forecast error estimated by an ensemble decrease abnormally with time, a phenomenon called the loss of variance. We have shown that this loss of variance is related to a diffusive effect, whose origin appears when determining the modified equation – the differential equation whose solution is the numerical prediction – as shown in Figure 1. Panel (a) shows the true evolution of the concentration of a chemical compound transported by a heterogeneous wind, and panel (b) the numerical prediction calculated with a simple numerical scheme: the intensity of the numerical solution is observed to decrease abnormally compared with the theoretical solution. With the PKF equations, we have characterized the evolution of the model-error variance (panel c), which we have shown to be coupled with the evolution of the anisotropy of the correlation functions (characterized by the significant correlation scale, panel d). This is the first time that the properties of the model-error covariance matrix linked to the defects of the numerical resolution scheme have been characterized.

This article has not only enabled us to understand a behaviour observed in practice (here the loss of variance), but above all it has opened a new theoretical avenue for the characterization of model-error covariances.

**References**

O. Pannekoucke, S. Ricci, S. Barthelemy, R. Ménard, and O. Thual, “Parametric Kalman Filter for chemical transport model,” Tellus, vol. 68, p. 31547, 2016, doi: 10.3402/tellusa.v68.31547.

O. Pannekoucke, M. Bocquet, and R. Ménard, “Parametric covariance dynamics for the nonlinear diffusive Burgers’ equation,” Nonlinear Processes in Geophysics, vol. 2018, pp. 1–21, 2018, doi: 10.5194/npg-2018-10.

The reality is that climate science can answer such criticisms and explain why, in a context of established global warming, cold waves can still be observed. Before explaining the mechanisms that still trigger snowy and cold events in warmer climates, it is important to understand that scientists agree that these events will become less frequent in the future than they are now. A few recent studies have shown that we can already detect a strong signal in the decrease of average snowfall over most of Europe, as well as an increase of temperature minima. Furthermore, despite the cold spells that affected large regions of the Northern Hemisphere during the boreal winter, the January 2021 monthly global temperature anomaly with respect to the 1981-2010 period is positive over the majority of the globe. A new study suggests that the fingerprint of climate change can be detected in any single day of the observed global record since early 2012. Yet there are at least four reasons that can still produce cold waves at mid-latitudes.

1. The variability and chaoticity of atmospheric motions can entrain cold air from polar latitudes to mid-latitudes via the disruption of the so-called polar vortex. The polar vortex is a compact area of low pressure and cold temperatures located at northern latitudes. It is particularly strong in the boreal winter because of the large temperature gradient from the North Pole to the equator in that season: the pole receives little radiation in winter because of the darkness. Chaotic fluctuations in atmospheric motions can break the vortex and bring polar air to low latitudes, as just happened in winter 2020/2021.

2. The polar night will always exist, and so the polar vortex will remain capable of advecting relatively cold air from northern to southern latitudes even in warmer future climates.

3. Climate change may have secondary effects on the polar vortex, for example inducing a wavier behaviour that would favour cold waves at southern latitudes. This hypothesis is highly debated in the climate science community, and no consensus has been reached so far.

4. Ground-to-mid-troposphere temperature gradients may enhance snowfall, especially in the proximity of warmer seas. This could happen in the future if land and sea warm up faster than the atmosphere, and this effect might be locally important where seas are warming faster than the rest of the globe, as for example in the Mediterranean.

Besides the previous points, one has to keep in mind that any statement about climate change based on a single event is meaningless, as climate is constructed as the statistics of multiple events: one swallow doesn’t make a summer.

Upwelling regions are being affected by climate change in several ways. One of the most important is the change in the wind regime that controls the upwelling dynamics. Another relevant effect is the modification of phytoplankton communities, which constitute the base of the food chain that sustains higher trophic levels including zooplankton and fish stocks. To find out how these consequences of climate change could affect the oxygen concentrations in the Iberian Peninsula Upwelling System, we performed numerical simulations to study the response of oxygen concentration levels to changes in wind patterns and phytoplankton species.

In coastal upwelling systems, winds blowing equatorward cause surface waters to move offshore, being replaced by colder, nutrient-rich deep waters. These nutrient-rich waters promote the growth of phytoplankton that supports higher trophic levels. When the cold waters reach the surface, a temperature front forms a few tens of kilometers off the coast. This front is unstable, and wind fluctuations quickly cause the formation of vortices and filaments that can extend hundreds of kilometers offshore. Our study focused on this dynamical environment, where three elements control the oxygen concentrations: the atmosphere, the biological activity, and the internal upwelling dynamics. The air-sea oxygen fluxes are determined by the wind speed and the interface oxygen concentrations; the living organisms produce oxygen through photosynthesis and consume it by respiration; and the upwelling dynamics can either bring waters depleted/enriched in oxygen from the subsurface or transport oxygen-rich/poor waters offshore through the frontal instabilities (Figure 1a). It is the balance between all these factors that ultimately determines the oxygen concentration levels (Figure 1b).

We developed a coupled physical-biogeochemical model that accounts for all these elements and used it in simulations of an idealized Iberian Peninsula Upwelling System in which the upwelling winds and the phytoplankton growth rate were modified, reflecting observed and predicted changes in these two factors. Our results (Figure 1b) suggest that oxygen levels in the Iberian Peninsula Upwelling would decrease with winds blowing for long periods of time, because the sustained upwelling dynamics would carry oxygen-rich waters offshore, or with phytoplankton communities dominated by slow-growing species, since the photosynthetic production would be lower.

As of now, the Iberian Peninsula Upwelling System is relatively well oxygenated, but our results demonstrate that the expected trends in wind regime and phytoplankton community modification may be, if confirmed, damaging for the coastal ecosystem of the Iberian Peninsula and for the socio-economic activities that depend on it.

Unless otherwise specified, all seminars will be hosted via the Zoom platform at 2:30 pm Central European Time.

Access details are provided upon free registration here.

Website: https://sites.google.com/view/perspectivesonclimate/home-page.

**Programme:**

**20.01.2021 → Brian Hoskins (Imperial)**: “Potential Vorticity”

**27.01.2021 → Klaus Hasselmann (MPI-M)**: “Klaus Hasselmann’s perspectives on climate: an interview”

**03.02.2021 → Susan Solomon (MIT)**: “The Scientific And Policy Challenges Of The Antarctic Ozone Hole: A Global Success Story”

**10.02.2021 → Kerry Emanuel (MIT)**: “History of the Scientific Understanding of Hurricanes”

**24.02.2021 → Eugenia Kalnay (UMD)**: “It’s not just Climate Change”

**03.03.2021 → David Ruelle (IHES)**: “Chaos Theory: The Multidisciplinary Origins”

**17.03.2021 → Pascale Braconnot (LSCE-IPSL)**: “Paleoclimate modeling to test climate feedbacks and variability”

**24.03.2021 → Berengere Dubrulle (LSCE-IPSL)**: “On the concept of energy cascades in turbulence: from Richardson/Kolmogorov picture to multifractal and beyond”

**31.03.2021 → Giovanni Jona-Lasinio (Sapienza)**: TBA

Everyone is welcome to attend, and we hope this kind of activity will be of benefit to all of you!

The organizing committee

Tommaso Alberti (INAF-IAPS, Italy)

Lesley De Cruz (RMI, Belgium)

Christian Franzke (Jacobs University, Bremen, Germany)

Vera Melinda Galfi (Uppsala University, Sweden)

Valerio Lembo (ISAC-CNR, Italy)

Ensemble forecasting arose with the understanding of the limited predictability of weather. In a perfect ensemble system, the obtained ensemble of forecasts expresses the distribution of possible weather scenarios to be expected. However, operational forecasts of near-surface weather elements are often underdispersive and underestimate forecast errors.

At Deutscher Wetterdienst (DWD), a model output statistics (MOS) system has been developed that corrects for systematic errors of the numerical ensemble systems ECMWF-ENS and COSMO-D2-EPS. It calibrates probability forecasts to observed relative frequencies, with a focus on severe weather. The calibrated event probabilities can be used for qualified decisions in terms of cost-loss evaluations that relate forecasts of harmful weather to economic value.

The basic concept of the MOS system presented in the paper is to use the ensemble mean and spread as predictors in multiple linear and logistic regressions. Using ensemble products as predictors, instead of processing each ensemble member individually, prevents difficulties with underdispersive statistical results and underestimated errors, especially for longer forecast horizons. During the multiple regressions, the system selects the most relevant predictors based on statistical tests and is therefore able to correct even for conditional biases. It is possible to use the latest available observations as predictors for short-term forecasts, or to use the previous statistical forecasts as predictors for the next time step in order to exploit the persistence of the weather.
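The core idea can be sketched with synthetic data (everything below – the ensemble statistics, the regression targets, and the 14-unit exceedance threshold – is an illustrative toy, not DWD’s operational setup): regress an observed quantity on the ensemble mean and spread, and an event probability on the same predictors.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(2)
ens = rng.normal(10.0, 3.0, size=(500, 40))      # 500 cases, 40 ensemble members
X = np.column_stack([ens.mean(axis=1),           # predictor 1: ensemble mean
                     ens.std(axis=1)])           # predictor 2: ensemble spread
obs = 1.2 * X[:, 0] + rng.normal(0.0, 1.0, 500)  # synthetic observations

mos = LinearRegression().fit(X, obs)             # calibrated point forecast
event = (obs > 14.0).astype(int)                 # e.g. exceedance of a warning threshold
prob = LogisticRegression().fit(X, event)        # calibrated event probability

p = prob.predict_proba(X)[:, 1]                  # probabilities in [0, 1]
```

In the operational system, predictor selection via statistical tests and the use of lagged observations or previous MOS forecasts would come on top of this skeleton.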

Extreme events are most relevant for meteorological warnings. They are (fortunately) rare, however, and long time series are required to capture a sufficiently large number of observed events and derive statistically significant estimates. Currently, time series of 8 years of ensemble and observation data are used for training. Model changes have been found to be less harmful than insufficient data. Extreme events with 40 mm or 50 mm of precipitation per hour appear so rarely at certain stations that the statistical models may permanently predict a 0% probability. Therefore, stations are gathered into clusters for a combined modelling of rare events. The clusters are defined according to the climatology of the stations. In order to compute statistical forecasts on a regular grid, the MOS equations of the clusters are evaluated at locations apart from the training and observation sites. Figure 1 shows resulting gridded forecasts of wind gust probabilities as an example. According to this forecast, strong gusts are most probable over the sea and at higher elevations.

Dylan is a postdoctoral fellow within the Oceans and Atmosphere business unit of CSIRO (Australia). His current research focuses on methods for learning reduced-order models from data, and on their applications in studying causal relationships in the climate system.

Terry is leader of the climate forecasting team at CSIRO. Along with Adam Scaife (UK Met Office), he is a current co-chair of the WCRP Grand Challenge in Near Term Climate Prediction. His current interests and research are in coupled data assimilation and ensemble prediction, climate dynamics and causality, and application of statistical dynamics to geophysical fluids.

A familiar challenge in climate science is the need to extract information from very high-dimensional datasets. To do so, the first step is usually the application of a method to reduce the dimension of the data down to a much smaller number of features – that is, combinations of the original variables – that are more amenable to study. The importance of identifying a small set of features that best capture the salient information in the data was recognized early on by Lorenz, among others, whose work on the use of so-called empirical orthogonal functions (EOFs) in statistical weather prediction provided the impetus for widespread adoption of the technique among meteorologists and climate scientists. Nowadays, EOF analysis is one of the most frequently used exploratory tools in the climate scientist’s toolbox.

In the years since Lorenz considered the problem, an extensive literature has developed on a wide range of dimension reduction methods, in which some additional pre-filtering of the data is typically applied before targeting features relevant to the chosen spatio-temporal scales. Examples from this diverse set of methods include vector quantization, based on clustering methods such as k-means, which encodes a given data point by assigning it the label of the closest member of a small set of prototypical observations. EOFs represent the data in terms of linear combinations of orthogonal basis vectors, while archetypal analysis (AA) uses basis vectors chosen to lie on the convex hull – the observed “extremes” – of the data. Although conceptually these methods are quite different, they can all be formulated as finding a factorization of the observed design matrix into lower-rank factors that optimizes a particular objective function, subject to different constraints on the optimal factors.
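The common factorization view, X ≈ W H with method-specific constraints on the factors, can be made concrete with scikit-learn (random data and a rank of 4 are illustrative choices; PCA stands in for EOF analysis here, and k-means for vector quantization):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 30))  # 200 samples ("maps"), 30 grid points

# EOFs/PCA: X ~ scores @ components, with mutually orthogonal components.
pca = PCA(n_components=4).fit(X)
X_pca = pca.inverse_transform(pca.transform(X))  # rank-4 reconstruction

# Vector quantization (k-means): X ~ onehot(labels) @ centroids,
# i.e. each sample is encoded by exactly one prototypical basis vector.
km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)
onehot = np.eye(4)[km.labels_]
X_km = onehot @ km.cluster_centers_
```

Both reconstructions are products of a 200 × 4 factor and a 4 × 30 factor; only the constraints differ (orthogonality for PCA, one-hot rows for k-means), which is exactly the unifying view described above.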

An important consideration in choosing among such matrix factorizations is how meaningful the resulting representation will be in context; for instance, can the results be directly mapped to distinct physical modes of variability? In “Applications of matrix factorization methods to climate data”, we highlight this through a set of case studies in which the relevant features manifest in dramatically different ways, so that certain methods tend to be more useful than others. In sea surface temperature (SST) data, key modes such as El Niño correspond to large temperature anomalies. As a result, describing an SST map in terms of a basis of extreme points, as provided by AA or by related convex codings, is effective in extracting recognizable physical modes. This is not the case when the features of interest do not lie on the boundaries of the observed data, as in the example of quasi-stationary weather patterns. Since these structures are characterized by their recurrent and persistent dynamics, vector quantization methods are more easily interpreted. The accompanying figure shows a four-dimensional basis resulting from k-means, AA (second column), and two different convex codings (third and fourth columns) applied to Northern Hemisphere geopotential height anomalies. While a prototypical blocking pattern is clearly evident in the k-means basis (cluster 3), only by imposing some level of regularization (as in the fourth column) do the methods based on convex encodings yield a similarly direct identification of blocking events.

With the development of many alternative dimension reduction techniques, the climate scientist’s toolbox is increasingly well equipped for extracting and summarizing complex structural information from large, high-dimensional datasets. We emphasize that it is essential when selecting a method to take into account the nature of the representation that results, and how well this aligns with the features of interest. Or, in other words, to choose the right tool for the job.
