A robust estimation of probabilities of extreme floods is the Holy Grail in flood hydrology in view of limited available observations, variability of climate, and complexity of flood generation processes in catchments. Flood frequency hydrology, spearheaded in the past decades by Ralf Merz and Günter Blöschl, offers a powerful toolbox to enhance the reliability of flood probability estimates by considering past historical floods (temporal information expansion), learning from similar neighbouring catchments that have longer observational records (spatial information expansion), and accounting for the different frequency of various flood types (causal information expansion).
However, climate change comes as an additional hurdle – it undermines the fundamental stationarity assumption in our traditional extreme value statistics and challenges flood frequency hydrology. We face a situation in which already limited past observations become even less suitable for guiding us into the future. Additionally, changes in dominant flood types may affect spatial and causal information expansion. Estimates of flood probabilities in future climate are urgently needed for adjustment of flood protection infrastructure (e.g., dikes, dam spillways), flood hazard and risk maps, and flood risk management plans. So, how can we move forward?
Ingredients for “cooking” future flood probabilities
Climate models offer plenty of scenario simulations, yet they are limited in spatial resolution and in the length of time series covering a specific period that can be assumed stationary – a prerequisite for classical extreme value statistics.
For example, a 30-year period in the future climate is typically covered by a single scenario/model realization, which is equivalent to a 30-year observational record – certainly not enough for a robust estimation of the probability of extreme floods. Statistical hydrology has excelled in the development of weather generators – stochastic models trained on observed or simulated data to produce very long synthetic weather series. They can bridge spatial scales and generate weather fields within the assumed range of climate variability while retaining the key statistics of the training datasets. Finally, hydrological models are continuously advancing through more sophisticated process descriptions, discretizations, and parameter optimizations.
These three ripe fruits are our key ingredients for “cooking” the probabilities of future floods. The “mixing pot” for this shake was actually designed long ago by Peter S. Eagleson in his seminal 1972 paper laying the foundation of Derived Flood Frequency Analysis (DFFA), in which flood probability distributions are derived from the distributions of climatic and catchment variables. Basically, a hydrological model driven by climate variables delivers the simulated flood series for statistical analysis.
So, what is the recipe?
Bringing ingredients into a mixing pot
In DFFA, we drive a hydrological model for a catchment of interest with sufficiently long weather series to obtain an empirical distribution of flood flows, as was for instance shown by Sarka Blazkova and Keith Beven for present climate conditions.
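The DFFA chain can be illustrated with a deliberately minimal sketch: synthetic daily rainfall drives a toy single-bucket hydrological model, and the annual maxima of the simulated runoff yield an empirical flood frequency curve. All model choices and parameters below (wet-day probability, gamma intensities, linear-reservoir constant) are made up for demonstration and are far simpler than the models used in actual studies.

```python
import numpy as np

rng = np.random.default_rng(42)
n_years, days = 1000, 365

# 1. Synthetic daily rainfall (mm): crude wet/dry occurrence + gamma intensities
wet = rng.random((n_years, days)) < 0.3
rain = np.where(wet, rng.gamma(shape=0.7, scale=8.0, size=(n_years, days)), 0.0)

# 2. Minimal hydrological model: a single linear-reservoir bucket
def simulate(p, k=0.1, s0=50.0):
    """Daily update: storage gains rainfall, releases runoff Q = k * S."""
    s, q = s0, np.empty_like(p)
    for t, pt in enumerate(p):
        s += pt
        q[t] = k * s
        s -= q[t]
    return q

annual_max = np.array([simulate(rain[y]).max() for y in range(n_years)])

# 3. Empirical flood frequency from Weibull plotting positions
sorted_q = np.sort(annual_max)[::-1]                      # descending
return_period = (n_years + 1) / np.arange(1, n_years + 1)
q100 = np.interp(100, return_period[::-1], sorted_q[::-1])
print(f"empirical ~100-year flood: {q100:.1f} mm/day")
```

The same three steps – weather input, continuous simulation, statistics on the simulated flood series – carry over unchanged when the toy components are replaced by a real weather generator and a calibrated catchment model.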
The central idea for the cooking recipe of future flood probabilities is to inform a stochastic weather generator about the future climate state as simulated by deterministic climate models. In the work by Viet Dung Nguyen and colleagues, we developed such a climate-informed weather generator by conditioning the precipitation of every single day on large-scale circulation patterns and on regional average daily temperature. This reflects the dynamic and thermodynamic states of the atmosphere and largely contains the climate change signal.
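Conceptually, such conditioning can be sketched as sampling each day's precipitation from a distribution whose parameters depend on a circulation-pattern class and on the temperature anomaly. The sketch below is not the Nguyen et al. model – the pattern classes, probabilities, and the roughly 7 %-per-kelvin intensity scaling (Clausius–Clapeyron-like) are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative circulation-pattern classes with made-up parameters
patterns   = ["zonal", "meridional", "blocked"]
p_wet      = {"zonal": 0.5, "meridional": 0.3, "blocked": 0.1}   # wet-day probability
base_scale = {"zonal": 6.0, "meridional": 9.0, "blocked": 3.0}   # gamma scale (mm)

def sample_day(pattern, temp_anomaly):
    """One day's precipitation given the circulation pattern and a regional
    temperature anomaly; intensities scale ~7 %/K when wet."""
    if rng.random() >= p_wet[pattern]:
        return 0.0
    scale = base_scale[pattern] * 1.07 ** temp_anomaly
    return rng.gamma(shape=0.8, scale=scale)

# Generate one synthetic year under a +2 K warmer climate state
year = [sample_day(rng.choice(patterns), temp_anomaly=2.0) for _ in range(365)]
print(f"annual total: {sum(year):.0f} mm, wet days: {sum(d > 0 for d in year)}")
```

Feeding the generator sequences of circulation patterns and temperatures taken from climate model scenarios is what transfers the climate change signal into the synthetic weather.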
Long, synthetically generated weather fields representing a future climate state drive a time-continuous hydrological model to derive empirical flood distributions.
The beauty of the recipe
The beauty of this recipe has several facets.
First, the weather generator is informed by climate model variables, such as atmospheric pressure and surface temperature, that are simulated with high reliability. Both also carry a large share of the information on climate change signals relevant for flood generation.
Second, the weather generator not only produces very long weather series, which are hardly ever available from physically based climate model simulations; it can also directly bridge the scale between global climate models and local weather, acting as a downscaler. This shortcut makes it computationally very attractive in comparison to regional climate simulations.
Finally, time-continuous hydrologic simulations in the order of several thousand years of daily time-steps, driven by the weather generator, implicitly integrate temporal, spatial, and causal information expansion into the derived flood frequency. The temporal expansion is straightforward and results from long-term simulation. The spatial expansion emerges from the spatial dependence structure of weather variables, i.e., the weather generator learns to produce heavy rainfall at locations close to observed heavy storms. Flood types emerge in the hydrological simulations with their respective frequencies through the combination of atmospheric drivers and catchment state evolution.
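The gain from temporal information expansion can be made concrete with a small numerical experiment: estimating the same return level from many short (30-year) records versus many very long synthetic series drawn from an identical "true" flood climate, here taken to be a Gumbel distribution with illustrative parameters. The spread of the estimates shrinks dramatically with record length.

```python
import numpy as np

rng = np.random.default_rng(7)

def empirical_quantile(maxima, T):
    """Empirical T-year return level via Weibull plotting positions."""
    x = np.sort(maxima)
    n = len(x)
    p_exceed = 1.0 - np.arange(1, n + 1) / (n + 1)   # exceedance probability
    return np.interp(1.0 / T, p_exceed[::-1], x[::-1])

# Hypothetical "true" flood climate: Gumbel annual maxima (illustrative parameters)
parent = lambda n: rng.gumbel(loc=100.0, scale=30.0, size=n)

# Estimate the 20-year flood 500 times from short and from long samples
short_est = [empirical_quantile(parent(30), 20) for _ in range(500)]
long_est  = [empirical_quantile(parent(10_000), 20) for _ in range(500)]
print(f"20-yr flood from 30-yr records:     spread = {np.std(short_est):.1f}")
print(f"20-yr flood from 10,000-yr series:  spread = {np.std(long_est):.1f}")
```

In the recipe, the long series additionally embed the spatial dependence and flood-type mixtures discussed above, so the expansion is richer than this purely temporal toy suggests.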
I believe that with this recipe, we have found an elegant way to estimate future flood probabilities by leveraging the advances of climate science, statistical hydrology, and hydrological modelling, following the guiding light of flood frequency hydrology. Yet, the robustness of the approach needs further thorough evaluation.