NH
Natural Hazards

Gabriele Amato

My name is Gabriele Amato. I am from Rome (Italy), where I studied geology and where I have just finished my PhD at Roma Tre University. My research is about landslide monitoring through terrestrial and satellite techniques in different geological and geomorphological contexts. The aim of my PhD was to relate landslide movement to triggering factors (rainfall, earthquakes, temperature variations). I am passionate about natural hazards and the methods used to manage them, especially those based on remote sensing. In my role as author of the NH blog, I look forward to sharing courses and initiatives organised around the world in the field of natural hazards with the scientific community, and especially with young researchers, since I think these represent great opportunities for networking.

Earthquake-induced landslides and the ‘strange’ case of the Hokkaido earthquake

The population of many countries in the world is exposed to earthquakes, one of the most destructive natural hazards. Sometimes the phenomena they trigger can be even worse than the earthquake itself. In this context, earthquake-induced landslides often contribute to loss of life and economic damage. To better understand these induced phenomena, up-to-date catalogues of their types and locations of occurrence are fundamental. In his work, Dr David K. Keefer performed several interesting statistical analyses, which highlighted how magnitude and distance from the epicentre play a key role in triggering earthquake-induced landslides (Figs. 1 and 2). In particular, he showed that the number of landslides induced by an earthquake decreases with increasing distance from the epicentre (Fig. 1) and increases with larger magnitude events (Fig. 2). [Read More]

How to study Mega-earthquakes? By generating them!

Dr. Francesca Funiciello

Francesca Funiciello is an Associate Professor at Roma Tre University (Rome, Italy). Her research interests include geodynamics, seismotectonics, the rheology of analogue materials and science communication. She leads an active and young research group composed of Fabio Corbi, Silvia Brizzi and Elenora van Rijsingen, and collaborates with many other young and experienced researchers in Europe. The main activities of Francesca, Fabio, Silvia and Elenora involve analogue and numerical modelling of subduction zones, geophysical data analysis and geostatistics in the field of mega-earthquakes.

 

 

  1. Hi guys, can you tell us a bit more about “mega-earthquakes” and why it is so important to study them?

The interface between the subducting and overriding plates (Fig. 1), the so-called megathrust, hosts the largest earthquakes on our planet. They are generally called mega-earthquakes, with the prefix ‘mega’ highlighting both the fault originating them and their size. A relatively recent example of a mega-earthquake is the Sumatra-Andaman event that occurred in 2004. The fault that ruptured was ca. 1000 km long, and the event had a magnitude in the range Mw 9.1–9.3 (Lay et al., 2005; Stein & Okal, 2005; Subarya et al., 2006; Fujii & Satake, 2007), where Mw denotes moment magnitude, a logarithmic measure of earthquake size. There had not been an event so large since the 1964 Alaska earthquake. The seismic moment released during the 2004 Sumatra-Andaman event was in the range 5–10×10²² N m, equivalent to the sum of the moments of all earthquakes worldwide in the preceding decade (Lay et al., 2005).
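Because Mw is logarithmic in seismic moment, the quoted moment range can be checked against the Mw 9.1–9.3 estimate using the standard Hanks & Kanamori (1979) relation, Mw = (2/3)(log10 M0 − 9.1). A minimal Python sketch (an illustration for the reader, not taken from the cited studies):

```python
import math

def moment_magnitude(m0):
    """Moment magnitude Mw from seismic moment m0 (in N m),
    using the Hanks & Kanamori (1979) relation."""
    return (2.0 / 3.0) * (math.log10(m0) - 9.1)

# Seismic moment range reported for the 2004 Sumatra-Andaman event
for m0 in (5e22, 1e23):
    print(f"M0 = {m0:.0e} N m  ->  Mw = {moment_magnitude(m0):.2f}")
# -> Mw 9.07 and 9.27, consistent with the quoted Mw 9.1-9.3 range
```

Note that the two-thirds factor means a tenfold increase in moment raises Mw by only about 0.67 units.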

 

Figure 1 – Schematic section through a subduction zone. The interface between the overriding and subducting plate is the so-called megathrust. The red star highlights the hypocenter of a megathrust earthquake (courtesy of S. Brizzi).

 

Subduction mega-earthquakes (together with the tsunamis they may generate) are among the largest hazards for human life, considering that millions of people live in proximity of subduction zones (e.g., the NE-Japanese and South American subduction zones), which are located at the edges of the Pacific Ocean.

 

  2. Which approach does the scientific community adopt to study mega-earthquakes?

[Read More]

The fantastic world of OBIA!

For today's blog, we have interviewed Clemens Eisank about OBIA and its applications in natural hazards.

Dr. Clemens Eisank is Remote Sensing Specialist & Project Manager at GRID-IT Company in Innsbruck (Austria). He obtained his Ph.D. from the Department of Geoinformatics – Z_GIS at Salzburg University in 2013. In his Ph.D. research, he proposed a workflow for automated geomorphological mapping with object-based image analysis (OBIA) methods. His research interests include remote sensing based mapping/monitoring of natural hazards and the development/automation of the related information extraction workflows and tools. During his career, he has worked on several research projects on natural hazards related topics.

 

1) Hi Clemens, can you tell us what OBIA is in simple words?

OBIA is short for Object-Based Image Analysis. OBIA is a powerful framework for the analysis and classification of gridded data, especially images. A typical OBIA workflow includes two steps:

  1. Segmentation to generate so-called “objects”. Objects are created by merging adjacent grid cells (pixels) of the input grid layer(s) based on specific criteria. For merging pixels into objects, many segmentation algorithms evaluate the similarity of pixel values against a user-defined threshold (fig. b).
  2. Classification of objects. Objects are described by a plethora of attributes, including spectral (e.g. mean pixel brightness), geometrical (e.g. maximum slope) and spatial properties (e.g. relative border to class X). Based on statistical learning or knowledge models, the best set of attributes is identified for each target class and used for object-based classification of the input grid layers (fig. c).
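The two steps above can be sketched in a few lines of Python. The region-growing segmentation and mean-brightness classifier below are deliberately simplistic illustrations of the idea, not the algorithms of any particular OBIA software; the class names, toy image and thresholds are invented for the example:

```python
import numpy as np

def region_growing(image, threshold):
    """Step 1, segmentation: merge 4-connected pixels whose values differ
    from the region seed by at most `threshold` into one object.
    Returns an integer label array (one label per object)."""
    labels = np.zeros(image.shape, dtype=int)
    rows, cols = image.shape
    next_label = 0
    for r0 in range(rows):
        for c0 in range(cols):
            if labels[r0, c0]:
                continue  # pixel already belongs to an object
            next_label += 1
            seed = image[r0, c0]
            labels[r0, c0] = next_label
            stack = [(r0, c0)]
            while stack:
                r, c = stack.pop()
                for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                    if (0 <= nr < rows and 0 <= nc < cols
                            and labels[nr, nc] == 0
                            and abs(image[nr, nc] - seed) <= threshold):
                        labels[nr, nc] = next_label
                        stack.append((nr, nc))
    return labels

def classify_objects(image, labels, cutoff):
    """Step 2, classification: assign each object a class based on a
    spectral attribute (mean brightness against a user-defined cutoff)."""
    return {int(obj): ("bare soil" if image[labels == obj].mean() > cutoff
                       else "vegetation")
            for obj in np.unique(labels)}

img = np.array([[1., 1., 9.],
                [1., 1., 9.],
                [1., 2., 9.]])
labels = region_growing(img, threshold=2.0)
print(classify_objects(img, labels, cutoff=5.0))
# -> {1: 'vegetation', 2: 'bare soil'}
```

The similar dark pixels on the left merge into one object and the bright column into another, which is exactly the pixel-to-object aggregation the interview describes.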

 

The image in fig. a is segmented into different objects (fig. b), which are classified as a single class, i.e. “moraine deposits” (fig. c). Image credit: Gabriele Amato & GRID-IT Company.

In general, the segmentation and classification steps are applied in a cyclic manner to obtain an accurate classification result. Compared to pixel-based classification, OBIA results are more realistic, more accurate and visually more appealing, since the “salt-and-pepper” effect, which is typical in pixel-based classifications, is avoided. Moreover, classification results are typically in vector format allowing for straightforward integration with other GIS layers.

One reason for the success of OBIA may be the fact that it mimics human perception: humans perceive the world as an assemblage of discrete entities such as trees, mountains, buildings; they name these entities and distinguish them by properties such as colour, shape and spatial setting. In the OBIA world, objects are the digital representation of the perceived real-world entities, and the digital properties can be directly associated with the properties that humans use to distinguish different categories of real-world entities.

2) Why do you think OBIA can contribute to the field of risk assessment?

For a proper risk assessment, a comprehensive geodatabase with all kinds of layers, ranging from terrain data and geology to images, has to be established for the region of interest. OBIA is a great framework for integrating all these data, which come at different spatial resolutions, and for extracting the relevant risk information (e.g. risk zones), especially via the use of “objects”. By analysing multi-temporal data, natural-hazard objects such as landslides can be identified as polygons, or the evolution of natural-hazard objects can be monitored, including changes in attributes such as shape, which may help to improve the understanding of the relevant surface processes. In other application scenarios, natural-hazard polygons extracted by OBIA are used (1) as constraining areas for the optimisation of susceptibility models or (2) as a basis for the mapping of risk “hot spots”.

3) For which kind of natural hazard do you think OBIA provides the best performance?

OBIA performs best in change detection scenarios: in other words, when two or more images of the same area are compared. One prominent application example is the automated mapping of new landslides to create event-based landslide inventories: a post-event image is segmented (ideally in combination with terrain layers) and bare-soil objects are automatically extracted using spectral, terrain and other thresholds. The extracted bare-soil objects are overlaid on a pre-event image and the pre-event object properties are recorded. If a significant difference between pre- and post-event object properties (e.g. NDVI) is observed, these objects are regarded as new landslide areas and added as polygons to the existing landslide inventory. By repeating this extraction, a complete and detailed landslide inventory can be established at regional and national scales. Such an inventory will be more objective than inventories created, as is usual, by multiple people with different backgrounds and experience.
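The NDVI comparison described above can be illustrated with a short Python sketch. The band arrays, object labels and the 0.3 drop threshold are invented for the example; a real workflow would use the segmented objects and calibrated pre- and post-event satellite bands:

```python
import numpy as np

def ndvi(nir, red):
    """Normalised Difference Vegetation Index, computed per pixel.
    A small epsilon avoids division by zero over dark pixels."""
    return (nir - red) / (nir + red + 1e-9)

def new_landslide_objects(labels, pre_nir, pre_red, post_nir, post_red,
                          drop=0.3):
    """Flag objects whose mean NDVI dropped by more than `drop` between
    the pre- and post-event images (vegetation replaced by bare soil is
    a candidate new landslide)."""
    pre, post = ndvi(pre_nir, pre_red), ndvi(post_nir, post_red)
    flagged = []
    for obj in np.unique(labels):
        mask = labels == obj
        if pre[mask].mean() - post[mask].mean() > drop:
            flagged.append(int(obj))
    return flagged

# Two toy objects: object 1 loses its vegetation, object 2 is unchanged
labels   = np.array([[1, 1], [2, 2]])
pre_nir  = np.array([[0.8, 0.8], [0.8, 0.8]])
pre_red  = np.array([[0.1, 0.1], [0.1, 0.1]])
post_nir = np.array([[0.2, 0.2], [0.8, 0.8]])
post_red = np.array([[0.5, 0.5], [0.1, 0.1]])
print(new_landslide_objects(labels, pre_nir, pre_red, post_nir, post_red))
# -> [1]
```

Working per object rather than per pixel is what suppresses the "salt-and-pepper" noise a pixel-wise NDVI difference would produce.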

4) In this regard, could you tell us something about the research projects you are involved in?

We have just finished a project on improving object-based landslide mapping (Land@Slide). The focus was on increasing the robustness and flexibility of object-based landslide mapping algorithms, which should deliver high-quality landslide mapping results for different kinds of optical satellite data with varying spatial and spectral resolution. In close cooperation with potential end-users, we gathered requirements and identified application scenarios ranging from rapid landslide mapping to inventory mapping. The improved algorithms were implemented in a prototype web processing service, which allows users to map landslides online by themselves.

Currently, I am managing the MorphoSAT project, which can be seen as a follow-up to my PhD research. The idea is to bring digital geomorphological mapping to the next level. The motivation for this research is that digital geomorphological information is required for many applications (also in natural hazards), but is rarely available for most regions of the world. We are confident that we can provide improved OBIA-based algorithms for the automated extraction of geomorphological features (e.g. landforms, process domains) from digital terrain layers.

5) What are the future perspectives for OBIA?

I think the impact of OBIA will steadily increase in the future. OBIA has proven to be a powerful framework for integrative 2D data analysis. However, additional functions and methods are needed to analyse data in 3D and 4D, especially for multi-temporal LiDAR point clouds. OBIA will also be of great value for tracing objects in video streams, such as those already recorded by mini-satellites. We will hopefully also see new OBIA software tools that can reach the quality of commercial products. Especially in the open-source world, OBIA tools are still rare. Free tools are needed to widen the user community and to strengthen the position of OBIA as an innovative geodata analysis framework, also in the field of natural hazards.