
The fantastic world of OBIA!

For today's blog, we have interviewed Clemens Eisank about OBIA and its applications in natural hazards.

Dr. Clemens Eisank is a Remote Sensing Specialist & Project Manager at the GRID-IT company in Innsbruck (Austria). He obtained his Ph.D. from the Department of Geoinformatics – Z_GIS at Salzburg University in 2013. In his Ph.D. research, he proposed a workflow for automated geomorphological mapping with object-based image analysis (OBIA) methods. His research interests include remote-sensing-based mapping and monitoring of natural hazards and the development and automation of the related information-extraction workflows and tools. During his career, he has worked on several research projects on natural-hazard-related topics.

 

1) Hi Clemens, can you tell us what OBIA is in simple words?

OBIA is short for Object-Based Image Analysis. OBIA is a powerful framework for the analysis and classification of gridded data, especially images. A typical OBIA workflow includes two steps:

  1. Segmentation to generate so-called “objects”. Objects are created by merging adjacent grid cells (pixels) of the input grid layer(s) based on specific criteria. For merging pixels into objects, many segmentation algorithms evaluate the similarity of pixel values against a user-defined threshold (fig. b).
  2. Classification of objects. Objects are described by a plethora of attributes, including spectral (e.g. mean pixel brightness), geometrical (e.g. maximum slope) and spatial properties (e.g. relative border to class X). Based on statistical learning or knowledge models, the best set of attributes is identified for each target class and used for the object-based classification of the input grid layers (fig. c).

 

The image in fig. a is segmented into different objects (fig. b), which are classified as one class, i.e. “moraine deposits” (fig. c). Image credit: Gabriele Amato & GRID-IT Company.

In general, the segmentation and classification steps are applied in a cyclic manner to obtain an accurate classification result. Compared to pixel-based classification, OBIA results are more realistic, more accurate and visually more appealing, since the “salt-and-pepper” effect, which is typical in pixel-based classifications, is avoided. Moreover, classification results are typically in vector format allowing for straightforward integration with other GIS layers.
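To make the two-step workflow more concrete, here is a minimal Python sketch using the open-source scikit-image library (an illustration only, not the software used by the interviewee; the segment count and the brightness/shape thresholds are invented): it segments a sample image into objects, computes per-object attributes, and classifies the objects with a simple rule.

```python
import numpy as np
from skimage import color, data, measure, segmentation

rgb = data.astronaut()                    # built-in sample RGB image
gray = color.rgb2gray(rgb)                # single band for attribute statistics

# Step 1: segmentation -- merge similar adjacent pixels into objects.
labels = segmentation.slic(rgb, n_segments=250, compactness=10, start_label=1)

# Step 2: per-object attributes (spectral and geometrical properties).
objects = measure.regionprops(labels, intensity_image=gray)

# Step 3: simple rule-based classification, e.g. bright, compact objects.
bright_ids = [obj.label for obj in objects
              if obj.mean_intensity > 0.6 and obj.eccentricity < 0.9]

class_mask = np.isin(labels, bright_ids)  # boolean map of the target class
print(f"{len(bright_ids)} of {labels.max()} objects classified as 'bright'")
```

In a real OBIA system the rule in step 3 would be replaced by a knowledge model or a trained classifier operating on many more object attributes.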

One reason for the success of OBIA may be the fact that it mimics human perception: humans perceive the world as an assemblage of discrete entities such as trees, mountains, buildings; they name these entities and distinguish them by properties such as colour, shape and spatial setting. In the OBIA world, objects are the digital representation of the perceived real-world entities, and the digital properties can be directly associated with the properties that humans use to distinguish different categories of real-world entities.

2) Why do you think OBIA can contribute to the field of risk assessment?

For a proper risk assessment, a comprehensive geodatabase with all kinds of layers, ranging from terrain data and geology to images, has to be established for the region of interest. OBIA is a great framework for integrating all these data, which come at different spatial resolutions, and for extracting the relevant risk information (e.g. risk zones), especially via the use of “objects”. By analysing multi-temporal data, natural-hazard objects such as landslides can be identified as polygons, and the evolution of natural-hazard objects can be monitored, including changes in attributes such as shape, which may help to improve the understanding of the relevant surface processes. In other application scenarios, natural-hazard polygons extracted by OBIA are used (1) as constraining areas for the optimization of susceptibility models or (2) as a basis for mapping risk “hot spots”.

3) For which kinds of natural hazards do you think OBIA provides the best performance?

OBIA performs best in change detection scenarios: in other words, when two or more images of the same area are compared. One prominent application example is the automated mapping of new landslides to create event-based landslide inventories: a post-event image is segmented (ideally in combination with terrain layers) and bare-soil objects are automatically extracted using spectral, terrain and other thresholds. The extracted bare-soil objects are overlaid on a pre-event image and the pre-event object properties are recorded. If a significant difference between pre- and post-event object properties (e.g. NDVI) is observed, these objects are regarded as new landslide areas and added as polygons to the existing landslide inventory. By repeating this extraction, a complete and detailed landslide inventory can be established at regional and national scales. Such an inventory will be more objective than inventories created, as is usual, by multiple people with different backgrounds and experience. A simple sketch of the per-object comparison is shown below.
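A minimal sketch of this per-object comparison, assuming pre- and post-event red/NIR bands co-registered on the same grid and an already-computed object label map (the 0.3 NDVI-drop threshold and all names are invented for illustration):

```python
import numpy as np

def ndvi(red, nir):
    """Normalised difference vegetation index from red/NIR band arrays."""
    return (nir - red) / (nir + red + 1e-9)

def new_landslide_candidates(labels, pre_red, pre_nir, post_red, post_nir,
                             min_ndvi_drop=0.3):
    """Return the IDs of objects whose mean NDVI dropped by more than
    `min_ndvi_drop` between the pre- and post-event images. All arrays
    share the same grid; `labels` holds one integer ID per object."""
    pre, post = ndvi(pre_red, pre_nir), ndvi(post_red, post_nir)
    candidates = []
    for obj_id in np.unique(labels):
        mask = labels == obj_id
        if pre[mask].mean() - post[mask].mean() > min_ndvi_drop:
            candidates.append(obj_id)  # vegetation loss -> possible landslide
    return candidates
```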

4) In this regard, could you tell us something about the research projects you are involved in?

We have just finished a project on improving object-based landslide mapping (Land@Slide). The focus was on increasing the robustness and flexibility of object-based landslide mapping algorithms, which should deliver high-quality landslide mapping results for different kinds of optical satellite data with varying spatial and spectral resolutions. In close cooperation with potential end-users, we gathered requirements and identified application scenarios ranging from landslide rapid mapping to inventory mapping. The improved algorithms were implemented in a prototypical web processing service, which allows users to map landslides online by themselves.

Currently, I am managing the MorphoSAT project, which can be seen as a follow-up to my Ph.D. research. The idea is to bring digital geomorphological mapping to the next level. The motivation for this research is that digital geomorphological information is required for many applications (also in natural hazards), but is rarely available for most regions of the world. We are positive that we can provide improved OBIA-based algorithms for the automated extraction of geomorphological features (e.g. landforms, process domains) from digital terrain layers.
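As a rough flavour of what extracting geomorphological features from digital terrain layers can look like, the sketch below classifies a DEM grid into crude slope-based landform domains; the actual MorphoSAT algorithms are object-based and far more sophisticated, and all thresholds here are invented:

```python
import numpy as np

def classify_landforms(dem, cell_size=10.0, flat_deg=2.0, steep_deg=30.0):
    """Label each DEM cell as 0 = flat, 1 = slope, 2 = steep terrain,
    based on the local slope angle (thresholds are purely illustrative)."""
    dz_dy, dz_dx = np.gradient(dem, cell_size)       # elevation derivatives
    slope_deg = np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))
    classes = np.ones_like(dem, dtype=int)           # default: slope
    classes[slope_deg < flat_deg] = 0                # flat areas / plateaus
    classes[slope_deg > steep_deg] = 2               # steep slopes / scarps
    return classes
```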

5) What are the future perspectives of OBIA?

I think the impact of OBIA will steadily increase in the future. OBIA has proven to be a powerful framework for integrative 2D data analysis. However, additional functions and methods are needed to analyse data in 3D and 4D, especially multi-temporal LiDAR point clouds. OBIA will also be of great value for tracing objects in video streams, such as those already recorded by mini satellites. We will hopefully also see new OBIA software tools that can reach the quality of commercial products. Especially in the open-source world, OBIA tools are still rare. Free tools are needed to widen the user community and to strengthen the position of OBIA as an innovative geodata analysis framework, also in the field of natural hazards.

“Twenty or more Leagues Under the Sea”: A journey to understand submarine canyons


As NhET, we have the pleasure of having Mauro Agate as our guest and interviewee today. We discuss submarine canyons and related geohazards. Further details are available at http://www.sciencedirect.com/science/article/pii/S0967064513002488 for a scientifically oriented audience, or at https://www.youtube.com/channel/UCwWErQNoZYpJhhkxPa82x5g for a broader audience.

 

Dr. Mauro Agate is a marine geologist and stratigrapher working at the Earth and Marine Sciences Department of Palermo University (Italy), where he teaches Marine Geology and Sedimentology. He has taken part in more than 25 oceanographic cruises in the central Mediterranean Sea. He focuses on: i) the effects of sea-level change on the sedimentary dynamics of shelf-to-slope systems; ii) geomorphological and geological mapping of the seafloor; iii) the origin and evolution of submarine canyon systems; iv) the tectono-sedimentary evolution of continental margins.

He has contributed to several research projects: VECTOR (Vulnerability of Italian marine coasts and ecosystems to climatic change); CARG (Geological mapping); MAGIC (Marine geohazards along the Italian coasts); EMODNet (European marine observation and data network).


Interview:

1) What is the main topic of the research we are going to discuss today?

I will focus on submarine canyons, as they are among the most widespread morphologies shaping the ocean floor. These erosional features may extend seawards across continental margins for hundreds of kilometres (> 400 km) and occasionally have canyon-wall heights of up to 5 km from canyon floor to canyon rim. Submarine canyons play a key role in oceanographic and sedimentary processes, acting as conduits for the transfer of sediment from the mainland to the deep sea, controlling meso-scale oceanographic circulation, and controlling the functioning of specific benthic habitats.

2) How do submarine canyons originate?

For years, the origin of submarine canyons has been the subject of debate among investigators, and various ideas have been proposed. As already suggested by Shepard in a famous paper dated 1981, multiple causes may contribute to the origin of a canyon. Canyons are fundamentally erosive features, yet some of them show a very complex evolution characterized by alternating erosive and depositional stages.

It is important to distinguish two main types of submarine canyons, because their origin, evolution and the quality of the related ecosystems are very different: a) shelf-indenting, sediment-fed canyons and b) slope-confined, retrograding canyons. Marine geological and geophysical research has documented that slope-confined canyons can retrogressively develop up to the shelf edge, and recent studies, also based on numerical modelling, suggest that these canyons originally formed from downslope-eroding sediment flows. Some shelf-indenting canyons may cross the continental shelf in its entirety and link to present-day fluvial networks. Submarine canyons display many similarities with subaerial river systems, but also relevant differences.

3) Why is it important to understand submarine canyon origins and evolution?

The unravelling of submarine canyon dynamics has been driven by the need to plan safe routes along which to place cables and pipelines across the seafloor. Often, at the downslope end of a canyon, submarine sedimentary fans occur. These represent modern analogues for ancient deposits of economic significance (hydrocarbon source rocks and reservoirs). Moreover, canyon activity has oceanographic implications: the mixing of shallow water with upwelling deep water can enhance local primary productivity; consequently, commercially relevant fisheries are commonly located at the heads of submarine canyons.

Furthermore, among deep-ocean geomorphic features, the heads of some shelf-incising submarine canyons have been identified as supporting ecosystems (e.g. cold-water coral communities). These are especially vulnerable to human activities, mostly as a consequence of water acidification caused by anthropogenic climate change and of bottom-trawl fisheries. In particular, trawling could have an enormous impact on canyon dynamics by altering deep-sea sediment transport pathways and ecosystems.

Ultimately, a more complete understanding of the canyon activity may help us in preventing some natural and anthropogenic hazards.

4) What types of hazards are related to submarine canyons?

Two main types of hazards are associated with the presence of submarine canyons and their related processes: landslides and dispersion of pollutants.

Submarine slides can be generated by the failure of canyon walls and head scarps. Mass-wasting phenomena occurring inside a canyon are usually not very large in size; however, they can be dangerous for cables and pipelines. Moreover, in some canyons, the headward erosion driven by downslope-cutting sediment flows, and the consequent landward shift of the canyon head, may come to threaten harbour facilities or other human settlements located along the coast. In 1979, a landslide in the head of the Var Canyon (Ligurian Sea) involved a volume of about 9 million m³ of material and caused a tsunami wave of about three metres that damaged part of the works to extend the Nice airport and killed 10 people. Similarly, in southern Italy, along the Tyrrhenian coast of Calabria, a submarine landslide in the head of the Gioia Tauro canyon in 1977 mobilized about 5 million m³ of material, generating a tsunami and a turbidity current that caused serious damage to the port and broke submarine cables. Even away from settled regions, the retreat of canyon heads can cause extensive damage, such as the sudden disappearance of entire stretches of sandy coastline.

Even if the canyon walls and head are stable, the very morphology of a submarine canyon can represent a threat in the case of earthquakes or tsunamigenic landslides, because the bathymetric pattern of the canyon can amplify (or simply fail to mitigate) a possible tsunami wave. Such differing effects of seafloor bathymetry on tsunami characteristics have been documented; examples are the 1998 tsunami that affected Papua New Guinea and the Indian Ocean tsunami of 26 December 2004 along the Bangladesh coast.

As concerns the dispersion of pollutants, few studies have been carried out to date. However, there is growing evidence that sediment transport along canyons can contribute to the contamination of pelagic sediments by industrial waste and chemical pollutants coming from coastal areas, and to their subsequent accumulation in deep-sea fauna.

5) What are the most advanced methodologies for the investigation of submarine canyons and what further discoveries do you expect from the upcoming research?

During the past two decades, the widespread use of multibeam echo-sounder devices in underwater geological surveys has provided wonderful images of the seabed. This has allowed for the quantitative morphometric analysis of submerged geomorphological features, among them submarine canyons. We now know the shapes, morphologies and sizes of these fascinating features very well. Further advances in understanding the mechanisms of canyon functioning will probably only come from multidisciplinary research that integrates geophysical, sedimentological, oceanographic and biological analyses. A multidisciplinary approach, such as the one followed in the ISLAND Project (ExplorIng SiciLian CAnyoN Dynamics) recently promoted by the European programme EUROFLEETS (www.eurofleets.eu), is now essential not only to better understand the sedimentary dynamics and evolution of submarine canyons, but also to assess the impact of canyon activity in generating natural hazards and controlling the stability of benthic ecosystems.

 

Ethics and Geosciences: discovering the International Association for Promoting Geoethics


Geoscientists do not only have to deal with technical matters; they also have to think about the ethical implications of their discipline. To increase researchers' awareness of the ethical aspects of their activities, the International Association for Promoting Geoethics (IAPG) was created. To better understand what geoethics and the IAPG are, we interviewed Silvia Peppoloni, founding member and Secretary General of the association. She is a researcher at the Italian National Institute of Geophysics and Volcanology, and her activity covers the fields of geohazards and georisks. She is also an elected councillor of the IUGS – International Union of Geological Sciences (2018-2022), a member of the Executive Council of IAEG Italy – International Association for Engineering Geology and the Environment, a lecturer at international conferences, and an editor and author of books and articles. She was awarded prizes for science communication and nature literature in 2014, 2016 and 2017.

 

Can you clarify what geoethics is?

Geoethics is defined as the “Research and reflection on the values that underpin appropriate behaviours and practices, wherever human activities interact with the Earth system. Geoethics deals with the ethical, social and cultural implications of geoscience knowledge, education, research, practice, and communication, and with the social role and responsibility of geoscientists in conducting their activities”.

This definition includes aspects of general ethics, research integrity, professional ethics, and environmental ethics. It reminds geoscientists of the importance of individual ethical conduct, characterized by the awareness of also being a social actor, of possessing scientific knowledge that can be put at the service of society and employed for a more functional interaction between humans and the Earth system.


Our first Interview is ready!


Today we are happy to post our first interview and to thank our first interviewee, Paola Crippa, for her contribution. The topic is mortality from high concentrations of particulate matter generated by widespread wildfires. This topic is intended as a starting point for addressing a broader theme: dealing with the lack of data for research purposes in developing countries.

The discussion is inspired by one of Paola's most recent studies: “Population exposure to hazardous air quality due to the 2015 fires in Equatorial Asia” http://www.nature.com/articles/srep37074

Interview

1. Which problem did you address in your research?

Vegetation and peatland fires occur frequently across Equatorial Asia, as they are used to manage the land, clear vegetation, and prepare and maintain land for agriculture. Wildfires emit pollutants that can cause poor regional air quality and are extremely harmful to human health. As a result, each year thousands of premature deaths occur across Equatorial Asia. In fall 2015, these fires burned out of control in Indonesia as a result of the extremely dry landscape caused by strong El Niño conditions. In our study, we use a state-of-the-art air quality model (the Weather Research and Forecasting model with Chemistry, WRF-Chem) at high spatio-temporal resolution to quantify the impact of these fires on air quality and human health. We found that 69 million people were persistently exposed to unhealthy air quality conditions caused by fire emissions and that this pollution may have caused 11,880 (6,153–17,270) excess deaths. Our results emphasize the need for a coordinated effort between scientists and policymakers to assess the impact of land-use changes and human-driven deforestation on fire frequency, so as to possibly mitigate the impacts of these hazardous events on human lives.

2. Do mortality estimates from simulations actually agree with the corresponding real data?

We evaluated our model simulations against both ground- and satellite-based observations of aerosol properties, and we are confident that our simulated results provide a realistic representation of the 2015 wildfires and hence can be used to infer the impact on air quality and human health. We integrated our hourly maps of pollutant concentrations with population density data and estimated the number of people persistently exposed to unhealthy and hazardous air quality conditions during fall 2015, with respect to World Health Organization and Pollutant Standards Index guidelines. While these metrics gave us confidence in our assessment of population exposure, it was unfortunately not possible to validate our mortality estimates, since no local hospitalization data were available for the period of interest. A toy version of this exposure overlay is sketched below.
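A toy sketch of such an exposure overlay, assuming hourly concentration grids and a population grid on the same raster (the threshold value, the persistence criterion and all names are invented; the study's actual metrics follow WHO and PSI guidelines):

```python
import numpy as np

def persistently_exposed_population(pm25_hourly, population,
                                    threshold=150.0, min_fraction=0.5):
    """pm25_hourly: array of shape (hours, ny, nx) with concentrations;
    population: array of shape (ny, nx) with people per grid cell.
    Counts people living in cells that exceed `threshold` in at least
    `min_fraction` of the hours considered (both values are invented)."""
    exceedance_fraction = (pm25_hourly > threshold).mean(axis=0)
    return population[exceedance_fraction >= min_fraction].sum()
```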

3. No real data were available? This is certainly a strong limitation for the research community, but also for those who deal with risk management. What is your position in this regard?

In order to estimate the number of premature deaths that occurred as a result of exposure to degraded air quality, epidemiological evidence linking pollutant concentrations to hospitalization and mortality data is needed. Unfortunately, in Equatorial Asia, as in most developing countries, such cohort studies have never been performed, or at least the data are not available to scientists. In our work, we used exposure-response functions developed from studies conducted in Europe and the United States, where pollutant concentrations are much lower than those registered during the events we studied. Therefore, our mortality estimates are likely conservative. This is indeed a big limitation not only for scientists but also for policymakers trying to reduce the negative impacts of natural hazards, since no robust evidence of the magnitude of those events is available. If local governments were able to collect, organize and release these data, scientists could better serve the community by providing better mitigation strategies.
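For readers unfamiliar with exposure-response functions, the sketch below shows the log-linear health-impact form commonly used in such assessments (a generic illustration, not the study's actual model; the coefficient is an indicative literature-style value):

```python
import numpy as np

def excess_deaths(population, baseline_rate, delta_pm25, beta=0.0058):
    """Log-linear exposure-response sketch:
    excess = population * baseline_rate * (1 - exp(-beta * delta_pm25)),
    where delta_pm25 is the PM2.5 increase (ug/m3) over a counterfactual
    and beta ~ ln(1.06)/10, i.e. a relative risk of roughly 1.06 per
    10 ug/m3 (an illustrative value, not the study's own coefficient)."""
    attributable_fraction = 1.0 - np.exp(-beta * np.asarray(delta_pm25))
    return population * baseline_rate * attributable_fraction
```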

4. Do other countries invest more in data collection allowing for a better coupling between simulations and ground-truth data?

In Europe and the United States, epidemiological studies linking exposure to mortality and, most importantly, hospitalization data are easier to access. While these are still not as easily accessible as most publicly funded satellite and climate-model repositories, we hope that Western governments will implement a standardized national or international database that can be used to produce considerably more reliable exposure maps. This would allow a better assessment of mortality in polluted areas such as London; but the level of exposure in less developed countries is of a different order of magnitude, with millions of human lives at risk, including children and elderly citizens. Since any extrapolation of Western data to these areas is problematic, the international community must invest in the development of local studies and data collection.

5. Do you think that simulations like yours can be useful not only in a post-disaster phase but also as a risk prevention tool?

One of the great advantages of using numerical models such as WRF-Chem is that they can also be run in forecasting mode, meaning that they can predict where and how fast pollution will be transported from emission sources and consequently provide information for reducing population exposure. They can also be used to make projections as a function of emission scenarios. This is particularly important in regions subject to rapid land-use change and human-driven deforestation, such as Equatorial Asia or South America. An example of the successful integration of numerical model forecasts with mitigation strategies can be found in Santiago de Chile, where the government declares alert days based on numerical weather model forecasts of unhealthy pollution levels. This is the result of a close and constructive collaboration between scientists and policymakers.