European Historical Flood Disaster Database

This service will provide situational information for crisis managers by aggregating flooding-related open data and summarising the specialised content on a custom map.

The following presentation on this service was given at the MELODIES: Exploiting Open Data conference in October 2016. Slides are available here.

Service Q&A

Who are the target users of this service?

The general public, local administrators dealing with emergencies, urban planners, policy-support bodies, and the insurance industry covering flood-related losses.

How will these users access this service?

The service will be available online.

What products does this service provide?

The European Historical Flood Disaster Database will bring together consistent, accurate geospatial information on flood events affecting populations across the continent. Users will be able to find imagery, maps, and other information relevant to specific flood disaster events from the last 20 years. These products give users important records of flood damage, flood extent, and affected regions.
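As a rough illustration, a harmonised flood-event record combining damage, extent, and affected-region information might look like the following sketch. The field names, identifiers, and values here are assumptions for illustration only, not the service's actual data model:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class FloodEventRecord:
    """One harmonised flood-event entry (illustrative schema, not the real model)."""
    event_id: str                  # hypothetical internal identifier
    country: str                   # ISO 3166-1 alpha-2 code, e.g. "DE"
    start: date                    # first day of flooding
    end: date                      # last day of flooding
    affected_regions: list = field(default_factory=list)   # place names from disaster databases
    flood_extent_km2: float = 0.0  # area derived from satellite imagery
    damage_eur: float = 0.0        # reported losses, where available

# A record with made-up placeholder values
record = FloodEventRecord("ehfdd-0001", "DE", date(2013, 5, 30), date(2013, 6, 15),
                          ["Region A", "Region B"], 1250.0, 1.0e6)
```

Such a common schema is what lets imagery-derived extents and database-derived damage figures sit side by side in one queryable record.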

How will these products benefit users?

In Europe, Directive 2007/60/EC requires Member States to assess all watercourses and coastlines for flood risk, and to map the flood extent as well as the assets and people at risk in these areas. Authorities are also required to take adequate and coordinated measures to reduce any flood risk.

Providing users with alternative, consistent, and validated information allows them to better understand flood-related risks and identify the hazards. Sharing this information publicly also improves awareness.

Which Open Data sources drive this service?

  1. Sentinel-1 satellite imagery
  2. Copernicus Emergency Management Service
  3. European Flood Awareness System (EFAS)
  4. Twitter
  5. Dartmouth Flood Observatory
  6. EM-DAT
  7. The Digital Elevation Model over Europe

What processing is performed on this data?

Due to the disparity in the quality of data available to us, processing workflows are not yet standardised, and quality analysis and control have taken centre stage in the development of this service. We have developed standards that will be integrated into the processing to measure quality quantitatively, especially where location accuracy is concerned.
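One simple way to quantify location accuracy is to measure the great-circle distance between a record's reported coordinates and a trusted reference (e.g. a gazetteer entry for the named town). The sketch below shows this idea with a haversine distance and a hypothetical acceptance threshold; the function names and the 10 km cut-off are assumptions, not the service's actual quality standard:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two points (spherical Earth, R = 6371 km)."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

def location_passes_qc(reported, reference, max_error_km=10.0):
    """Flag a record whose reported coordinates drift too far from a reference location."""
    return haversine_km(*reported, *reference) <= max_error_km

# One degree of longitude at the equator is roughly 111 km
print(round(haversine_km(0.0, 0.0, 0.0, 1.0), 1))  # → 111.2
```

A metric like this gives a reproducible, quantitative basis for comparing location quality across otherwise very different data sources.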

How does this service use Linked Open Data?

Initially, the service will produce Linked Open Data. As the service matures and Linked Open Data sources relevant to the European Historical Flood Disaster Database are discovered, we expect to integrate those sources into the service. Additionally, the quality of the Linked Open Data will need to be ascertained before it can be used within the service.

It is expected that Linked Open Data from national agencies and regional governments will add value to the service, particularly by allowing users to make geospatial queries. It is impossible to know a priori all the queries users may make using information from our service, so the capability to explore flood-related scenarios and linkages to other Linked Open Data sets will benefit users.
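A minimal sketch of the kind of geospatial query meant here is a bounding-box filter over event locations. The event names and coordinates below are made up for illustration; a production service would instead query Linked Open Data endpoints (typically via SPARQL with spatial extensions), which is beyond this sketch:

```python
# Hypothetical flood events as (name, latitude, longitude) tuples
events = [
    ("Event A", 48.57, 13.46),
    ("Event B", 52.52, 13.40),
    ("Event C", 45.07, 7.69),
]

def within_bbox(events, min_lat, max_lat, min_lon, max_lon):
    """Return the names of events whose point location falls inside a lat/lon box."""
    return [name for name, lat, lon in events
            if min_lat <= lat <= max_lat and min_lon <= lon <= max_lon]

# Which example events fall inside a box over southern Central Europe?
print(within_bbox(events, 45.0, 50.0, 5.0, 15.0))  # → ['Event A', 'Event C']
```

Because the queries users will pose cannot all be anticipated, exposing the data in linkable, spatially queryable form matters more than any fixed set of pre-built views.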

How Open Data has improved this service

It would be difficult to imagine the European Historical Flood Disaster Database service without access to Open Data, especially a service providing European citizens with this type of information. The Sentinel-1 Synthetic Aperture Radar (SAR) remotely sensed imagery provides a through-the-clouds snapshot of flood disasters. Open disaster databases provide the names of the cities and towns affected as well as the dates on which flood disasters occurred. Citizens sharing their flood event perspectives in real time via social media produce brand new data for documenting a flood event.

This Open Data, together with the services and infrastructure that make it accessible online, is the catalyst for this service. Data and accessibility costs were far too great in the past to build such a service. Furthermore, licensing issues related to remotely sensed imagery in many cases made it almost impossible to create such a downstream service.

Our biggest challenges so far...

... have been in discovering and cleaning the data in order to turn it into actionable information with the quality and standards expected by users. The unexpected variability in data quality has been a challenge, but it has also helped us better understand the value of the information being shared. While there is no single solution to this problem, a baseline has been set from which new data can be more easily compared and decisions about its value made more efficiently.