Information Discovery from Big Earth Observation Data Archives by Learning from Volunteered Geographic Information (IDEAL-VGI)

  • Moritz Schott
    Heidelberg University, Institute of Geography, GIScience / Geoinformatics Research Group
  • Prof. Dr. Alexander Zipf
    Heidelberg University, Institute of Geography, GIScience / Geoinformatics Research Group
  • Prof. Dr. Begüm Demir
    Technische Universität Berlin, Department of Computer Engineering and Microelectronics, Remote Sensing Image Analysis (RSiM) Group
  • apl. Prof. Dr. Sven Lautenbach
    Heidelberg University, Institute of Geography, GIScience / Geoinformatics Research Group
  • Adina Zell
  • Gencer Sümbül
    Faculty of Electrical Engineering and Computer Science, Remote Sensing Image Analysis (RSiM) Group, Technische Universität Berlin
IDEAL-VGI

OpenStreetMap (OSM) has evolved into one of the most widely used geographic databases and is a prototype of volunteered geographic information (VGI). Recently, OSM has become a popular source of labeled data for the remote sensing (RS) community. However, the spatially heterogeneous data quality of OSM poses challenges for training machine learning models. Frequently, OSM land-use and land-cover (LULC) data has been taken at face value without critical reflection.

The aim of the IDEAL-VGI project was to explore options to better deal with the challenges of using OSM as a source of LULC labels for RS applications. The project therefore developed tools and knowledge on both sides of this process: the VGI data source and the remote sensing community.

OSM as a Data Source of Unknown Quality

Supervised deep learning (DL) methods have proven effective for Earth observation applications (e.g., multi-label image classification, land-cover map generation) on ever-growing remote sensing (RS) image archives. The success of these methods depends on the availability of a large quantity of annotated RS training images. However, the manual collection of RS image annotations is time-consuming and costly. To overcome this, RS images can be automatically associated with multiple LULC classes (i.e., multi-labels) by using OSM tags. In this way, large training sets can be created at zero cost for DL-based multi-label RS image classification methods.
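
As a minimal sketch of this idea (not the project's actual pipeline), the snippet below collects multi-labels for one image patch by intersecting its footprint with OSM polygons; the tag-to-class dictionary, file name and patch bounds are illustrative assumptions.

```python
# Minimal sketch: derive multi-labels for an RS image patch from OSM tags.
# The tag-to-class mapping, file name and patch bounds are illustrative
# assumptions, not the mapping used in the project.
import geopandas as gpd
from shapely.geometry import box

# Hypothetical mapping from OSM key=value pairs to LULC class names.
TAG_TO_CLASS = {
    ("landuse", "residential"): "urban",
    ("landuse", "farmland"): "agriculture",
    ("natural", "wood"): "forest",
    ("natural", "water"): "water",
}

def patch_multilabels(osm_polygons: gpd.GeoDataFrame, patch_bounds) -> set[str]:
    """Collect all LULC classes whose OSM polygons intersect the patch."""
    patch = box(*patch_bounds)  # (minx, miny, maxx, maxy) in the layer CRS
    hits = osm_polygons[osm_polygons.intersects(patch)]
    labels = set()
    for _, row in hits.iterrows():
        for (key, value), cls in TAG_TO_CLASS.items():
            if row.get(key) == value:
                labels.add(cls)
    return labels

# Usage (assuming an OSM extract with 'landuse'/'natural' columns):
# polygons = gpd.read_file("osm_lulc.gpkg")
# print(patch_multilabels(polygons, patch_bounds=(476000, 5471000, 477200, 5472200)))
```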

Figure: An illustration of DL-based multi-label RS image classification.

However, OSM tags can be outdated compared to the considered RS images, or noisy because of changes on the ground or annotation errors in the OSM database. This can introduce noisy labels into the training data when OSM tags are utilized as the source of training image annotations. In the framework of multi-label RS image classification, label noise is associated with missing labels or wrong labels. A missing label means that although a land-use/land-cover class is present in an RS image, the corresponding class label is not assigned. A wrong label means that a class label is assigned to an RS image although the corresponding class is not present in the image.
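
The two noise types can be illustrated with a toy multi-label encoding; the classes and values below are made up for illustration.

```python
# Illustration (assumed encoding): multi-labels as a binary vector over classes.
import numpy as np

classes = ["urban", "agriculture", "forest", "water"]
true_labels  = np.array([1, 1, 0, 1])  # classes actually present in the image
noisy_labels = np.array([1, 0, 1, 1])  # labels derived from (possibly outdated) OSM tags

missing = (true_labels == 1) & (noisy_labels == 0)  # present, but not annotated
wrong   = (true_labels == 0) & (noisy_labels == 1)  # annotated, but not present
print("missing labels:", [c for c, m in zip(classes, missing) if m])  # ['agriculture']
print("wrong labels:  ", [c for c, w in zip(classes, wrong) if w])    # ['forest']
```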

Training deep neural networks (DNNs) on a training set that includes noisy labels due to OSM tags may lead to sub-optimal model parameters and inaccurate inference for RS image classification. Therefore, the fitness for purpose of OSM LULC information for use by the RS community was investigated considering two label types:

  • pixel-based labels using the raw OSM LULC polygons (see the rasterization sketch below);
  • multi-labels aggregating the available OSM information within squares of 1.2 km × 1.2 km.
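
For the pixel-based label type, a minimal sketch of turning raw OSM LULC polygons into a per-pixel label mask might look as follows; the class codes, file name and raster grid are assumptions rather than the project's actual settings.

```python
# Minimal sketch: rasterize OSM LULC polygons into a pixel-based label mask.
# Class codes, file name and the raster grid are illustrative assumptions.
import geopandas as gpd
import numpy as np
from rasterio import features
from rasterio.transform import from_origin

CLASS_CODES = {"residential": 1, "farmland": 2, "wood": 3, "water": 4}

polygons = gpd.read_file("osm_lulc.gpkg")  # assumed OSM extract with a 'landuse' column
shapes = [
    (geom, CLASS_CODES[lu])
    for geom, lu in zip(polygons.geometry, polygons["landuse"])
    if lu in CLASS_CODES
]

# 10 m pixels on a 120 x 120 grid (1.2 km x 1.2 km); the top-left corner is assumed.
transform = from_origin(476000, 5472200, 10, 10)
mask = features.rasterize(
    shapes, out_shape=(120, 120), transform=transform, fill=0, dtype="uint8"
)
print(np.unique(mask))  # background (0) plus the rasterized class codes
```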


OSM Quality for Pixel-Based Labels

Depending on the quality definition and reference data, around 80% of the analysed, globally distributed OSM polygons were of very high or perfect quality (Schott & Lautenbach, unpublished). These elements are fit for immediate use by the RS community. These results support previous findings in the field; yet users still face uncertainty about the data quality at their specific location, timestamp and data topic, and the issue of unknown quality therefore remains. To address this, a tool was created that helps to (automatically) connect data quality with data attributes: the OSM Element Vectorisation tool (OEV).

OSM Element Vectorisation (OEV) Tool

The tool links more than 32 data attributes covering semantic and geometric properties, mapper and community analyses, as well as external software and services. This large collection of information creates a multidimensional view of the data that extends beyond pure data quality analyses into more generic data mining. It helps the RS community as well as the VGI community to better understand their data and act accordingly. The tool was presented at the FOSS4G conference 2022 in Florence (Schott et al., 2022) and its source code is available under an open license. Providing a command line interface, an application programming interface and a website (https://oev.geog.uni-heidelberg.de/), the tool enables users from all technical backgrounds to take a detailed look at OSM data.

Video: In a video podcast, Prof. Dr. Sven Lautenbach and Moritz Schott provide an introduction to OSM, OSM data quality and the OSM Element Vectorisation Tool.

Figure: The OSM Element Vectorisation (OEV) frontend website to investigate OSM quality and data attributes of single objects in a user-friendly manner.

OSM Quality for Multi-Labels

For the second use case, an experiment in south-west Germany was implemented. The area is known to have high OSM data quality. Indeed, analyses showed that 80% of the labels were correct when assigning LULC class labels to small regions based on OSM using a dedicated filter mechanism. Still, the remaining data issues can pose problems for deep learning models, even more so in regions where lower OSM data quality can be expected.

Figure: An example of RS image patches extracted from a Sentinel-2 satellite image acquired over south-west Germany, including parts of France, in June 2021. The manual verification of OSM tags shows that 80% of all labels were correct when assigning multi-labels to the patches based on the OSM database. For marked areas, green and red boxes represent correct and incorrect annotations, respectively.

OSM as a Source of RS Image Labels for Training Deep Learning Models

To address the limitations of using OSM as the source of training RS image labels, we have developed methods to: 1) automatically detect noisy OSM tags; and 2) adjust the training labels associated with noisy OSM tags for label-noise robust learning of the DNN model parameters.

Noisy OSM Tag Detection

Region-based RS image representations that include local information and the related spatial organisation of LULC classes are important for the accurate detection of noisy OSM tags. However, existing DL-based multi-label RS image classification methods are not designed to provide spatial information regarding class locations.

Figure: An illustration of our noisy OSM tag detection method.

We have developed a method that:

  • utilizes an explainable artificial intelligence (XAI) algorithm to obtain visual explanations for the model predictions (i.e., self-enhancement maps);
  • determines prototype vectors for each class and compares new feature vectors to their class prototype (see the sketch below):
    • if the prototype and feature vectors are dissimilar to each other, the related class label is considered to be noisy.
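
A minimal sketch of the prototype-comparison step is given below; the cosine-similarity measure, the fixed threshold and the placeholder inputs are assumptions, not necessarily the project's exact formulation.

```python
# Sketch of the prototype-comparison step. The similarity measure, threshold
# and placeholder inputs are assumptions for illustration only.
import torch
import torch.nn.functional as F

def detect_noisy_labels(features, labels, prototypes, threshold=0.5):
    """
    features:   (N, D) feature vectors of an image, one per assigned class
    labels:     list of N class indices assigned to the image
    prototypes: (C, D) prototype per class, e.g. the mean feature of
                verified samples of that class
    Returns the class indices whose labels are flagged as noisy.
    """
    noisy = []
    for i, cls in enumerate(labels):
        sim = F.cosine_similarity(features[i], prototypes[cls], dim=0)
        if sim < threshold:  # feature is far from its class prototype
            noisy.append(cls)
    return noisy

# Usage with random placeholders:
feats = torch.randn(3, 128)
protos = torch.randn(10, 128)
print(detect_noisy_labels(feats, [2, 5, 7], protos))
```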

Figure: An example RS image (a) and its self-enhancement maps obtained with DeepLabV3+ trained under synthetic label noise rates of (b) 0%; (c) 10%; (d) 20%; (e) 30%; and (f) 40%.

                                          Synthetic Label Noise Rate (SLNR) on Test Set
Training Set                              0%     10%    20%    30%    40%
Abundant non-verified data (SLNR = 0%)    80.0   88.5   87.0   89.0   94.0
Abundant non-verified data (SLNR = 20%)   63.0   72.5   79.0   82.5   86.5
Abundant non-verified data (SLNR = 40%)    0.5   30.0   50.5   64.5   71.5
Abundant non-verified data (SLNR = 60%)    0.0   31.0   55.5   65.5   74.0
Abundant non-verified data (SLNR = 80%)    0.0   31.5   55.5   65.0   74.0
Small verified data                       45.0   58.5   67.0   78.5   84.0

Table: Label noise detection results in terms of accuracy (%). Rows denote the training set; columns denote the synthetic label noise rate (SLNR) on the test set.

References
  1. X. Zhang, Y. Wei, Y. Yang and F. Wu, "Rethinking localization map: Towards accurate object perception with self-enhancement maps," arXiv preprint arXiv:2006.05220, 2020.
  2. K.-H. Lee, X. He, L. Zhang and L. Yang, "CleanNet: Transfer Learning for Scalable Image Classifier Training with Label Noise," IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 5447-5456, 2018.
  3. L.-C. Chen, Y. Zhu, G. Papandreou, F. Schroff and H. Adam, "Encoder-decoder with atrous separable convolution for semantic image segmentation," European Conference on Computer Vision, 2018.


Label Noise Robust Multi-Label Image Classification

When a small verified subset of a training set is available, the above-mentioned method can be used to automatically find training images associated with noisy labels. Accordingly, for label-noise robust multi-label image classification, we have divided the training procedure into two stages (a minimal sketch follows after the list):

  • In the first stage, the model parameters of the considered DNN were learned only on the small verified subset. Once this stage was finalized:
    • We first automatically identified the training images with noisy labels from the rest of the training set based on the above-mentioned method.
    • Then we automatically corrected the noisy labels based on self-enhancement maps.
  • In the second stage, the considered DNN was fine-tuned on the whole training set with the corrected labels.
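
A condensed sketch of this two-stage procedure in PyTorch-style code is shown below; the model, data loaders and the simple label-correction rule are placeholder assumptions standing in for the components described above (in particular the self-enhancement-map correction).

```python
# Minimal sketch of the two-stage procedure with placeholder components;
# the model, data and the label-correction rule are illustrative assumptions.
import torch
import torch.nn as nn

def train(model, loader, optimizer, criterion, epochs=5):
    for _ in range(epochs):
        for images, labels in loader:
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()

def correct_labels(model, dataset, threshold=0.5):
    """Replace labels that disagree strongly with the model trained on verified
    data (a stand-in for the self-enhancement-map based correction)."""
    corrected = []
    with torch.no_grad():
        for image, label in dataset:
            pred = torch.sigmoid(model(image.unsqueeze(0))).squeeze(0)
            new_label = torch.where(
                (pred - label).abs() > threshold, pred.round(), label
            )
            corrected.append((image, new_label))
    return corrected

# Stage 1: train on the small verified subset only.
# train(model, verified_loader, optimizer, nn.BCEWithLogitsLoss())
# Stage 2: correct labels of the remaining data and fine-tune on everything.
# full_set = correct_labels(model, non_verified_dataset)
# train(model, torch.utils.data.DataLoader(full_set, batch_size=32),
#       optimizer, nn.BCEWithLogitsLoss())
```
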
SLNR on Training Set    Standard Learning    Label Noise Robust Learning
0%                      99.2                 95.6
20%                     96.8                 89.9
40%                     70.9                 91.0
60%                     66.6                 88.3
80%                     60.4                 87.0

Table: Multi-label image classification results in terms of mean average precision (%).

Closing the Loop

The automatic detection of noisy OSM tags can be used to improve OSM data quality in a feedback loop. In this way, the OSM community was given the resources to correct data errors through well-known and established tools like the HOT Tasking Manager: https://tm.geog.uni-heidelberg.de/.
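
One simple way to hand such flagged regions back to mappers is to export their footprints, together with a mapping hint, as GeoJSON that tasking-manager style tools can ingest; the coordinates and hints below are examples only.

```python
# Sketch: export image patches flagged as erroneous to GeoJSON so mappers
# can load them into a tasking manager. Coordinates and hints are examples.
import json

flagged_patches = [
    {"bounds": (8.40, 48.99, 8.41, 49.00), "hint": "missing 'forest' label"},
    {"bounds": (8.42, 48.99, 8.43, 49.00), "hint": "wrong 'water' label"},
]

features = []
for patch in flagged_patches:
    minx, miny, maxx, maxy = patch["bounds"]
    features.append({
        "type": "Feature",
        "properties": {"hint": patch["hint"]},
        "geometry": {
            "type": "Polygon",
            "coordinates": [[
                [minx, miny], [maxx, miny], [maxx, maxy], [minx, maxy], [minx, miny]
            ]],
        },
    })

with open("flagged_patches.geojson", "w") as f:
    json.dump({"type": "FeatureCollection", "features": features}, f, indent=2)
```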

Figure: The custom HOT Tasking Manager to feed back deep learning results to the OSM community. The image shows the concise problem description including mapping hints as well as the precise regions of identified data errors.

The complete methods and workflow were presented at the State of the Map conference 2022 in Florence and are likewise available under an open license (Schott et al., 2022).

Extending Knowledge Across VGIScience Projects

The insights into data and community analyses gained during the OSM data analyses were successfully applied in the context of two other projects within the VGIScience SPP dealing with user-generated content: a data analysis of Wikidata revealed much potential for integration as well as project-specific particularities (Dsouza et al., 2022), while data quality analyses were necessary to evaluate the results of a semi-automated data integration workflow combining bird observations from social media with citizen science data (Hartmann et al., 2022).

Outlook

The insights gained during the project have led us to new research directions. It became clear that the epistemologies of VGI are a much-neglected field of study. Yet, they form the basis for a thorough understanding of data quality as well as for the future of this important data source. Based on our research in this project, follow-up proposals are being prepared.

We have also shown that label-noise robust learning of DL models can be achieved within a single training procedure, either by identifying noisy labels during RS image representation learning based on the integration of generative and discriminative reasoning (Sumbul et al., 2023) or by importance reweighting of RS images (Sumbul et al., 2023). In addition, we have studied the effectiveness of selecting the most informative and representative RS image triplets for learning features of RS images with multi-labels (Sumbul et al., 2022).
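
As a rough illustration of the importance-reweighting idea only (not the formulation of Sumbul & Demir, 2023), per-sample weights can down-weight images whose labels are suspected to be noisy when computing the multi-label loss:

```python
# Rough illustration of importance reweighting (not the exact formulation of
# Sumbul & Demir, 2023): down-weight suspected noisy samples in the loss.
import torch
import torch.nn as nn

criterion = nn.BCEWithLogitsLoss(reduction="none")

def reweighted_loss(logits, labels, sample_weights):
    per_sample = criterion(logits, labels).mean(dim=1)  # (batch,) multi-label loss
    return (sample_weights * per_sample).mean()

logits = torch.randn(4, 10)                     # batch of 4 images, 10 classes
labels = torch.randint(0, 2, (4, 10)).float()
weights = torch.tensor([1.0, 1.0, 0.2, 0.1])    # lower weight for suspected noise
print(reweighted_loss(logits, labels, weights))
```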

Figure: An illustration of our GRID approach that jointly leverages the robustness of generative reasoning towards noisy labels and the effectiveness of discriminative reasoning on RS image representation learning (Sumbul et al, 2023).

Former Team Members

  • Tristan Kreuziger (TU Berlin)
  • Michael Schultz (Heidelberg University)
  • Leonie Größchen (Heidelberg University)

Publications

  1. Dsouza, A., Schott, M., & Lautenbach, S. (2022). Comparative Integration Potential Analyses of OSM and Wikidata – The Case Study of Railway Stations. In M. Minghini, P. Liu, H. Li, A. Y. Grinberger, & L. Juhasz (Eds.), Proceedings of the Academic Track at State of the Map 2022. Zenodo. DOI: 10.5281/zenodo.7004483
  2. Schott, M., Zell, A., Lautenbach, S., Demir, B., & Zipf, A. (2022). Returning the favor - Leveraging quality insights of OpenStreetMap-based land-use/land-cover multi-label modeling to the community. In M. Minghini, P. Liu, H. Li, A. Y. Grinberger, & L. Juhasz (Eds.), Proceedings of the Academic Track at State of the Map 2022. Zenodo. DOI: 10.5281/zenodo.7004593
  3. Schott, M., Lautenbach, S., Größchen, L., & Zipf, A. (2022). OpenStreetMap Element Vectorisation - A Tool for High Resolution Data Insights and Its Usability in the Land-Use and Land-Cover Domain. The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, XLVIII-4/W1-2022, 395–402. DOI: 10.5194/isprs-archives-XLVIII-4-W1-2022-395-2022
  4. Hartmann, M. C., Schott, M., Dsouza, A., Metz, Y., Volpi, M., & Purves, R. S. (2022). A text and image analysis workflow using citizen science data to extract relevant social media records: Combining red kite observations from Flickr, eBird and iNaturalist. Ecological Informatics, 71, 101782. DOI: 10.1016/j.ecoinf.2022.101782
  5. Sumbul, G., Ravanbakhsh, M., & Demir, B. (2022). Informative and Representative Triplet Selection for Multilabel Remote Sensing Image Retrieval. IEEE Transactions on Geoscience and Remote Sensing, 60, 1–11. DOI: 10.1109/TGRS.2021.3124326
  6. Sumbul, G., & Demir, B. (2023). Importance Reweighting for Label Noise Robust Image Representation Learning in Remote Sensing. Proceedings of the IEEE International Geoscience and Remote Sensing Symposium, Under Review.
  7. Sumbul, G., & Demir, B. (2023). Generative Reasoning Integrated Label Noise Robust Deep Image Representation Learning in Remote Sensing. ArXiv Preprint ArXiv:2212.01261, Under Review at IEEE Transactions on Image Processing.
  8. Büyüktaş, B., Sumbul, G., & Demir, B. (2023). Learning Across Decentralized Multi-Modal Remote Sensing Archives with Federated Learning. Proceedings of the IEEE International Geoscience and Remote Sensing Symposium, Under Review.