Application-oriented research sometimes requires taking a step back and focusing on basic research first. This step back often bears the potential to create an impact on a wider range of applications and research areas beyond VGIscience.
An example of this is a recent publication by the ENAP project group of VGIscience on semantic image retrieval. As opposed to keyword-based image search, content-based image retrieval methods search the internet for images that are similar to a query image provided by the user. This mechanism is known, for instance, from Google’s “reverse image search”. These traditional approaches, however, focus on the visual similarity of images and ignore semantic aspects. Google, for example, faced public criticism in 2015 because the Google Photos app mistakenly labeled photos of Black people as gorillas.
Up to now, most machine learning methods have tried to learn similarity purely from the image information. However, the semantic relation that exists between a caterpillar and a butterfly, for example, cannot be learned from images alone. To overcome this semantic gap, the computer scientists from the Friedrich Schiller University Jena have developed a method for integrating prior human knowledge about the semantic similarity of object classes into deep learning. These similarities are derived from taxonomies encoding relationships such as “a gorilla is an ape is a primate is a mammal is an animal is a living thing”.
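To give an intuition for such taxonomy-derived similarities, the following minimal Python sketch scores two classes by the height of their lowest common ancestor in a toy taxonomy: the deeper (more specific) the shared ancestor, the more similar the classes. The toy taxonomy and the exact normalization here are illustrative assumptions, not taken from the paper.

```python
# Toy taxonomy as child -> parent edges (illustrative classes only).
parents = {
    "gorilla": "ape", "ape": "primate", "primate": "mammal",
    "cat": "mammal", "mammal": "animal",
    "caterpillar": "insect", "butterfly": "insect", "insect": "animal",
    "animal": "living thing",
}

def ancestors(node):
    """Path from a node up to the root, inclusive."""
    path = [node]
    while path[-1] in parents:
        path.append(parents[path[-1]])
    return path

def height(node):
    """Height of a node: longest downward path to a leaf."""
    children = [c for c, p in parents.items() if p == node]
    return 0 if not children else 1 + max(height(c) for c in children)

def semantic_similarity(u, v):
    """1 minus the normalized height of the lowest common ancestor (LCA):
    classes meeting at a deep, specific ancestor score higher than
    classes that only share a generic one."""
    anc_v = set(ancestors(v))
    lca = next(a for a in ancestors(u) if a in anc_v)  # first shared ancestor
    root = ancestors(u)[-1]
    return 1.0 - height(lca) / height(root)
```

With this toy hierarchy, `semantic_similarity("caterpillar", "butterfly")` yields 0.8 (they meet at “insect”), while `semantic_similarity("gorilla", "butterfly")` yields only 0.2 (they meet at the generic “animal”), which is the kind of prior knowledge the method injects into the learned embedding.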
Their work substantially improved the semantic consistency of content-based image retrieval results and was awarded the Best Paper Award at the IEEE Winter Conference on Applications of Computer Vision (WACV), which took place in January 2019 in Hawaii.
Björn Barz, the first author of the award-winning paper, recently explained the method in more detail in an interview with the German radio station Deutschlandfunk. The interview (in German) can be listened to here:
In the future, the VGIscience researchers plan to transfer these methods from basic research to an application: finding relevant images of flood events shared on social media platforms.
Björn Barz, Joachim Denzler (2019):
Hierarchy-based Image Embeddings for Semantic Image Retrieval.
IEEE Winter Conference on Applications of Computer Vision (WACV) 2019, pp. 638-647, doi: 10.1109/WACV.2019.00073.