A semantic segmentation-guided approach for ground-to-aerial image matching

Published in IGARSS 2024 - 2024 IEEE International Geoscience and Remote Sensing Symposium, 2024

Nowadays, the accurate geo-localization of ground-view images plays an important role across domains as diverse as journalism, forensic analysis, transportation, and Earth Observation. This work addresses the problem of matching a query ground-view image with the corresponding satellite image in the absence of GPS data. It does so by comparing features extracted from the ground-view image and the satellite image, innovatively leveraging the satellite image's semantic segmentation mask through a three-stream Siamese-like network. The proposed method, Semantic Align Net (SAN), focuses on limited Field-of-View (FoV) and ground panorama images (images with a FoV of 360°). The novelty lies in fusing satellite images with their semantic segmentation masks, ensuring that the model extracts useful features and focuses on the significant parts of the images. This work shows how SAN, through semantic analysis of the images, improves performance on the unlabelled CVUSA dataset for all tested FoVs.
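To make the three-stream idea concrete, below is a minimal sketch (not the authors' implementation) of a Siamese-like network with one stream for the ground-view image, one for the satellite image, and one for the satellite segmentation mask. The backbone choice (ResNet-18), fusion by concatenation, and cosine-similarity matching are illustrative assumptions; the paper's actual architecture, fusion scheme, and loss may differ.

```python
# Illustrative three-stream Siamese-like sketch; architectural details are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models


class SemanticAlignSketch(nn.Module):
    def __init__(self, embed_dim=256):
        super().__init__()
        # Three independent CNN encoders (streams), one per input modality.
        self.ground_enc = self._make_encoder()
        self.sat_enc = self._make_encoder()
        self.mask_enc = self._make_encoder()
        # Fuse satellite-image and segmentation-mask features, then project
        # both branches into a shared embedding space for matching.
        self.fuse = nn.Linear(512 * 2, embed_dim)
        self.ground_proj = nn.Linear(512, embed_dim)

    @staticmethod
    def _make_encoder():
        backbone = models.resnet18(weights=None)
        return nn.Sequential(*list(backbone.children())[:-1])  # drop the FC head

    def forward(self, ground, satellite, seg_mask):
        g = self.ground_enc(ground).flatten(1)        # (B, 512) ground-view features
        s = self.sat_enc(satellite).flatten(1)        # (B, 512) satellite features
        m = self.mask_enc(seg_mask).flatten(1)        # (B, 512) segmentation-mask features
        aerial = self.fuse(torch.cat([s, m], dim=1))  # fused aerial embedding
        query = self.ground_proj(g)                   # ground-view embedding
        # L2-normalise so that matching reduces to cosine similarity.
        return F.normalize(query, dim=1), F.normalize(aerial, dim=1)


if __name__ == "__main__":
    net = SemanticAlignSketch()
    ground = torch.randn(2, 3, 128, 512)     # limited-FoV or panoramic ground image
    satellite = torch.randn(2, 3, 256, 256)  # aerial view
    seg_mask = torch.randn(2, 3, 256, 256)   # its semantic segmentation mask
    q, a = net(ground, satellite, seg_mask)
    similarity = (q * a).sum(dim=1)          # higher = better ground-to-aerial match
    print(similarity.shape)                  # torch.Size([2])
```

In such a setup, retrieval would rank candidate satellite tiles by the cosine similarity between the ground-view embedding and each fused aerial embedding; the training objective (e.g., a triplet or contrastive loss) is likewise an assumption here.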

Text citation:

F. Pro, N. Dionelis, L. Maiano, B. L. Saux and I. Amerini, "A Semantic Segmentation-Guided Approach for Ground-to-Aerial Image Matching," IGARSS 2024 - 2024 IEEE International Geoscience and Remote Sensing Symposium, Athens, Greece, 2024, pp. 2630-2635, doi: 10.1109/IGARSS53475.2024.10642526.

Bibtex citation:

@INPROCEEDINGS{10642526,
author={Pro, Francesco and Dionelis, Nikolaos and Maiano, Luca and Saux, Bertrand Le and Amerini, Irene},
booktitle={IGARSS 2024 - 2024 IEEE International Geoscience and Remote Sensing Symposium},
title={A Semantic Segmentation-Guided Approach for Ground-to-Aerial Image Matching},
year={2024},
volume={},
number={},
pages={2630-2635},
keywords={Accuracy;Satellites;Image matching;Semantic segmentation;Semantics;Streaming media;Feature extraction;Earth Observation data;Ground-to-aerial image matching;Semantic segmentation;Data fusion},
doi={10.1109/IGARSS53475.2024.10642526}}

Download Paper