
Global visual localization in LiDAR-maps through shared 2D-3D embedding space

Cattaneo D.; Vaghi M.; Fontana S.; Ballardini A. L.; Sorrenti D. G.
2020

Abstract

Global localization is an important and widely studied problem for many robotic applications. Place recognition approaches can be exploited to solve this task, e.g., in the autonomous driving field. While most vision-based approaches match an image w.r.t. an image database, global visual localization within LiDAR-maps remains fairly unexplored, even though the path toward high-definition 3D maps, produced mainly from LiDARs, is clear. In this work we leverage Deep Neural Network (DNN) approaches to create a shared embedding space between images and LiDAR-maps, allowing for image-to-3D-LiDAR place recognition. We trained a 2D DNN and a 3D DNN that create embeddings, from images and from point clouds respectively, that are close to each other when they refer to the same place. An extensive experimental evaluation is presented to assess the effectiveness of the approach w.r.t. different learning paradigms, network architectures, and loss functions. All the evaluations have been performed using the Oxford RobotCar Dataset, which encompasses a wide range of weather and light conditions.
Type: slide + paper
Keywords: global visual localization, place recognition, autonomous driving, high definition 3D maps, deep neural network, shared embedding space, 3D place recognition, 3D DNN
Language: English
Conference: 2020 IEEE International Conference on Robotics and Automation, ICRA 2020 - 31 May 2020 - 31 August 2020
Year: 2020
Published in: Proceedings - IEEE International Conference on Robotics and Automation
ISBN: 9781728173955
Pages: 4365-4371
Article number: 9196859
Rights: reserved
Cattaneo, D., Vaghi, M., Fontana, S., Ballardini, A., Sorrenti, D. (2020). Global visual localization in LiDAR-maps through shared 2D-3D embedding space. In Proceedings - IEEE International Conference on Robotics and Automation (pp.4365-4371). Institute of Electrical and Electronics Engineers Inc. [10.1109/ICRA40945.2020.9196859].
Files in this product:
File: Cattaneo-2020-IEEE Int Conf Rob Automat-VoR.pdf (access: archive managers only)
Attachment type: Publisher's Version (Version of Record, VoR)
License: All rights reserved
Size: 2.62 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/10281/328581
Citations
  • Scopus 39
  • Web of Science 32