Escaping Plato's Cave: Towards the Alignment of 3D and Text Latent Spaces

Melzi, S.
2025

Abstract

Recent works have shown that, when trained at scale, unimodal 2D vision and text encoders converge to learned features that share remarkable structural properties, despite arising from different representations. However, the role of 3D encoders with respect to other modalities remains unexplored. Furthermore, existing 3D foundation models that leverage large datasets are typically trained with explicit alignment objectives with respect to frozen encoders from other representations. In this work, we investigate the possibility of a posteriori alignment of representations obtained from unimodal 3D encoders with text-based feature spaces. We show that naive post-training feature alignment of unimodal text and 3D encoders results in limited performance. We then focus on extracting subspaces of the corresponding feature spaces, and discover that projecting learned representations onto well-chosen lower-dimensional subspaces yields significantly better alignment, leading to improved accuracy on matching and retrieval tasks. Our analysis further sheds light on the nature of these shared subspaces, which roughly separate semantic from geometric data representations. Overall, ours is the first work to establish a baseline for post-training alignment of unimodal 3D and text feature spaces, and to highlight both the shared and unique properties of 3D data compared to other representations.
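The subspace-projection idea described in the abstract can be illustrated with a toy sketch: given paired embeddings from two independently trained encoders, project each space onto a low-dimensional PCA subspace and align the two with an orthogonal Procrustes rotation before doing retrieval. This is a hypothetical illustration under those assumptions, not necessarily the paper's actual procedure; all names and data here are synthetic.

```python
# Toy sketch of post-hoc subspace alignment between two frozen unimodal
# embedding spaces (stand-ins for 3D and text encoders). Hypothetical
# recipe: whitened PCA projection + orthogonal Procrustes on paired data.
import numpy as np

def align_subspaces(X, Y, k):
    """Project paired embeddings X (n x d1) and Y (n x d2) onto whitened
    k-dimensional PCA subspaces, then rotate X's coordinates onto Y's
    with an orthogonal Procrustes solution."""
    Xc, Yc = X - X.mean(0), Y - Y.mean(0)
    # Whitened top-k principal components = top-k left singular vectors.
    Px = np.linalg.svd(Xc, full_matrices=False)[0][:, :k]
    Py = np.linalg.svd(Yc, full_matrices=False)[0][:, :k]
    # Orthogonal Procrustes: R = argmin_{R orthogonal} ||Px R - Py||_F.
    U, _, Vt = np.linalg.svd(Px.T @ Py)
    return Px @ (U @ Vt), Py

def top1_retrieval(A, B):
    """Fraction of rows of A whose most cosine-similar row of B is the
    paired (same-index) one."""
    A = A / np.linalg.norm(A, axis=1, keepdims=True)
    B = B / np.linalg.norm(B, axis=1, keepdims=True)
    return float(np.mean((A @ B.T).argmax(1) == np.arange(len(A))))

# Synthetic paired data: both spaces are linear views of a shared latent,
# mimicking two modalities that encode the same underlying structure.
rng = np.random.default_rng(0)
Z = rng.normal(size=(200, 16))      # shared 16-d latent structure
X = Z @ rng.normal(size=(16, 128))  # stand-in for 3D embeddings
Y = Z @ rng.normal(size=(16, 64))   # stand-in for text embeddings
Xa, Ya = align_subspaces(X, Y, k=16)
acc = top1_retrieval(Xa, Ya)        # near-perfect on this noiseless toy
```

On this noiseless toy both projections span the same latent subspace, so the Procrustes rotation recovers the pairing almost exactly; with real encoders, only part of each space is shared, which is why the choice of subspace matters.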
paper
learning; multi-modal; nlp; point cloud; representation; text; uni-modal;
English
2025 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2025 - 10-17 June 2025
2025
2025 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
979-8-3315-4364-8
2025
19825
19835
open
Hadgi, S., Moschella, L., Santilli, A., Gomez, D., Huang, Q., Rodolà, E., et al. (2025). Escaping Plato's Cave: Towards the Alignment of 3D and Text Latent Spaces. In 2025 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (pp. 19825-19835). IEEE Computer Society. [10.1109/CVPR52734.2025.01846]
Files in this item:
File: Hadgi-2025-CVPR-preprint.pdf
Access: open access
Attachment type: Submitted Version (Pre-print)
License: Other
Size: 1.88 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this item: https://hdl.handle.net/10281/590823
Citations
  • Scopus 0
  • Web of Science 0