Elmi, A., Mazzini, D., Tortella, P. (2020). Light3Dpose: Real-time multi-person 3D pose estimation from multiple views. In 25th International Conference on Pattern Recognition, ICPR 2020 (pp. 2755-2762). Institute of Electrical and Electronics Engineers Inc. [10.1109/ICPR48806.2021.9412652].

Light3Dpose: Real-time multi-person 3D pose estimation from multiple views

Elmi A.; Mazzini D.; Tortella P.
2020

Abstract

We present an approach to 3D pose estimation of multiple people from a few calibrated camera views. Our architecture, leveraging the recently proposed unprojection layer, aggregates feature maps from a 2D pose estimator backbone into a comprehensive representation of the 3D scene. This intermediate representation is then processed by a fully convolutional volumetric network and a decoding stage to extract 3D skeletons with sub-voxel accuracy. Our method achieves state-of-the-art MPJPE on the CMU Panoptic dataset using a few unseen views and obtains competitive results even with a single input view. We also assess the transfer-learning capabilities of the model by testing it on the publicly available Shelf dataset, obtaining good performance. The proposed method is inherently efficient: as a pure bottom-up approach, its cost is independent of the number of people in the scene. Furthermore, although the computational burden of the 2D part scales linearly with the number of input views, the architecture exploits a very lightweight 2D backbone that is orders of magnitude faster than the volumetric counterpart, resulting in fast inference. The system runs at 6 FPS while processing up to 10 camera views on a single 1080Ti GPU.
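The core aggregation step described above can be illustrated with a minimal NumPy sketch: each voxel center is projected into every camera view, the corresponding 2D backbone feature is sampled, and features are averaged across the views that see the voxel. This is an illustrative reconstruction of the general unprojection idea, not the authors' implementation; all names, shapes, and the nearest-neighbour sampling are assumptions.

```python
import numpy as np

def unproject_features(feat_maps, projs, grid_pts):
    """Aggregate per-view 2D feature maps into per-voxel 3D features.

    feat_maps: (V, H, W, C) feature maps from the 2D backbone (hypothetical shape)
    projs:     (V, 3, 4) camera projection matrices, K @ [R | t]
    grid_pts:  (N, 3) voxel-center coordinates in world space
    Returns:   (N, C) features averaged over the views where each voxel projects
               inside the image and in front of the camera
    """
    V, H, W, C = feat_maps.shape
    N = grid_pts.shape[0]
    homo = np.concatenate([grid_pts, np.ones((N, 1))], axis=1)  # homogeneous coords
    acc = np.zeros((N, C))
    cnt = np.zeros((N, 1))
    for v in range(V):
        uvw = homo @ projs[v].T                        # (N, 3) image-plane coords
        uv = uvw[:, :2] / np.clip(uvw[:, 2:3], 1e-6, None)
        u = np.round(uv[:, 0]).astype(int)             # nearest-neighbour sampling;
        r = np.round(uv[:, 1]).astype(int)             # a real layer would use bilinear
        vis = (u >= 0) & (u < W) & (r >= 0) & (r < H) & (uvw[:, 2] > 0)
        acc[vis] += feat_maps[v, r[vis], u[vis]]
        cnt[vis] += 1.0
    return acc / np.clip(cnt, 1.0, None)               # avoid division by zero
```

Because the voxel grid and the averaging are fixed regardless of how many people are present, this step reflects the bottom-up property noted in the abstract: only the number of views, not the number of subjects, affects its cost.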
paper
Cameras; Convolutional neural networks; Network architecture; Statistical tests; Transfer learning
English
25th International Conference on Pattern Recognition, ICPR 2020 - 10 January 2021 through 15 January 2021
Publication year: 2021
ISBN: 9781728188089
Pages: 2755-2762
Article number: 9412652
Files in this item:
No files are associated with this item.

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this item: https://hdl.handle.net/10281/532081
Citations
  • Scopus 7
  • Web of Science 5