
Lecca, M., Bianco, S. (2025). Common issues and human intervention in object detection from handcrafted features to deep learning: discussion. JOURNAL OF THE OPTICAL SOCIETY OF AMERICA. A, OPTICS, IMAGE SCIENCE, AND VISION, 42(12), 1977-1987 [10.1364/JOSAA.569685].

Common issues and human intervention in object detection from handcrafted features to deep learning: discussion

Lecca M.; Bianco S.
2025

Abstract

While traditional methods base object detection on the handcrafted definition of relevant visual features and rules, machine/deep learning methods achieve this task by learning both features and rules from a training set. The traditional and machine/deep learning object detection workflows are often described as opposites: in the traditional framework, the visual features and rules to detect the object of interest are provided as input, while in the machine/deep learning framework they are learned automatically from the data for the task considered and constitute the final trained model. In this work, we analyze the object detection recipe and show that these two approaches actually share three common issues that require human supervision and ad hoc procedures to be addressed: the design of an object model suitable for the context, devices, and task at hand; the achievement of detection robustness against factors such as noise, image quality, geometric changes, and light variations; and the definition of an appropriate matching function. We also briefly review some common metrics for evaluating object detection performance, showing that human intervention is crucial in this task as well. Our analysis aims at fostering a more aware use of object detection approaches and at stimulating new research for automating, where possible, the tasks that humans are still in charge of.
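Among the matching and evaluation concepts the abstract refers to, a standard criterion for deciding whether a detected box matches a ground-truth box is intersection over union (IoU). The following is a minimal illustrative sketch, not code from the paper; the function name, box convention, and threshold are assumptions for illustration:

```python
# Hedged sketch of intersection over union (IoU), the box-overlap measure
# commonly used as the matching criterion in detection evaluation.
# Boxes are (x_min, y_min, x_max, y_max); names are illustrative.

def iou(box_a, box_b):
    """Return the intersection-over-union of two axis-aligned boxes."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection rectangle (zero area if the boxes do not overlap).
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# A detection is typically counted as a true positive when its IoU with a
# ground-truth box exceeds a human-chosen threshold, e.g. 0.5 — one place
# where human intervention enters the evaluation.
print(iou((0, 0, 2, 2), (1, 1, 3, 3)))  # → 0.14285714285714285 (= 1/7)
```

The threshold choice (0.5, 0.75, or a range as in some benchmarks) is exactly the kind of ad hoc, human-supervised decision the paper discusses.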
Type: Journal article - Scientific article
Keywords: Object detection, handcrafted features, deep learning
Language: English
Publication date: 20 Nov 2025
Year: 2025
Volume: 42
Issue: 12
First page: 1977
Last page: 1987
Rights: reserved
Files in this product:
File: Lecca-2025-J Optical Soc Am A-VoR.pdf
Attachment type: Publisher's Version (Version of Record, VoR)
License: All rights reserved
Size: 990.35 kB
Format: Adobe PDF
Access: archive administrators only (copy available on request)

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/10281/583281
Citations
  • Scopus: 0
  • Web of Science: 0