Ciardo, F., Bolis, M., Lapomarda, L., Roselli, C., & Datteri, E. (2026). Are social robots inherently deceptive? Sistemi Intelligenti, (1/2026, April), 203-225 [10.1422/120420].

Are social robots inherently deceptive?

Ciardo, F.; Lapomarda, L.; Roselli, C.; Datteri, E.
2026

Abstract

In his reflections on the ethical monitoring of technological research programmes, Tamburrini discussed the risk of deceiving people into believing that their robot companions possess cognitive and emotional capabilities that they do not have. This can result in dangerous forms of over-delegation and emotional attachment. Inspired by his remarks, this article reflects on the concept of 'deception' in human–robot interaction. After arguing that the concept is difficult to define, the article proposes the following intuitive working definition: deception arises from a mismatch between ordinary and scientific mental models of robots at a particular level of analysis. This raises the question of how people’s mental models of robots can be studied scientifically. Following a brief overview of the main philosophical, theoretical, and methodological approaches, an exploratory study is presented that examines individuals’ understanding of the behaviour of a non-social, non-humanoid robot. Using an adaptation of a standard psychological test, the study provisionally supports the claim that people easily attribute mental states, such as false beliefs, to this type of robot even in the absence of any deceptive narrative or complicated form of interaction. However, the experimental results obtained in this study and many others in this growing research field require careful scrutiny, necessitating strong collaboration between roboticists, psychologists and philosophers of science and mind.
Type: Journal article - Scientific article
Keywords: Human-robot interaction; Attribution of mental states to robots; Deception; Ethics of robotics
Language: English
Year: 2026
Issue: 1/2026, April
Pages: 203-225


Use this identifier to cite or link to this document: https://hdl.handle.net/10281/603081