Mentalistic Stances Towards AI Systems: Beyond the Intentional Stance

Larghi, Silvia; Datteri, Edoardo
2024

Abstract

Under what circumstances do we attribute a mind to AI systems? And, when we do, how do we think their minds work? Answering these questions is crucial to inform the design of safe and trustworthy AI, to inform research on the ethical, social, and legal issues raised by the increasing presence of AI systems in everyday life, and to investigate how AI systems can be used as tools to study human and social cognition. This work proposes a philosophical reflection on the possible structure of people’s mental models of AI systems. We distinguish between two possible styles of modeling that people may adopt in everyday contexts. Both involve the attribution of mental states and cognitive abilities to the AI system, though they differ from one another in some relevant respects. One modeling style is akin to folk psychology and relies on the attribution of beliefs, desires, and other propositional attitudes to the system. The other, which we will refer to as folk-cognitivist, is more akin to the account of the structure of the mind that characterizes classical cognitive science. These modeling styles correspond to different classes of mentalistic stances that people may adopt when they interact with AI systems in ordinary contexts.
Publication type: Book chapter
Keywords: Philosophy of artificial intelligence, Philosophy of cognitive science, Human-AI interaction, Mental state attribution
Language: English
Published in: Software Engineering and Formal Methods. SEFM 2023 Collocated Workshops CIFMA 2023 and OpenCERT 2023, Eindhoven, The Netherlands, November 6–10, 2023, Revised Selected Papers
Editor: Aldini, A.
Online publication date: 3 August 2024
Year: 2024
ISBN: 9783031660207
Publisher: Springer
Pages: 28-41
Citation: Larghi, S., Datteri, E. (2024). Mentalistic Stances Towards AI Systems: Beyond the Intentional Stance. In A. Aldini (Ed.), Software Engineering and Formal Methods. SEFM 2023 Collocated Workshops CIFMA 2023 and OpenCERT 2023, Eindhoven, The Netherlands, November 6–10, 2023, Revised Selected Papers (pp. 28-41). Springer. DOI: 10.1007/978-3-031-66021-4_2.
Rights: reserved
Files in this record:
Larghi-Datteri-2024-SEFM 2023-VoR.pdf
Attachment type: Publisher’s Version (Version of Record, VoR)
License: All rights reserved
Access: archive administrators only
Size: 330.34 kB
Format: Adobe PDF
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/10281/504939