
Introzzi, L., Cherubini, P., Reverberi, C. (2024). Human-AI interaction as cooperation: Towards a theory of artificial mind. SISTEMI INTELLIGENTI, 36(2), 499-514 [10.1422/113333].

Human-AI interaction as cooperation: Towards a theory of artificial mind

Introzzi L. (first author); Cherubini P.; Reverberi C. (last author)
2024

Abstract

A recent development in the psychology of thought concerns decision processes in hybrid teams, composed of a human agent and an artificial agent. Interaction in hybrid teams can be conceptualized as cooperation: support systems are developed not to substitute for humans but to collaborate with them. The human cognitive system has several blind spots, limits that can make performance on a task suboptimal. A cognitive analysis of hybrid teams in medical contexts can show the impact of these limits and how they are managed. Some of these limits, perceptual and attentional in nature, can be partially compensated for by using information provided by an artificial agent in support of the decision process. Other limits show up in the effective use of the information provided. The fundamental problem is calibration: correctly weighting the human opinion and the AI opinion, and then rationally integrating the two. The calibration process can be systematically distorted by cognitive biases and by the lack of a proper understanding of how the AI contributes to the team, leading to suboptimal performance. Developing a theory of the artificial mind allows the human decision maker to represent the actual functioning of the AI, the human limits the AI is meant to address, and the human strengths on which resources are better concentrated, thereby facilitating calibration. A rationally calibrated interaction yields information that improves decisions and diagnoses.
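
The abstract does not specify a formal model, but the calibration problem it describes (weighting a human opinion and an AI opinion, then integrating them rationally) can be sketched with a standard log-odds opinion pool. Everything below is an illustrative assumption, not the authors' method: the function names, the weight w_ai, and the example probabilities are all hypothetical.

    import math

    def logit(p: float) -> float:
        # Log-odds of a probability in (0, 1).
        return math.log(p / (1.0 - p))

    def sigmoid(x: float) -> float:
        # Inverse of logit: maps log-odds back to a probability.
        return 1.0 / (1.0 + math.exp(-x))

    def pooled_opinion(p_human: float, p_ai: float, w_ai: float) -> float:
        # Weighted log-odds pool of two probabilistic opinions.
        # w_ai in [0, 1] is the weight given to the AI opinion; a calibrated
        # team member would set it to reflect the AI's actual reliability
        # on the task. Systematically over- or under-weighting one source
        # is one concrete form of the miscalibration discussed above.
        z = (1.0 - w_ai) * logit(p_human) + w_ai * logit(p_ai)
        return sigmoid(z)

    # Illustrative numbers only: a clinician estimates a 30% probability
    # of a finding, the AI support system estimates 80%, and the AI is
    # weighted at 0.6 based on its known accuracy.
    print(round(pooled_opinion(0.30, 0.80, 0.60), 3))  # -> 0.621

With these illustrative numbers the pooled estimate lands at about 0.62, between the two opinions but closer to the more heavily weighted AI; distorting w_ai away from the AI's real reliability would shift the team's decision accordingly.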
Journal article - Scientific article
artificial intelligence; cognitive bias; human-AI interaction; hybrid intelligence; hybrid teams; theory of artificial mind
Language: Italian
Year: 2024
Volume: 36
Issue: 2
Pages: 499-514
Files in this product:
There are no files associated with this product.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/10281/527302