
Missing the human in AI: on dehumanisation by generative AI chatbots using the case of Replika

Monti, Paolo
2026

Abstract

Large Language Model (LLM) based chatbots are increasingly prominent in the digital landscape, promising users a range of experiences, from human-like conversation to intimate companionship. But where exactly is the human in such technology? And what is its impact on the humanity of those persons engaging with it? Taking as its focus the popular LLM-based chatbot Replika, this article contends that, positive use cases notwithstanding, emerging generative AI technology carries with it a novel and more subtle form of dehumanisation. We combine an autoethnographic investigation with a multi-disciplinary critique (human-computer interaction; moral philosophy; law) to argue that such technology risks dehumanisation by inviting, demanding or conditioning human emotional and attentional engagement in circumstances where reciprocity is structurally impossible. We argue further that existing conceptual, philosophical and regulatory systems will need to be adapted to help diminish or erase the risks and harms presented by these emerging technologies.
Journal article - Scientific article
Dehumanisation; Large Language Models; Technological Frames; Replika; Regulation; Anthropomorphism; Ethics of Care; Reciprocity
English
20 Apr 2026
2026
28
2
1
17
27
open
Monti, P., Grünloh, C., Worthington, M. (2026). Missing the human in AI: on dehumanisation by generative AI chatbots using the case of Replika. ETHICS AND INFORMATION TECHNOLOGY, 28(2), 1-17 [10.1007/s10676-026-09898-x].
Files in this item:
File: Monti et al-2026-Ethics Inf Technol-VoR.pdf
Open access
Description: Article "Missing the human in AI: on dehumanisation by generative AI chatbots using the case of Replika"
Attachment type: Publisher's Version (Version of Record, VoR)
Licence: Creative Commons
Size: 1.23 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/10281/602181
Citations
  • Scopus: N/A
  • Web of Science (ISI): N/A