Gabbiadini, A., Manfredi, A., Serrao, F., & Puzella, G. (2026). When the Source Is Artificial: Negative Emotions, Intergroup Anxiety, and Threat Responses to AI-Generated Video Content. Journal of Community & Applied Social Psychology, 36(3), 1-15. https://doi.org/10.1002/casp.70266
When the Source Is Artificial: Negative Emotions, Intergroup Anxiety, and Threat Responses to AI-Generated Video Content
Alessandro Gabbiadini; Anna Manfredi; Fabrizio Serrao; Giulio Puzella
2026
Abstract
Generative artificial intelligence increasingly produces realistic video, raising questions about how people emotionally and socially respond to AI outputs. We extended prior work on text, image, and audio generation by focusing on AI-generated video and on affective and intergroup mechanisms linked to intentions to use AI. In a 2 (source: AI vs. human) × 2 (quality: flawless vs. error-containing) between-subjects design, participants (N = 138) watched a short video and reported negative emotions, identity and realistic threat, intergroup anxiety, and intentions to use AI tools. We also explored whether the presence of AI errors (hallucinations) altered affective reactions. Source attribution (AI vs. human), but not output quality or their interaction, increased negative emotions, indicating that discomfort was driven more by the AI label than by the presence of hallucinations. We then estimated a staged mediation model with source as predictor, negative emotions as a first-stage mediator, and threat perceptions and intergroup anxiety entered in parallel as downstream mediators. The only reliable indirect effect linked AI source to lower intentions via negative emotions and intergroup anxiety. These findings highlight the relevance of intergroup frameworks for understanding public responses to generative video and for guiding human–AI interface design. Adoption, trust, and communication implications are discussed.

| File | Access | Attachment type | License | Size | Format |
|---|---|---|---|---|---|
| Alessandro et al-2026-Community Applied Soc Psy-VoR.pdf | Open access | Publisher's Version (Version of Record, VoR) | Creative Commons | 415.33 kB | Adobe PDF |
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.


