Milella, F., Cabitza, F. (2026). Perceiving AI as an Epistemic Authority or Algority: A User Study on the Human Attribution of Authority to AI. MACHINE LEARNING AND KNOWLEDGE EXTRACTION, 8(2), 1-22 [10.3390/make8020036].

Perceiving AI as an Epistemic Authority or Algority: A User Study on the Human Attribution of Authority to AI

Milella, F. (first author); Cabitza, F. (second author)
2026

Abstract

The increasing integration of artificial intelligence (AI) in decision-making processes has amplified discussions surrounding algorithmic authority—the perceived epistemic legitimacy of AI systems over human judgment. This study investigates how individuals attribute epistemic authority to AI, focusing on psychological, contextual, and sociotechnical factors. Existing research highlights the importance of trust in automation, perceived performance, and moral frameworks in shaping such attributions. Unlike prior conceptual or philosophical accounts of algorithmic authority, our study adopts a relational and empirically grounded perspective by operationalizing algority through psychometric measures and contextual assessments. To address knowledge gaps in the micro-level dynamics of this phenomenon, we conducted an empirical study using psychometric tools and scenario-based assessments. Here, we report key findings from a survey of 610 participants, revealing significant correlations between trust in automation (TiA), perceptions of automated performance (PAS), and the propensity to defer to AI, particularly in high-stakes scenarios like criminal justice and job-matching. Trust in automation emerged as a primary factor, while moral attitudes moderated deference in ethically sensitive contexts. Our findings highlight the practical relevance of transparency and explainability for supporting critical engagement with AI outputs and for informing the design of contextually appropriate decision support. This study contributes to understanding algorithmic authority as a multidimensional construct, offering empirically grounded insights for designing AI systems that are trustworthy and context-sensitive.
Journal article - Scientific article
Keywords: algorithmic authority; algority; decision-making; epistemic authority; human–AI interaction
Language: English
Publication date: 5 February 2026
Year: 2026
Volume: 8
Issue: 2
Pages: 1-22
Article number: 36
Access: open
Files in this product:
File: MIlella-Cabitza-2026-Mach. Learn. Knowl. Extr.-VoR.pdf
Open access
Attachment type: Publisher's Version (Version of Record, VoR)
License: Creative Commons
Size: 541.27 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/10281/588222
Citations
  • Scopus 0
  • Web of Science (ISI) 0