Cabitza, F., Alderighi, C., Rasoini, R., Gensini, G. (2017). “Handle with care”: About the potential unintended consequences of oracular artificial intelligence systems in medicine. RECENTI PROGRESSI IN MEDICINA, 108(10), 397-401 [10.1701/2802.28353].

“Handle with care”: About the potential unintended consequences of oracular artificial intelligence systems in medicine

Cabitza, F.; Alderighi, C.; Rasoini, R.; Gensini, G.
2017

Abstract

Decision support systems based on machine learning (ML) are attracting growing interest in medicine, as recent articles have highlighted the high diagnostic accuracy these systems exhibit in specific medical contexts. However, it is implausible that such advantages come without drawbacks. In light of the current gaps in medical research on the side effects of applying these new AI systems in medical practice, in this article we summarize the main unintended consequences that may result from the widespread adoption of "oracular" systems, that is, highly accurate systems that cannot give reasonable explanations for their advice, as systems built on predictive models developed with ML techniques usually cannot. These consequences range from the intrinsic uncertainty of the data used to train and feed these systems, to the inadequate explainability of their output, through the risks of overreliance, deskilling and context desensitization among their end users. Although some of these issues may currently be hard to evaluate owing to the still scarce adoption of these decision support systems in medical practice, we advocate the study of these potential consequences, also to inform approval policies beyond both hype and disenchantment.
Type: Journal article - Scientific article
Keywords: machine learning; decision support; health informatics; automation bias
Language: Italian
Year: 2017
Volume: 108
Issue: 10
Pages: 397-401


Use this identifier to cite or link to this document: https://hdl.handle.net/10281/185250
Citations
  • Scopus: 9
  • ISI (Web of Science): not available