Controllable AI - An Alternative to Trustworthiness in Complex AI Systems?

Kieseberg, P.; Weippl, E.; Tjoa, A.; Cabitza, F.; Campagner, A.; Holzinger, A.
2023

Abstract

The release of ChatGPT to the general public has sparked widespread discussion about the dangers of artificial intelligence (AI). The European Commission’s draft of the AI Act has further fueled this debate, particularly regarding the definition of AI and the assignment of risk levels to different technologies. Security concerns in AI systems arise from the need to protect against potential adversaries and to safeguard individuals from AI decisions that may harm their well-being. Ensuring secure and trustworthy AI systems is challenging, however, especially with deep learning models that lack explainability. This paper proposes the concept of Controllable AI as an alternative to Trustworthy AI and explores the major differences between the two. The aim is to initiate a discussion on securing complex AI systems without sacrificing practical capabilities or transparency. The paper provides an overview of techniques that can be employed to achieve Controllable AI, covering the background definitions of explainability, Trustworthy AI, and the AI Act. The principles and techniques of Controllable AI are then detailed, including detecting and managing control loss, implementing transparent AI decisions, and addressing intentional bias or backdoors. The paper concludes by discussing potential applications of Controllable AI and their implications for real-world scenarios.
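The abstract’s mention of "detecting and managing control loss" suggests a runtime-monitoring pattern. As a minimal, hypothetical sketch (not taken from the paper; the threshold, function names, and deferral policy are all illustrative assumptions), one could watch a classifier’s predictive entropy and hand the decision back to a human operator whenever uncertainty grows too large:

    import numpy as np

    def predictive_entropy(probs: np.ndarray) -> float:
        """Shannon entropy (in nats) of a class-probability vector."""
        probs = np.clip(probs, 1e-12, 1.0)  # guard against log(0)
        return float(-np.sum(probs * np.log(probs)))

    def controlled_decision(probs: np.ndarray, max_entropy: float = 0.5) -> dict:
        """Act on the model's prediction, or defer to a human when uncertain.

        `max_entropy` is an assumed tunable threshold, not a value from the
        paper; recording the entropy alongside the action also supports the
        'transparent AI decisions' principle mentioned in the abstract.
        """
        h = predictive_entropy(probs)
        if h > max_entropy:
            return {"action": "defer_to_human", "entropy": h}
        return {"action": int(np.argmax(probs)), "entropy": h}

    # A confident prediction is acted on; an ambiguous one is deferred.
    print(controlled_decision(np.array([0.97, 0.02, 0.01])))  # low entropy -> class 0
    print(controlled_decision(np.array([0.40, 0.35, 0.25])))  # high entropy -> defer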
paper
AI risks; AI threats; Artificial Intelligence; Digital Transformation; Explainability; Explainable AI; Robustness; Safety; Security; Trustworthy AI;
English
Machine Learning and Knowledge Extraction 7th IFIP TC 5, TC 12, WG 8.4, WG 8.9, WG 12.9 International Cross-Domain Conference, CD-MAKE 2023 - 28 August 2023 through 1 September 2023
2023
Machine Learning and Knowledge Extraction 7th IFIP TC 5, TC 12, WG 8.4, WG 8.9, WG 12.9 International Cross-Domain Conference, CD-MAKE 2023, Benevento, Italy, August 29 – September 1, 2023, Proceedings
9783031408366
2023
14065 LNCS
1
12
open
Kieseberg, P., Weippl, E., Tjoa, A., Cabitza, F., Campagner, A., Holzinger, A. (2023). Controllable AI - An Alternative to Trustworthiness in Complex AI Systems? In Machine Learning and Knowledge Extraction 7th IFIP TC 5, TC 12, WG 8.4, WG 8.9, WG 12.9 International Cross-Domain Conference, CD-MAKE 2023, Benevento, Italy, August 29 – September 1, 2023, Proceedings (pp. 1-12). Springer Science and Business Media Deutschland GmbH [10.1007/978-3-031-40837-3_1].
Files in this record:

File: Kieseberg-2023-CDMAKE-VoR.pdf
Access: open access
Attachment type: Publisher’s Version (Version of Record, VoR)
License: Creative Commons
Size: 267.43 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/10281/456598
Citations
  • Scopus: 3
  • Web of Science (ISI): ND