Rocchi, N., Stella, F., De Campos, C. (2025). Towards Privacy-Aware Bayesian Networks: A Credal Approach. In 28th European Conference on Artificial Intelligence, 25–30 October 2025, Bologna, Italy – Including 14th Conference on Prestigious Applications of Intelligent Systems (PAIS 2025) (pp. 5043–5050). IOS Press BV [10.3233/FAIA251419].
Towards Privacy-Aware Bayesian Networks: A Credal Approach
Rocchi, N.; Stella, F.; De Campos, C.
2025
Abstract
Bayesian networks (BN) are versatile probabilistic graphical models that enable efficient knowledge representation and inference. These models have proven effective across diverse domains, including healthcare, bioinformatics, economics, law, and image processing. The structure and parameters of a BN can be obtained by domain experts or directly learned from available data. However, as privacy concerns escalate, it becomes increasingly critical for publicly released models to safeguard sensitive information in training data. Typically, released models do not prioritize privacy by design, and the issue equally affects BNs. In particular, tracing attacks from adversaries can combine the released BN with auxiliary data to determine whether specific individuals belong to the data from which the BN was learned. The current approach to addressing this privacy issue involves introducing noise into the learned parameters. While this method offers robust protection against tracing attacks, it also significantly impacts the model's utility, in terms of both the significance and accuracy of the resulting inferences. Hence, high privacy may be attained, but at the cost of releasing a possibly ineffective model. This paper introduces credal networks (CN) as a novel and practical solution for balancing the model's privacy and utility. Specifically, after adapting the notion of tracing attacks, we demonstrate that a CN enables the masking of the learned BN, thereby reducing the probability of successful tracing attacks. As CNs are obfuscated but not noisy versions of BNs, they can achieve meaningful inferences while safeguarding the privacy of the released model. Moreover, we identify key learning information that must be concealed to prevent attackers from recovering the BN underlying the released CN. Finally, we conduct a set of numerical experiments to analyze how privacy gains can be modulated by tuning the CN hyperparameters. 
Our results confirm that CNs provide a principled, practical, and effective approach towards the development of privacy-aware probabilistic graphical models.

| File | Size | Format |
|---|---|---|
| Rocchi et al-2025-FAIA-VoR.pdf (open access; Publisher's Version, Version of Record, VoR; Creative Commons license) | 421.41 kB | Adobe PDF |
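The abstract notes that privacy gains can be modulated by tuning the CN hyperparameters. A minimal illustrative sketch of this idea, using the imprecise Dirichlet model (a standard construction of credal sets from counts; not necessarily the paper's exact method — the function name and the parameter `s` are assumptions for illustration):

```python
def idm_interval(count, total, s=2.0):
    """Lower and upper probability for one outcome under the
    imprecise Dirichlet model with equivalent sample size s.
    Larger s yields a wider interval, masking the point estimate
    count/total more strongly."""
    lower = count / (total + s)
    upper = (count + s) / (total + s)
    return lower, upper

# Example: 30 of 100 training records exhibit the outcome.
for s in (1.0, 2.0, 8.0):
    lo, hi = idm_interval(30, 100, s)
    print(f"s={s}: [{lo:.3f}, {hi:.3f}]")
```

Each interval contains the learned point estimate, so inferences remain meaningful, while the interval width `s / (total + s)` grows with `s` — consistent with the abstract's claim that the released credal model obfuscates, rather than adds noise to, the underlying Bayesian network parameters.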
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.


