Candelieri, A., Signori, E. (2025). MLE-Free Gaussian Process Based Bayesian Optimization. In Learning and Intelligent Optimization: 18th International Conference, LION 18, Ischia Island, Italy, June 9–13, 2024, Revised Selected Papers (pp. 81–95). Springer Science and Business Media Deutschland GmbH [10.1007/978-3-031-75623-8_7].
MLE-Free Gaussian Process Based Bayesian Optimization
Candelieri A.; Signori E.
2025
Abstract
Gaussian Process based Bayesian Optimization, also known as kernelized bandits, is the standard method for sequentially optimizing black-box, expensive-to-evaluate functions. Its sample efficiency is the reason behind its success. However, due to the need to fit a Gaussian Process model at each iteration, its computational cost increases cubically with the number of function evaluations, and numerical instability can arise, especially in the noise-free setting. Moreover, selecting the most suitable trade-off mechanism between exploration and exploitation is not trivial, and theoretical convergence proofs for well-known acquisition functions only hold under the (impractical) assumption of knowing in advance the specific Reproducing Kernel Hilbert Space to which the objective function belongs. We propose a novel method for fitting the Gaussian Process in constant time, independently of the number of function evaluations, while also dealing with instability and the exploration-exploitation trade-off. This leads to a novel Bayesian optimization framework in which modelling of the objective function and acquisition are simultaneously addressed by fitting the Gaussian Process. Empirical results on a preliminary set of well-differentiated test problems show that the proposed algorithm provides higher effectiveness and efficiency than the best-known no-regret algorithm (i.e., GP-LCB).
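The abstract does not describe the proposed MLE-free method in detail, but it names the GP-LCB baseline and the cubic fitting cost. A minimal sketch of that baseline, assuming an RBF kernel and exact GP regression (names `rbf_kernel`, `gp_posterior`, `lcb`, and the `lengthscale`/`beta` parameters are illustrative, not from the paper):

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=1.0):
    # Squared-exponential (RBF) kernel matrix between point sets A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale ** 2)

def gp_posterior(X, y, Xq, noise=1e-6):
    # Exact GP regression posterior at query points Xq. The Cholesky
    # factorization of the n x n kernel matrix is O(n^3) in the number
    # of evaluations n -- the cubic cost the abstract refers to; the
    # small `noise` jitter mitigates instability in the noise-free case.
    K = rbf_kernel(X, X) + noise * np.eye(len(X))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    Kq = rbf_kernel(X, Xq)
    mu = Kq.T @ alpha
    v = np.linalg.solve(L, Kq)
    var = rbf_kernel(Xq, Xq).diagonal() - (v ** 2).sum(axis=0)
    return mu, np.sqrt(np.maximum(var, 0.0))

def lcb(mu, sigma, beta=2.0):
    # Lower Confidence Bound acquisition (for minimization): the next
    # point minimizes mu - sqrt(beta) * sigma, trading off exploitation
    # (low mean) against exploration (high uncertainty).
    return mu - np.sqrt(beta) * sigma
```

In standard GP-LCB, `beta` controls the exploration-exploitation trade-off and the kernel hyperparameters are typically re-fitted by maximum likelihood at each iteration, which is precisely the per-iteration cost the paper aims to remove.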