Brusa, L., Bartolucci, F., & Pennoni, F. (2022). Alternative methods for parameter estimation in discrete latent variable models. In Programme and Abstracts of the 16th International Conference on Computational and Financial Econometrics (CFE 2022) and 15th International Conference of the ERCIM (European Research Consortium for Informatics and Mathematics) Working Group on Computational and Methodological Statistics (CMStatistics 2022), p. 113.
Alternative methods for parameter estimation in discrete latent variable models
Luca Brusa; Fulvia Pennoni
2022
Abstract
The Expectation-Maximization (EM) algorithm is undoubtedly one of the most widely used techniques for estimating discrete latent variable (DLV) models. However, although the algorithm provably converges to a local maximum of the log-likelihood function, there is no guarantee that it reaches the global maximum. We propose two modifications of the EM algorithm to tackle this serious problem. The first incorporates a tempering scheme into the EM algorithm: the log-likelihood is initially flattened to escape local maxima and then gradually warped back to its original shape. The second uses evolutionary computation to encourage a more thorough exploration of the parameter space. The performance of the resulting tempered EM (T-EM) and evolutionary EM (E-EM) algorithms is assessed for latent class and hidden Markov models in terms of both the ability to reach the global maximum and the computational time; a comparison with the standard EM algorithm is carried out through an extensive Monte Carlo simulation study. We show that the proposed algorithms outperform the standard EM, significantly increasing the chance of reaching the global maximum in almost all the cases examined. This improvement remains considerable even after accounting for the increased overall computing time.
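To make the tempering idea concrete, the following is a minimal sketch of a tempered EM iteration for a latent class model with binary items: the E-step posteriors are computed from the log-joint divided by a temperature T, which flattens the likelihood surface early on, and T decreases toward 1, recovering the standard EM. The geometric temperature profile, the function name `tempered_em_lc`, and all defaults are illustrative assumptions, not the authors' exact specification.

```python
import numpy as np

def tempered_em_lc(Y, k, n_iter=500, T0=10.0, rho=0.99, rng=None):
    """Sketch of tempered EM for a latent class model with binary items.

    Y: (n, p) binary data matrix; k: number of latent classes.
    The temperature T = max(1, T0 * rho**it) is an illustrative choice.
    """
    rng = np.random.default_rng(rng)
    n, p = Y.shape
    pi = np.full(k, 1.0 / k)                  # class weights
    theta = rng.uniform(0.3, 0.7, (k, p))     # item-response probabilities

    for it in range(n_iter):
        T = max(1.0, T0 * rho**it)            # annealing temperature

        # E-step: tempered posterior class probabilities,
        # proportional to (pi_k * f_k(y))^(1/T)
        log_lik = Y @ np.log(theta).T + (1 - Y) @ np.log(1 - theta).T
        log_post = (np.log(pi) + log_lik) / T
        log_post -= log_post.max(axis=1, keepdims=True)  # stabilize exp
        post = np.exp(log_post)
        post /= post.sum(axis=1, keepdims=True)

        # M-step: standard weighted updates
        nk = post.sum(axis=0)
        pi = nk / n
        theta = np.clip((post.T @ Y) / nk[:, None], 1e-6, 1 - 1e-6)

    return pi, theta
```

With `T0 = 1` the temperature is constant at 1 and the sketch reduces to the standard EM algorithm, i.e. the baseline against which T-EM and E-EM are compared.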
File | Description | Access | Attachment type | License | Size | Format
---|---|---|---|---|---|---
Brusa-2022-ERCIM-slides.pdf | Slides of the presentation | Open access | Publisher's Version (Version of Record, VoR) | Creative Commons | 279.95 kB | Adobe PDF
Brusa-2022-ERCIM-Abstract.pdf | Abstract | Archive administrators only | Publisher's Version (Version of Record, VoR) | All rights reserved | 1.08 MB | Adobe PDF
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.