The Latent Class (LC) model is one of the most well-known latent variable models; it is very popular for the analysis of categorical response variables, and it is typically used to cluster subjects by assuming the existence of individual-specific latent variables having a discrete distribution. A Hidden (or Latent) Markov (HM) model represents a generalization of the LC model to the case of longitudinal data. It assumes the existence of a discrete latent process, generally following a first-order Markov chain, whose states correspond to subpopulations usually referred to as latent states. As typically happens for discrete latent variable models, although maximum likelihood estimation of both LC and HM model parameters can be performed rather simply using the Expectation-Maximization (EM) algorithm, a well-known drawback of this estimation method is the multimodality of the log-likelihood function. As a consequence, the estimation algorithm may converge to a local maximum that does not correspond to the global optimum. To address this multimodality problem, we propose a Tempered EM (T-EM) algorithm, which is able to explore the parameter space adequately. It consists of rescaling the objective function according to a parameter known as the temperature, which controls the prominence of global and local maxima. High temperatures allow us to explore wide regions of the parameter space, preventing the maximization algorithm from being trapped in non-global maxima; low temperatures, instead, guarantee a sharp optimization in a local region of the parameter space. By properly tuning the sequence of temperature values, the target function is gradually attracted towards the global maximum, escaping local sub-optimal solutions. We rely on an extensive Monte Carlo simulation study to compare the proposal with the standard EM algorithm, evaluating both the ability to hit the global maximum and the computational time of the proposed algorithm.
We also illustrate the proposal for both LC and HM models on discrete and continuous cross-sectional and longitudinal data in connection with some applications of interest. We conclude that the proposal outperforms the standard EM algorithm, substantially improving the chance of reaching the global maximum in the overwhelming majority of the cases considered. The advantage remains relevant even when the overall computing time is taken into account.
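The tempering idea described in the abstract can be sketched for a simple LC model with binary items: in the E-step the class posteriors are flattened by a temperature T > 1 and the temperature is gradually lowered to 1, at which point the update coincides with standard EM. The function name, the item-response parameterization, and the temperature schedule below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def tempered_em_lc(Y, k, temps, n_iter_per_temp=20, seed=0):
    """Illustrative tempered EM for a latent class model with binary items.

    Y     : (n, p) binary data matrix
    k     : number of latent classes
    temps : decreasing sequence of temperatures, ending at 1.0
    """
    rng = np.random.default_rng(seed)
    n, p = Y.shape
    pi = np.full(k, 1.0 / k)                    # class weights
    theta = rng.uniform(0.3, 0.7, size=(k, p))  # item-response probabilities

    for T in temps:
        for _ in range(n_iter_per_temp):
            # E-step: tempered posteriors -- the complete-data log-density
            # is divided by T, flattening the posterior for T > 1
            log_lik = Y @ np.log(theta).T + (1 - Y) @ np.log(1 - theta).T
            log_post = (np.log(pi) + log_lik) / T
            log_post -= log_post.max(axis=1, keepdims=True)  # stabilize
            post = np.exp(log_post)
            post /= post.sum(axis=1, keepdims=True)
            # M-step: standard weighted updates, unchanged by tempering
            Nk = post.sum(axis=0)
            pi = Nk / n
            theta = np.clip((post.T @ Y) / Nk[:, None], 1e-6, 1 - 1e-6)
    return pi, theta
```

With a schedule such as `temps=[4.0, 2.0, 1.0]`, early iterations smooth the objective so the algorithm can move between basins of attraction, while the final temperature of 1 recovers the ordinary EM updates and a sharp local fit.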

Brusa, L., Bartolucci, F., Pennoni, F. (2022). Tempered Expectation-Maximization algorithm for discrete latent variable models. In Book of Abstract - The 11th Conference of the IASC-ARS The Asian Regional Section of the International Association for Statistical Computing "Data Science Everywhere: Innovations in Statistical Computing", Imadegawa Campus, Doshisha University, Kyoto, Japan, 21-24 Feb 2022 (pp. 21-22). Kyoto: Doshisha University.

Tempered Expectation-Maximization algorithm for discrete latent variable models

Brusa, L.; Bartolucci, F.; Pennoni, F.
2022

Abstract

abstract + slide
Annealing, Global maximum, Hidden Markov model, Latent class model, Local maxima
English
11th Conference of the Asian Regional Section of the International Association for Statistical Computing (IASC-ARS)
2022
Yuichi Mori
Hiroshi Yadohisa; Tomokazu Fujino; Hidetoshi Murakami; Wataru Sakamoto; Fumitake Sakaori; Hirohito Sakurai; Yoshikazu Terada; Makoto Tomita; Hiroshi Yadohisa; Kensuke Okada; Kosuke Okusa; Koji Yamamoto; Michio Yamamoto; Yoshiro Yamamoto; Yoshitomo Akimoto
Book of Abstract - The 11th Conference of the IASC-ARS The Asian Regional Section of the International Association for Statistical Computing "Data Science Everywhere: Innovations in Statistical Computing", Imadegawa Campus, Doshisha University, Kyoto, Japan, 21-24 Feb 2022
2022
21
22
https://iasc-ars2022.org/
open
Files in this item:

Abs_IASC_Kyoto_2022.pdf
Open access
Description: Abstract
Attachment type: Publisher's Version (Version of Record, VoR)
Size: 2.54 MB
Format: Adobe PDF

IASC2022_Brusa_Slide.pdf
Open access
Description: Slides of the presentation
Attachment type: Publisher's Version (Version of Record, VoR)
Size: 285.36 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/10281/355780