
Cabitza, F., Dui, L., Banfi, G. (2019). PROs in the wild: Assessing the validity of patient reported outcomes in an electronic registry. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE, 181 [10.1016/j.cmpb.2019.01.009].

PROs in the wild: Assessing the validity of patient reported outcomes in an electronic registry

Cabitza, Federico; Banfi, Giuseppe
2019

Abstract

Background and objectives: Collecting Patient-Reported Outcomes (PROs) is an important way to obtain first-hand information from patients on the outcome of the treatments and surgical procedures they have undergone, and hence on the quality of the care provided. However, the quality of PRO data cannot be taken for granted, nor can it be reduced to the dimensions of timeliness and completeness alone. While the reliability of these data can be guaranteed by adopting standard, validated questionnaires that are used across health care facilities all over the world, the facilities themselves must take responsibility for assessing, monitoring and ensuring the validity of the PROs collected from their patients. Validity is affected by biases hidden in the collected data. This contribution therefore aims to measure bias in PRO data, given the impact that these data can have on clinical research and post-marketing surveillance. Methods: We considered the main biases that can affect PRO validity: Response bias, in terms of Acquiescence bias and Fatigue bias; and Non-Response bias. To assess Acquiescence bias, phone interviews and online surveys were compared, adjusting for age. To assess Fatigue bias, we proposed a specific item about session length and compared PRO scores stratified according to the responses to this item. We also calculated intra-patient agreement by devising an intra-interview test-retest. To assess Non-Response bias, we treated patients who participated after the saturation of the response-rate curve as a proxy for potential non-respondents and compared the outcomes in these two strata. All methods rely on common statistical techniques and are cost-effective at any facility collecting PRO data. Results: Acquiescence bias resulted in significantly different scores between patients reached by phone and those reached by email. With regard to Fatigue bias, stratification by perceived fatigue yielded contrasting results.
A relevant difference was found in intra-patient agreement, along with an increasing difference in average scores as a function of interview length (or completion time). With regard to Non-Response bias, we found non-significant differences in both scores and variance. Conclusions: In this paper, we present a set of cost-effective techniques to assess the validity of retrospective PRO data and share some lessons learnt from their application at a large teaching hospital specialized in musculoskeletal disorders, which collects PRO data during the follow-up phase of surgery performed therein. The main finding suggests that response bias can affect PRO validity. Further research on the effectiveness of simple and cost-effective solutions is necessary to mitigate these biases and improve the validity of PRO data.
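The stratified comparisons described in the Methods (e.g. phone-interview scores vs. online-survey scores for probing Acquiescence bias) can be sketched with elementary statistics. The snippet below is a minimal illustration, not the authors' code: the data, variable names, and the choice of Welch's t statistic are assumptions for demonstration only; the paper does not publish its analysis scripts.

```python
import math

def welch_t(a, b):
    """Welch's t statistic for two independent samples of PRO scores.

    Illustrative only: compares the mean score of two strata
    (e.g. interviewer-administered vs. self-administered responses).
    """
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    # Unbiased sample variances of each stratum
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    return (ma - mb) / math.sqrt(va / len(a) + vb / len(b))

# Hypothetical PRO scores on a 0-100 scale (made-up numbers):
phone  = [82, 88, 90, 85, 91, 87, 89]   # reached by phone interview
online = [75, 80, 78, 83, 76, 79, 81]   # reached by online survey
t = welch_t(phone, online)
# A large positive t here would be consistent with acquiescence bias
# inflating interviewer-administered scores.
```

In practice one would use a library routine (e.g. a two-sample test with unequal variances) and, as the abstract notes, adjust the comparison for age before drawing conclusions.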
Journal article - Scientific article
Acquiescence bias; Fatigue bias; Medical registry; Non-Response bias; Patient reported outcomes; Response bias; Validity; Software; Computer Science Applications; Computer Vision and Pattern Recognition; Health Informatics
English
17-Jan-2019
2019
181
UNSP 104837
reserved
Files in this product:
2019-CMPB.pdf — Publisher's Version (Version of Record, VoR), Adobe PDF, 1.97 MB (restricted: archive managers only; a copy can be requested)

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/10281/217945
Citations
  • Scopus 12
  • Web of Science 10