
Aguzzi, G., Magnini, M., Pengo, M., Viroli, M., Montagna, S. (2025). A Fine-Tuning Pipeline with Small Conversational Data for Healthcare Chatbot. In Artificial Intelligence in Medicine 23rd International Conference, AIME 2025, Pavia, Italy, June 23–26, 2025, Proceedings, Part II (pp. 1-5). Springer Science and Business Media Deutschland GmbH [10.1007/978-3-031-95841-0_1].

A Fine-Tuning Pipeline with Small Conversational Data for Healthcare Chatbot

Aguzzi, G.; Magnini, M.; Pengo, M. F.; Viroli, M.; Montagna, S.
2025

Abstract

Large language models (LLMs) have driven significant advances in a wide range of natural language processing (NLP) tasks and have emerged as a core component in the design of conversational agents. In this paper, we focus on the development of a chatbot aimed at supporting patients in managing their health conditions. In this context, while LLMs are well suited to chatbot development, relying on remote services raises concerns about privacy, reliability, and deployment costs. Smaller models offer a more practical alternative, but they often produce suboptimal results with in-context learning, especially when only limited conversational data are available. To address these challenges, we propose a pipeline for fine-tuning smaller models, thereby enabling style transfer toward physician-like replies. A key component of this pipeline is a data augmentation module that leverages LLMs to generate synthetic data, expanding the typically small original dataset of patient questions and physician answers. We evaluate this approach on a hypertension-related conversational dataset, showing that the fine-tuned model outperforms the baseline in both automatic metrics and human evaluation.
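
The abstract only outlines the pipeline, but its two stages (LLM-based augmentation of a small question-answer dataset, followed by supervised fine-tuning of a smaller model) can be sketched in code. The following Python sketch is hypothetical: it assumes a Hugging Face transformers setup, and the model names (gpt2 standing in for the augmenting LLM, distilgpt2 for the smaller model), the paraphrasing prompt, and all hyperparameters are illustrative placeholders, not the configuration reported in the paper.

    # Hypothetical sketch of the two-stage pipeline the abstract describes:
    # (1) augment a small set of patient-question / physician-answer pairs
    # with LLM-generated paraphrases, (2) fine-tune a smaller causal LM on
    # the expanded set so its replies shift toward a physician-like style.
    # All model names, prompts, and hyperparameters are placeholders.
    from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                              TrainingArguments, pipeline)
    from datasets import Dataset

    augmenter = pipeline("text-generation", model="gpt2")  # placeholder LLM

    def augment(pairs, n_variants=2):
        """Expand (question, answer) pairs with synthetic paraphrases."""
        out = list(pairs)
        for q, a in pairs:
            prompt = (f"Paraphrase this exchange, keeping the physician-like "
                      f"tone.\nPatient: {q}\nPhysician: {a}\nPatient:")
            for gen in augmenter(prompt, num_return_sequences=n_variants,
                                 max_new_tokens=80, do_sample=True):
                cont = gen["generated_text"][len(prompt):]
                if "Physician:" in cont:  # naive parse of the continuation
                    new_q, new_a = cont.split("Physician:", 1)
                    out.append((new_q.strip(), new_a.strip()))
        return out

    small = "distilgpt2"  # stand-in for the smaller, locally deployable model
    tok = AutoTokenizer.from_pretrained(small)
    tok.pad_token = tok.eos_token
    model = AutoModelForCausalLM.from_pretrained(small)

    def to_features(row):
        text = f"Patient: {row['q']}\nPhysician: {row['a']}{tok.eos_token}"
        enc = tok(text, truncation=True, max_length=256, padding="max_length")
        enc["labels"] = enc["input_ids"].copy()  # standard causal-LM labels
        return enc

    seed = [("What should my blood pressure be?",
             "For most adults we aim below 130/80 mmHg; let us review your "
             "readings together.")]
    data = Dataset.from_list([{"q": q, "a": a} for q, a in augment(seed)])
    data = data.map(to_features, remove_columns=["q", "a"])

    Trainer(model=model,
            args=TrainingArguments(output_dir="ft-healthcare-chatbot",
                                   num_train_epochs=3,
                                   per_device_train_batch_size=2),
            train_dataset=data).train()

In the actual pipeline the augmentation module would presumably call a far stronger instruction-tuned LLM and parse its output more robustly; the stub above only mirrors the dataflow from a small seed set to an augmented fine-tuning corpus.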
Type: paper
Keywords: Fine tuning; Patient Self-management; Small Language Models
Language: English
Conference: 23rd International Conference, AIME 2025, June 23–26, 2025
Editors: Bellazzi, R.; Juarez Herrero, J. M.; Sacchi, L.; Zupan, B.
Book title: Artificial Intelligence in Medicine 23rd International Conference, AIME 2025, Pavia, Italy, June 23–26, 2025, Proceedings, Part II
ISBN: 9783031958403
Publication date: 22 June 2025
Year: 2025
Series volume: 15735
First page: 1
Last page: 5
Rights: reserved
Files in this record:
File: Aguzzi-2025-Lecture Notes in Computer Science-VoR.pdf
Access: archive administrators only
Attachment type: Publisher's Version (Version of Record, VoR)
License: all rights reserved
Size: 944.29 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/10281/575462
Citations
  • Scopus: 0
  • Web of Science: 0