Aguzzi, G., Magnini, M., Pengo, M., Viroli, M., & Montagna, S. (2025). A Fine-Tuning Pipeline with Small Conversational Data for Healthcare Chatbot. In Artificial Intelligence in Medicine: 23rd International Conference, AIME 2025, Pavia, Italy, June 23–26, 2025, Proceedings, Part II (pp. 1–5). Springer. https://doi.org/10.1007/978-3-031-95841-0_1
A Fine-Tuning Pipeline with Small Conversational Data for Healthcare Chatbot
Aguzzi, G.; Magnini, M.; Pengo, M. F.; Viroli, M.; Montagna, S.
2025
Abstract
Large language models (LLMs) have driven significant advancements in various natural language processing (NLP) tasks, proving to be a core component in the design of conversational agents. In this paper, we focus on the development of a chatbot aimed at supporting patients in managing their health conditions. In this context, while LLMs are ideal for chatbot development, relying on remote services raises concerns about privacy and reliability and entails high deployment costs. Smaller models offer a more practical alternative, but they often produce suboptimal results with in-context learning, especially when only limited conversational data are available. To address these challenges, we propose a pipeline for fine-tuning smaller models, thereby enabling style transfer toward physician-like replies. A key component of this pipeline is a data augmentation module that leverages LLMs to generate synthetic data, thus expanding the typically small original dataset of patient question-physician answer pairs. We evaluate this approach on a hypertension-related conversational dataset, showing that the fine-tuned model outperforms the baseline in both automatic metrics and human evaluation.
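The augmentation step the abstract describes can be illustrated with a minimal sketch: a general-purpose instruction-tuned LLM paraphrases each physician answer to grow a small question-answer dataset before the smaller model is fine-tuned. The model name, prompt wording, and sampling settings below are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch of LLM-based data augmentation for a small dataset of
# patient question-physician answer pairs. The augmenter model and prompt are
# placeholders; the paper's actual setup may differ.
from transformers import pipeline

# Any instruction-tuned causal LM can act as the augmenter (assumed model).
augmenter = pipeline("text-generation", model="Qwen/Qwen2.5-1.5B-Instruct")

seed_pairs = [
    ("My blood pressure readings are high in the morning. Should I worry?",
     "Morning peaks can occur; keep a log for a week and share it with your doctor."),
]

augmented = list(seed_pairs)
for question, answer in seed_pairs:
    prompt = (
        "Rewrite the following physician reply, keeping the medical content "
        f"and professional tone:\n{answer}\nRewritten reply:"
    )
    out = augmenter(prompt, max_new_tokens=80, do_sample=True, temperature=0.9)
    # The pipeline returns prompt + continuation; keep only the continuation.
    paraphrase = out[0]["generated_text"][len(prompt):].strip()
    augmented.append((question, paraphrase))  # same question, synthetic answer

print(f"{len(seed_pairs)} seed pairs -> {len(augmented)} pairs after augmentation")
```

The fine-tuning stage would then train the smaller model on the augmented pairs; any standard supervised fine-tuning setup (e.g., Hugging Face's Trainer or TRL's SFTTrainer) could consume data in this form.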
| File | Attachment type | License | Size | Format | Access |
|---|---|---|---|---|---|
| Aguzzi-2025-Lecture Notes in Computer Science-VoR.pdf | Publisher's Version (Version of Record, VoR) | All rights reserved | 944.29 kB | Adobe PDF | Repository managers only |
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.


