Boselli, R., Cesarini, M., Mercorio, F., Mezzanzanica, M. (2014). Towards data cleansing via planning. INTELLIGENZA ARTIFICIALE, 8(1), 57-69 [10.3233/IA-140061].
Towards data cleansing via planning
Boselli, R.; Cesarini, M.; Mercorio, F.; Mezzanzanica, M.
2014
Abstract
Nowadays, Information Systems generate large amounts of data to support the activities of firms, organisations, and state agencies. While such data are primarily collected to realise domain-specific services (e.g., state agencies use data to manage healthcare and retirement contributions), domain analysts also aim to use them to study the dynamics of subjects' behaviours or phenomena over time. The quality of the data therefore plays a key role in ensuring the effectiveness of the overall knowledge discovery process. In this context, most of the research on data quality is aimed at automatically identifying cleansing activities, namely sequences of actions able to cleanse a dirty dataset, which are otherwise developed and coded manually, requiring significant effort from domain experts. This work is concerned with using AI Planning both to model data quality requirements and to automatically identify cleansing activities. To this end, we formalise the concept of a cost-optimal Universal Cleanser - a collection of the best cleansing actions for each identified data inconsistency - as a planning problem; we then present a motivating government application in which data have been cleansed accordingly, making both the source and cleansed datasets publicly available for download.
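To give a rough intuition of the cost-optimal cleansing idea described in the abstract, the sketch below uses a toy uniform-cost search in Python: given a dirty record and a set of cleansing actions with costs, it returns a cheapest sequence of actions that restores consistency. All names and rules in the sketch (is_consistent, ACTIONS, the toy career-record constraint) are hypothetical illustrations, not the paper's actual planning formalisation or domain model.

# Illustrative sketch only: a cheapest cleansing plan is found by uniform-cost
# search over hypothetical cleansing actions. Names and rules are made up for
# illustration and do not come from the paper.
import heapq
from itertools import count

def is_consistent(rec):
    status, has_end_date = rec
    # Toy consistency rule: a "closed" career record must carry an end date.
    return not (status == "closed" and not has_end_date)

# Cleansing actions: name -> (cost, state-transition function).
ACTIONS = {
    "add_end_date":  (1.0, lambda r: (r[0], True)),
    "reopen_record": (2.0, lambda r: ("open", r[1])),
}

def cheapest_cleansing_plan(dirty):
    """Uniform-cost search over cleansing actions (a minimal stand-in for a planner)."""
    tie = count()
    frontier = [(0.0, next(tie), dirty, [])]
    seen = set()
    while frontier:
        cost, _, rec, plan = heapq.heappop(frontier)
        if is_consistent(rec):
            return cost, plan
        if rec in seen:
            continue
        seen.add(rec)
        for name, (c, apply_action) in ACTIONS.items():
            heapq.heappush(frontier, (cost + c, next(tie), apply_action(rec), plan + [name]))
    return None

if __name__ == "__main__":
    dirty = ("closed", False)              # inconsistent: closed but no end date
    print(cheapest_cleansing_plan(dirty))  # -> (1.0, ['add_end_date'])

In this reading, a "Universal Cleanser" can be thought of as a lookup from each class of inconsistency to the cheapest repairing plan computed this way; the paper itself obtains these plans by encoding the problem for an AI planner rather than with an ad hoc search.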