Explaining Data Integration
by Xiaolan Wang, Laura Haas, and Alexandra Meliou
Abstract:
Explanations are an integral part of human behavior: people provide explanations to justify choices and actions, and seek explanations to understand the world around them. The need for explanations extends to technology, as semi-automated and fully-automated systems support crucial activities and increasingly important societal functions. The interpretability of these systems and the ability to explain their decision processes are crucial in developing trust in the systems’ function. Further, explanations provide opportunities for systems to interact with human users and obtain feedback, improving their operation. Finally, explanations allow domain experts and system developers to debug erroneous system decisions, diagnose unexpected outcomes, and improve system function. In this paper, we study and review existing data integration systems with respect to their ability to derive explanations. We present a new classification of data integration systems by their explainability and discuss the characteristics of systems within these classes. We review the types of explanations derived by the various data integration systems within each explainability class. Finally, we present a vision of the desired properties of future data integration systems with respect to explanations and discuss the challenges in pursuing this goal.
Citation:
Xiaolan Wang, Laura Haas, and Alexandra Meliou, Explaining Data Integration, IEEE Data Engineering Bulletin, vol. 41, no. 2, June 2018, pp. 47–58.
BibTeX:
@article{WangHM2018,
    Author = {Xiaolan Wang and Laura Haas and Alexandra Meliou},
    Journal = {IEEE Data Engineering Bulletin},
    Number = {2},
    Pages = {47--58},
    Title = {\href{http://sites.computer.org/debull/A18june/p47.pdf}{Explaining Data Integration}},
    Venue = {DEBul},
    Volume = {41},
    Year = {2018},
    Month = jun,
}