Supporting User Critiques of AI Systems via Training Dataset Explanations: Investigating Critique Properties and the Impact of Presentation Style

Training data has a profound impact on the performance of Machine Learning (ML) systems. To help interested parties develop informed trust and appropriate reliance, recent work has proposed providing users with training dataset explanations. While this prior work showed promising subjective impacts of increasing system transparency in this manner, little is known about how users might use this information to critique a system. In this work, we investigate how two presentation styles for such explanations (a narrative-driven Data Story and a Q&A format) support users' critique of an automated hiring system. Findings from a between-subjects study with 39 participants provide insights into the aspects of training dataset explanations that participants leveraged in their critiques. Our findings also show that presentation style can impact critique emphasis, critique accuracy, and subjective impressions of explanation utility.

https://ieeexplore.ieee.org/document/10714581

A. I. Anik and A. Bunt, "Supporting User Critiques of AI Systems via Training Dataset Explanations: Investigating Critique Properties and the Impact of Presentation Style," 2024 IEEE Symposium on Visual Languages and Human-Centric Computing (VL/HCC), Liverpool, United Kingdom, 2024, pp. 134-147, doi: 10.1109/VL/HCC60511.2024.00024.

Related Projects

Data-Centric Explanations

Authors

Ariful Islam Anik
PhD Student

Andrea Bunt
Professor