Explanation Interfaces in Intelligent Interactive Systems

One challenge with interfaces that adapt their appearance or behaviour to individual user characteristics is a lack of transparency in their underlying reasoning mechanisms. Since this lack of transparency can make it difficult for users to trust and predict system behaviour, this project explores the role of explanations in mitigating these problems. Our studies to date have found that the utility of explanations depends heavily on the context and on the individual user.
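As a concrete illustration of this idea, the sketch below (in Python) pairs each system-proposed adaptation with a human-readable rationale the user can inspect before accepting it. This is a minimal, hypothetical example: the names, data, and threshold heuristic are illustrative assumptions, not the mechanisms of the systems studied in the publications below.

from dataclasses import dataclass

@dataclass
class Adaptation:
    # A proposed interface change paired with the reasoning behind it,
    # so the user can inspect why the system is suggesting it.
    change: str
    rationale: str

def suggest_promotions(usage_counts: dict[str, int], threshold: int = 10) -> list[Adaptation]:
    # Hypothetical adaptation rule: propose promoting menu items the user
    # has invoked at least `threshold` times, recording a rationale alongside.
    suggestions = []
    for item, count in usage_counts.items():
        if count >= threshold:
            suggestions.append(Adaptation(
                change=f"Move '{item}' to the top-level toolbar",
                rationale=(f"You used '{item}' {count} times recently, "
                           f"which is above the promotion threshold of {threshold}."),
            ))
    return suggestions

# Example: the system explains each suggestion rather than silently adapting.
for s in suggest_promotions({"Insert Table": 14, "Word Count": 3}):
    print(s.change)
    print("  Why?", s.rationale)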

Project Publications

Are Explanations Always Important? A Study of Deployed, Low-Cost Intelligent Interactive Systems

Andrea Bunt, Matthew Lount, and Catherine Lauzon. 2012. Are Explanations Always Important? A Study of Deployed, Low-Cost Intelligent Interactive Systems. In Proceedings of the 2012 ACM International Conference on Intelligent User Interfaces (IUI '12), 169-178.

Opportunities for User Involvement within Interface Personalization

Andrea Bunt and Michael Terry. 2009. Opportunities for User Involvement within Interface Personalization. In Proceedings of the IJCAI 2009 Workshop on Intelligence and Interaction.

Mixed-Initiative Interface Personalization as a Case Study in Usable AI

Andrea Bunt, Cristina Conati, and Joanna McGrenere. 2009. Mixed-Initiative Interface Personalization as a Case Study in Usable AI. AI Magazine, Special Issue on Usable AI, 30(4), 58-64.

Understanding the Utility of Rationale in a Mixed-Initiative System for GUI Customization

Andrea Bunt, Joanna McGrenere, and Cristina Conati. 2007. Understanding the Utility of Rationale in a Mixed-Initiative System for GUI Customization. In Proceedings of the International Conference on User Modeling (UM 2007), 147-156.

Collaborators

Andrea Bunt

Professor

Additional collaborators: Joanna McGrenere, Cristina Conati, and Catherine Lauzon