An Automated Approach to Assessing an Application Tutorial’s Difficulty
Abstract
Online step-by-step text and video tutorials play an integral role in learning feature-rich software applications. However, when searching, users can find it difficult to assess whether a tutorial is designed for their level of software expertise. Novice users can struggle when a tutorial is out of their reach, whereas more advanced users can end up wasting time with overly simple, first-principles instruction. To assist users in selecting tutorials, we investigate the feasibility of using machine-learning techniques to automatically assess a tutorial’s difficulty. Using Photoshop as our primary testbed, we develop a set of distinguishable tutorial features, and use these features to train a classifier that can label a tutorial as either Beginner or Advanced with 85% accuracy. To illustrate a potential application, we develop a tutorial browsing interface called TutVis. Our initial user evaluation provides insight into TutVis’s ability to support users in a range of tutorial selection scenarios.
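As a rough illustration of the kind of pipeline the abstract describes, the sketch below trains a binary difficulty classifier on a handful of hypothetical per-tutorial features. The feature names, example values, and choice of logistic regression are assumptions made here for illustration only; the paper's actual feature set, classifier, and reported 85% accuracy are described in the publication itself.

```python
# Illustrative sketch only: a minimal Beginner/Advanced classifier trained on
# hypothetical per-tutorial features (step count, wording complexity, and
# number of distinct tools referenced). Not the authors' implementation.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical feature vectors for four tutorials.
X = np.array([
    [5,  0.20, 3],   # short, simple wording, few tools   -> Beginner
    [7,  0.25, 4],
    [28, 0.70, 15],  # long, complex wording, many tools  -> Advanced
    [35, 0.80, 18],
])
y = np.array([0, 0, 1, 1])  # 0 = Beginner, 1 = Advanced

clf = LogisticRegression().fit(X, y)
print(clf.predict([[30, 0.75, 16]]))  # expected: [1] (Advanced)
```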
Citation
S. A. Sabab, A. Khan, P. K. Chilana, J. McGrenere and A. Bunt, "An Automated Approach to Assessing an Application Tutorial’s Difficulty," 2020 IEEE Symposium on Visual Languages and Human-Centric Computing (VL/HCC), Dunedin, New Zealand, 2020, pp. 1-10, doi: 10.1109/VL/HCC50065.2020.9127271.
Authors
Shahed Anzarus Sabab (Alumni)
Adnan Alam Khan (Alumni)
Andrea Bunt (Professor)
As well as: Parmit K. Chilana and Joanna McGrenere