Oswald Barral, Hyeju Jang, Sally Newton-Mason, Sheetal Shajan, Thomas Soroski, Giuseppe Carenini, Cristina Conati, Thalia Field, Proceedings of the 5th Machine Learning for Healthcare Conference, PMLR 126:813-841, 2020.
Alzheimer’s disease (AD) is an insidious, progressive neurodegenerative disease resulting in impaired cognition, dementia, and eventual death. At the earliest stages of the disease, decline occurs in multiple cognitive domains, including speech and eye movements, and worsens with disease progression. Investigating speech and eye movements is therefore a promising non-invasive approach to early classification of AD.
While related work has investigated AD classification using speech collected during spontaneous speech tasks, no prior research has studied the utility of eye movements and their combination with speech for this classification task. In this paper, we present classification experiments with speech and eye movement data collected from 68 memory clinic patients (with a diagnosis of AD, mixed dementia, mild cognitive impairment, or subjective memory complaints) and 73 healthy volunteers completing the Cookie Theft picture description task.
We show that eye tracking data is predictive of AD in a patient versus control classification task (AUC = .73). Furthermore, we show that eye tracking data is complementary to speech for this predictive task, as combining both modalities yields the best classification performance (AUC = .80). Our results suggest that eye tracking is a useful modality for classification of AD, most promising when considered as an additional non-invasive modality alongside speech-based classification.
Photo of the team presenting the paper at the 2020 Machine Learning for Healthcare virtual conference!