Cognitive Assessment, Music, & Alzheimer’s Disease

Currently, it is difficult to assess individuals for mild cognitive impairment (MCI) and Alzheimer’s Disease (AD) because of the infrequency of assessments; confounding alternative diagnoses (such as depression and other memory-function deficits); a lack of sophistication in current computerized assessment methods; and social factors, such as fear of performance results and a lack of awareness of the signs of disease onset. Furthermore, many of these difficulties will persist, especially in elderly communities, despite the diagnostic revolution in technically advantageous but cumbersome assessment measures, such as neuroimaging. Using a music composition interface presents value beyond the assessment itself. Music is a non-invasive alternative to traditional testing that provides an intrinsically rewarding context to encourage patient compliance in the assessment process. –excerpt from PROBLEM STATEMENT, Everyday Technologies for Alzheimer’s Care grant from the Alzheimer’s Association.

Over the past year, we have been researching the extent to which it is possible to design music applications that abstract the music creation process so that the interface the user operates can serve two functions: a tool to generate creative content, and a neuropsychological test. Our research will not only provide access to music creation, but will do so in such a way that the choices a user makes in the environment are indicative of performance in specific cognitive domains relevant to Alzheimer’s disease assessment.

Early application efforts

Our early application efforts involved users designing sounds that could then be layered into something like an audio environment. During the sound creation process, each sound being created would be paired with an abstract 3-dimensional image, generated simultaneously from the parameter changes expressed by the user. Between the act of generating audio and using it in a larger work, the user would be asked to remember pairs of associations between the created audio and the 3-dimensional images.
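The pairing scheme described above can be sketched in a few lines of Python. This is an illustrative assumption about the data flow, not our actual implementation: the function and identifier names (`generate_pairs`, `sound-i`, `image-i`) are hypothetical, and the "sound" and "image" are stand-ins for objects rendered from the same parameter vector.

```python
import random

def generate_pairs(n, n_params=4, seed=0):
    """Hypothetical sketch: each sound is driven by a parameter vector,
    and the same vector simultaneously drives an abstract 3-D visual,
    so sound i and image i form an associative pair for later recall."""
    rng = random.Random(seed)
    pairs = []
    for i in range(n):
        # One shared parameter vector generates both the audio and the visual.
        params = [rng.random() for _ in range(n_params)]
        pairs.append((f"sound-{i}", f"image-{i}", params))
    return pairs
```

In a recall phase, the application would present a sound (or image) and ask the user to pick its partner from the generated pairs.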

Most importantly, because sound parameter changes were distributed over a multitude of states accessed one after another, it was virtually impossible to substitute a different type of sound source into the application for the user to manipulate. For instance, the application is well suited to changing computerized sounds over many sets of parameter changes, where the parameters are all roughly equally important. But what happens if our neuropsychological validation indicates that we need to be working with pre-recorded melodies from the subject’s past musical experiences? The current arrangement, in which each parameter receives an equal and random distribution over a series of editing steps, no longer seems appropriate. Furthermore, the interface suffered from other design flaws, such as a limited set of user inputs being remapped to different functions at each step of the application, making it virtually impossible to quickly make changes as creative demands required.

Currently, we are pursuing the cognition work required to validate a limited scope of auditory stimuli. If we propose to do any type of cognitive measurement with music, what does that even mean? What music? The state of the art in neuropsychological assessment hasn’t moved far beyond the pencil-and-paper tests used over the past several decades, whether or not those tests are displayed on computerized interfaces. The reason for this lack of innovation is well founded: the simple stimuli used in cognitive testing are good scientific stimuli. They lack extraneous and confounding variables. They test well across the population. They are stable across a range of IQ, education, activity in daily life, and economic opportunity. Music would seem to be out of the question. On the contrary, what we are finding is that in tests that engage multiple domains of cognition (auditory and visual-spatial, for example), we may be able to use complex and culturally ingrained music with no penalty, so long as it is subordinate to a dominating non-auditory component of the test.

In our testing, visual-spatial locations are flipped one after another. Each time a location is flipped, a snippet of audio escapes. Users are required to remember the location that a particular audio snippet came from. After a set number of locations have been flipped, the order of presentation is randomized and the user begins to hear the audio snippets one after another. When they hear an audio snippet, they click on the location on the screen that originally opened to reveal that audio. It’s like a combination of Simon and Concentration.
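The flow of this test can be sketched in a few lines of Python. The function names and data shapes here are illustrative assumptions about the protocol as described, not our actual test software; the "clips" are stand-ins for the audio snippets.

```python
import random

def make_trial(clips, seed=0):
    """Build one trial of the audio-location memory test.

    Study phase: each clip is revealed at a distinct screen location,
    one after another. Recall phase: the clips are replayed in a
    shuffled order, and the subject must click the original location
    of each clip as it plays.
    """
    rng = random.Random(seed)
    locations = list(range(len(clips)))
    rng.shuffle(locations)
    study = list(zip(clips, locations))   # clip -> location pairings shown to the subject
    recall_order = clips[:]
    rng.shuffle(recall_order)             # presentation order is re-randomized for recall
    return study, recall_order

def score(study, responses):
    """Count correct responses; `responses` maps each clip to the clicked location."""
    answer = dict(study)
    return sum(1 for clip, loc in responses.items() if answer.get(clip) == loc)
```

A perfect subject reproduces the study mapping exactly; partial scores fall out of `score` directly, one point per correctly relocated clip.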

What we are finding is, first, that this test is really hard. At about four or five locations, people start to lose it. Second, musicians and non-musicians are performing similarly, despite their different experiences with musical memory and familiarity with the types of material. For instance, in one trial we presented users with very short clips of Mozart string quartets. Our musician group was well versed in this material and could identify the composer without prompting. However, in the testing environment, expert status conveyed no advantage whatsoever. We hypothesize that when the domain of expertise is subservient to a dominating and difficult task from a domain without expertise, the domain of expertise (in this case, music) does not convey an advantage.

This is very promising for our application design efforts. If we can further test our hypothesis to define the contexts in which a domain of expertise does and does not convey an advantage over rote cognitive testing, we may be able to lay the groundwork for using diverse and rich application environments as windows into basic cognitive mechanisms.