Heather Ames


My research interests include computational modeling of perceptual systems, including speech and language understanding and visual scene analysis; machine learning for intelligent classification and decision making; and applying academic research projects to real-world applications. My technology transfer interests are highlighted in the technology section. I have extensive training and experience in building a variety of neural networks to perform different tasks, with particular strength in Adaptive Resonance Theory (ART) based models.

One research area that I have pursued throughout my PhD and now as a postdoc is computational modeling of speech perception and production. You can download a copy of my dissertation here. Two projects that I have completed are highlighted below.

Research highlights

Computational modeling can be used to more effectively study speech and language disorders that result from brain trauma and stroke. Apraxia of speech (AOS) is a disorder of the planning and/or programming of speech production, without comprehension impairment and without weakness in the speech musculature. I made use of the DIVA model (Directions into Velocities of Articulators) and the GODIVA model (Gradient Order DIVA) to provide a framework for theorizing about two possible subtypes of AOS. The first subtype is hypothesized to arise from damage to the inferior frontal sulcus (IFS) region; this damage would result in fluent productions of erroneous or misplaced speech sounds. The second subtype is hypothesized to arise from damage to the ventral premotor cortex (vPMC); this damage would result in poorly articulated approximations of the desired syllables. These hypotheses are tested by simulating damage scenarios in DIVA and GODIVA and then comparing the results to behavioral characteristics of AOS patients.
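The idea of a "damage scenario" can be illustrated with a toy lesion experiment: zero out a fraction of connections in a simple mapping and compare intact versus damaged outputs. This is only a minimal sketch of the general technique; the actual DIVA and GODIVA simulations are far richer, and the toy "planning" mapping below is an assumption for illustration, not the models' real architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def lesion(weights, fraction, rng):
    # Zero out a random fraction of connections to mimic focal damage.
    damaged = weights.copy()
    mask = rng.random(weights.shape) < fraction
    damaged[mask] = 0.0
    return damaged

# Toy "planning" weights mapping a target speech sound to articulator commands
# (illustrative stand-in, not the actual DIVA planning circuitry).
W = rng.normal(size=(8, 8))
target = rng.normal(size=8)

intact_output = W @ target
damaged_output = lesion(W, fraction=0.3, rng=rng) @ target

# The degradation measure quantifies how far the damaged "production"
# deviates from the intact one, analogous to comparing model behavior
# against patient data.
error = np.linalg.norm(intact_output - damaged_output)
```

In the real simulations, the comparison is against behavioral error patterns of AOS patients rather than a simple norm, and the lesion is targeted at specific model regions (IFS or vPMC) rather than random connections.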


I developed the Neural Normalization Network model (NormNet) to explain how the human brain converts speaker-dependent acoustic information into speaker-independent language representations while preserving speaker identity. The speaker-independent representations are categorized into unitized speech items, which serve as inputs to sequential working memories whose distributed patterns can be rapidly categorized into syllable and word representations and stably remembered by Adaptive Resonance Theory circuits. NormNet is part of an emerging model of auditory streaming and speech categorization. This work was published in JASA.
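The stable category learning provided by ART circuits can be sketched with a minimal ART-1-style classifier over binary feature patterns: an input either resonates with an existing category template (if the match exceeds a vigilance threshold) or recruits a new category, so previously learned categories are never overwritten. This is a generic illustration of the ART matching principle, not the NormNet circuitry itself; the class name and vigilance value are assumptions.

```python
def match_score(pattern, prototype):
    # Fraction of the input's active features that the prototype shares.
    overlap = sum(p & q for p, q in zip(pattern, prototype))
    active = sum(pattern)
    return overlap / active if active else 1.0

class ART1:
    def __init__(self, vigilance=0.8):
        self.vigilance = vigilance   # how strict a match must be to resonate
        self.prototypes = []         # learned category templates

    def classify(self, pattern):
        # Search existing categories, best match first.
        order = sorted(range(len(self.prototypes)),
                       key=lambda i: match_score(pattern, self.prototypes[i]),
                       reverse=True)
        for i in order:
            if match_score(pattern, self.prototypes[i]) >= self.vigilance:
                # Resonance: refine the prototype toward shared features only,
                # leaving other categories untouched (stable learning).
                self.prototypes[i] = [p & q
                                      for p, q in zip(pattern, self.prototypes[i])]
                return i
        # No category matched closely enough: recruit a new one.
        self.prototypes.append(list(pattern))
        return len(self.prototypes) - 1

net = ART1(vigilance=0.8)
a = net.classify([1, 1, 0, 0])   # first pattern founds category 0
b = net.classify([1, 1, 0, 0])   # same pattern resonates with category 0
c = net.classify([0, 0, 1, 1])   # dissimilar pattern founds category 1
```

The vigilance parameter controls the granularity of categories: high vigilance yields many narrow categories, low vigilance yields fewer, broader ones.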