Seminars in Hearing Research at Purdue

Students, postdocs, and faculty with interests in all aspects of hearing meet weekly to share laboratory research, clinical case studies, and theoretical perspectives. Topics include basic and translational research as well as clinical practice. Participants are welcome from all of Purdue University, including Speech, Language, and Hearing Science (SLHS), Biology (BIO), Biomedical Engineering (BME), Mechanical Engineering (ME), and Electrical Engineering (EE). Seminars provide an ideal venue for students to present their work to a supportive audience, for investigators to find common interests for collaborative efforts, and for speakers from outside Purdue to share their work. This seminar is partially supported by the Association for Research in Otolaryngology.

2017-2018 Talks

Past Talks

LYLE 1028, 10:30-11:20 AM

August 31, 2017

Mike Heinz, PhD (Heinz lab)

The Effects of Inner-Hair-Cell-Specific Dysfunction on Neural Coding in the Auditory Periphery

David Axe, Vijay Muthaiah, Michael Heinz

The goal of this work was to investigate one underlying mechanism of sensorineural hearing loss by selectively perturbing the inner hair cells of the cochlea. This was accomplished by measuring both invasive single-unit and non-invasive evoked neural responses in chinchillas that were administered a specific ototoxic drug, carboplatin. Responses were measured to stimuli ranging from simple tones to more complex sounds, including amplitude- and frequency-modulated tones and broadband noise, which represent fundamental acoustic features of speech and music in real-world environments. This experimental approach made it possible to measure the effects of damage to this specific hair-cell type on peripheral neural processing in isolation (i.e., without the confounding interaction of outer-hair-cell damage that often occurs with noise overexposure). Inner-hair-cell dysfunction produced subtle or no effects on common threshold measurements, but perceptually relevant effects were predicted for suprathreshold sounds. Inner-hair-cell damage has long been viewed primarily in terms of cochlear dead regions (i.e., missing inner hair cells). However, our physiological and anatomical evidence suggests that even remaining functional inner hair cells may have degraded responses that provide less neural information for the perception of complex sounds, but do not affect thresholds in a major way. Our results support the idea that frequency-modulated tones may be effective stimuli for suprathreshold inner-hair-cell diagnostics.
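
As a concrete illustration of the stimulus classes named above, the sketch below generates a pure tone, a sinusoidally amplitude-modulated (SAM) tone, a frequency-modulated (FM) tone, and a broadband noise token with NumPy. The sample rate, carrier, modulation rate, and FM depth are illustrative assumptions, not the study's actual parameters.

```python
# Minimal stimulus sketch (not the authors' code); parameter values assumed.
import numpy as np

fs = 48000                     # sample rate (Hz); assumed
t = np.arange(0, 0.5, 1 / fs)  # 500-ms stimulus

fc, fm = 4000.0, 40.0          # carrier and modulation frequencies; assumed
tone = np.sin(2 * np.pi * fc * t)

# SAM tone: carrier multiplied by a raised sinusoidal envelope (100% depth).
sam = (1 + np.sin(2 * np.pi * fm * t)) * np.sin(2 * np.pi * fc * t)

# FM tone: sinusoidal frequency excursion of +/-400 Hz around the carrier;
# modulation index beta = frequency deviation / modulation rate.
beta = 400.0 / fm
fm_tone = np.sin(2 * np.pi * fc * t + beta * np.sin(2 * np.pi * fm * t))

# Broadband (Gaussian) noise token of the same duration.
noise = np.random.randn(t.size)
```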

September 7, 2017

Liz Marler (Audiology, NIH-T35 student-research awardee)

Vestibular Evoked Myogenic Potential (VEMP) Test-Retest Reliability in Children

Vestibular evoked myogenic potentials (VEMPs) are short-latency muscle potentials measured from the neck (cervical, cVEMP) or under the eyes (ocular, oVEMP), which provide information regarding otolith organ function: the saccule and utricle, respectively. VEMPs have been shown to be a reliable test of otolith function in adults; however, research has not been done to assess whether VEMPs are reliable in children. Therefore, the purpose of the study was to determine the test-retest reliability of c- and oVEMP testing in children and to identify factors that affect VEMP response characteristics. Twenty-six children and 10 adults participated in this two-part study, which included a variety of VEMP parameters (air and bone stimuli, eyes open and eyes closed), a comfort questionnaire and physical measurements. Results suggest that VEMPs are a reliable test to assess otolith function in children using air and bone conduction stimuli.
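
Test-retest reliability in studies like this is commonly summarized with an intraclass correlation coefficient. The sketch below computes ICC(2,1) (two-way random effects, absolute agreement, single measurement) from a subjects-by-sessions matrix of response amplitudes; it is a generic illustration of the metric, not the analysis code from this study.

```python
import numpy as np

def icc_2_1(x):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.

    x: (n subjects x k sessions) array of, e.g., cVEMP amplitudes.
    """
    n, k = x.shape
    grand = x.mean()
    ms_r = k * np.sum((x.mean(axis=1) - grand) ** 2) / (n - 1)  # subjects
    ms_c = n * np.sum((x.mean(axis=0) - grand) ** 2) / (k - 1)  # sessions
    sse = np.sum((x - x.mean(axis=1, keepdims=True)
                    - x.mean(axis=0, keepdims=True) + grand) ** 2)
    ms_e = sse / ((n - 1) * (k - 1))
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)
```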

September 14, 2017

Ryan Verner (PhD student, BME, Bartlett lab)

Mutual Information in the Auditory Thalamocortical Circuit Diminishes with Loss of Consciousness

In addition to its scientific importance, understanding the mechanisms of loss of consciousness is crucial to the development of intraoperative tools to assess global neural state. We provide empirical support for the information integration theory of consciousness, which attempts to characterize unconsciousness as a state of reduced or uncorrelated information transmission. Four male Sprague-Dawley rats were each implanted with two Neuronexus 16-channel electrode arrays, targeting the medial geniculate body (MGB) and the primary auditory cortex (A1). After recovery, electrical stimulation was delivered through one array at four levels, ranging from behaviorally subthreshold to well above threshold, while neural responses were recorded on the other. Thalamocortical stimulation consisted of a mock thalamic burst triplet of pulses at 300 Hz in MGB. Corticothalamic stimulation consisted of a single pulse delivered to the deeper layers of A1 (layers 5-6). In addition, responses to simple sounds were recorded, including frequency tuning, rate-level, and click train responses. Neural responses were filtered into local field potentials and multiunit activity (sorted offline with waveclus2). Near-loss of consciousness was induced with sub-hypnotic and just-hypnotic doses of isoflurane (approximately 0.6% or 0.9%) or low intravenous doses of the sedative dexmedetomidine (0.016 or 0.024 mg/kg/hour). Data were collected in the wakeful state before and after sub- or just-hypnotic levels of unconsciousness were induced. Mutual information was assessed using binwise rates following stimulus offset, or by assessing information per bin across all 16 recording channels to measure network information. Preliminary results show a reduction in effective information using both measurement schemes for both agents, independent of stimulation amplitude. These results suggest that mutual information can be a sensitive measure of brain state.
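
For readers unfamiliar with the analysis, mutual information between stimulus condition and a binned neural response can be estimated with a simple plug-in estimator, as sketched below. This is a generic illustration, not the authors' pipeline, and `stim_ids` and `counts` are hypothetical inputs.

```python
import numpy as np

def mutual_information(stim_ids, counts):
    """Plug-in mutual information (bits) between stimulus condition and
    binned spike count.

    stim_ids: (n_trials,) integer condition labels.
    counts:   (n_trials,) integer spike counts for one time bin / channel.
    """
    joint = np.zeros((stim_ids.max() + 1, counts.max() + 1))
    for s, r in zip(stim_ids, counts):
        joint[s, r] += 1
    joint /= joint.sum()                      # joint probability p(s, r)
    ps = joint.sum(axis=1, keepdims=True)     # marginal p(s)
    pr = joint.sum(axis=0, keepdims=True)     # marginal p(r)
    nz = joint > 0
    return np.sum(joint[nz] * np.log2(joint[nz] / (ps @ pr)[nz]))
```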

September 21, 2017

Chandan Suresh (PhD student, SLHS, Krishnan Lab)

Search for Electrophysiological Indices of Hidden Hearing Loss

Recent studies in animals suggest that even moderate levels of noise exposure can damage synaptic ribbons between the inner hair cells and auditory nerve fibers without affecting audiometric thresholds, giving rise to the term “hidden hearing loss” (HHL). Given the pervasive exposure to occupational and recreational noise in the general population, it is likely that individuals afflicted with HHL will go unidentified unless sensitive clinical measures are developed to diagnose this condition. To date, studies attempting to characterize HHL in humans have yielded conflicting results. For example, Stamper & Johnson (2015) reported that the magnitude of the wave I amplitude decrease is related to the amount of noise exposure, suggestive of fewer intact auditory nerve synapses; Liberman et al. (2016) reported an enhanced summating potential to action potential ratio in individuals at risk for HHL; and Prendergast et al. (2017) found no differences in ABR or frequency-following responses (FFR) in individuals with normal hearing and a wide range of noise exposure histories. The objective of this project is to develop sensitive clinical electrophysiologic measures for early detection of HHL. We utilized specific stimulus manipulations that are likely to produce greater degradation of responses (recorded from different levels: inner ear, auditory nerve, and brainstem) in individuals at high risk for HHL compared to controls, due to loss of synapses and/or neurons. The specific stimulus manipulations include response measures across sound levels, response measures in noise, two different adaptation paradigms (stimulus-rate neural adaptation and adaptation recovery in a click-train paradigm), and changes in the rate of a frequency sweep. Preliminary results are presented here from three experiments. Consistent with previous studies, there were no differences between the low- and high-risk groups in audiometric thresholds or DPOAE amplitude. The high-risk group had significantly lower wave I amplitude at high sound levels only, a different pattern of amplitude recovery from adaptation, and greater disruption in the encoding of rapid frequency change. These results suggest that certain stimulus manipulations could potentially isolate individuals at risk for HHL.
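
As a rough illustration of one of the response measures above, the sketch below extracts a wave I peak-to-trough amplitude from an averaged ABR waveform within an assumed latency window; the window and variable names are illustrative, not the study's settings.

```python
import numpy as np

def wave_i_amplitude(abr, fs, window=(0.001, 0.002)):
    """Peak-to-following-trough amplitude of ABR wave I.

    abr: 1-D averaged waveform; fs: sample rate (Hz);
    window: latency range (s) where wave I is expected (assumed values).
    """
    i0, i1 = int(window[0] * fs), int(window[1] * fs)
    seg = abr[i0:i1]
    peak = np.argmax(seg)                    # wave I positive peak
    trough = peak + np.argmin(seg[peak:])    # following trough
    return seg[peak] - seg[trough]

# Growth function across levels, e.g.:
# amps = [wave_i_amplitude(avg_by_level[lvl], fs) for lvl in levels]
```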

September 28, 2017

Ankita Thawani (PhD student, BIO, Fekete Lab)

Zika virus tropism in the early developing brain and inner ear

Zika virus (ZIKV) is an emerging mosquito-borne tropical pathogen that, 70 years after its discovery, was recently associated with severe congenital defects in fetuses, such as microencephaly, retinopathy, and sensorineural hearing loss. Various cellular, organoid, murine, and primate models demonstrate that ZIKV preferentially infects neural progenitor cells and causes increased cell death and reduced proliferation.

Detailed information about the relative permissiveness of the early developing brain is lacking. To address whether all neural progenitors are equally susceptible to ZIKV, we employed the easily accessible embryonic chicken model. Direct ZIKV injections into the neural tube yielded predominantly periventricular infection within 3 days post-infection. However, we found regions of heavy infection, or “hot spots,” associated with certain key signaling centers of the brain that are known to secrete morphogens to pattern the neighboring neuroepithelium. We analyzed three such morphogens: Shh, Fgf8, and Bmp7. We observed reduced expression of each in heavily infected regions, and demonstrated a patterning defect associated with one of them (Shh). Thus, while ZIKV preferentially infects neural progenitors, it also exhibits differential tropism for specific subregions of the developing brain, possibly impairing their functions during embryonic brain development.

Around 6% of newborns exposed prenatally to ZIKV presented with diminished otoacoustic emissions and auditory brainstem responses, indicating sensorineural hearing loss, perhaps originating in the cochlea. A key knowledge gap is the spatial and temporal susceptibility of the developing inner ear to ZIKV infection. ZIKV injection into the chicken otocyst frequently resulted in sensory epithelial infection, with infection found in all cochleas analyzed at 10 days post-infection. Non-sensory infection was also observed, albeit at lower frequency. The study is still in its preliminary stages and will be extended with E2 to E5 ear injections, along with short-term and long-term analyses of ZIKV infection. We hope to determine which inner ear cell types are most susceptible at each stage of infection.

October 5, 2017

Ed Bartlett, PhD (Bartlett lab; presenting from Salamanca, Spain)

Paribas or Bury Pa? – Age-related changes in the neural representations of voice onset time in the inferior colliculus

The inferior colliculus (IC) integrates a variety of inputs to perform spectrotemporal processing in the primary auditory pathway, including temporal-to-rate transformations. These transformations make the IC important for understanding how the auditory pathway adapts to changes in hearing abilities, such as those due to aging or noise-induced hearing loss. In this study, we used consonant-vowel sounds that varied in voice onset time (VOT) from /ba/ to /pa/, presented either as the original tokens or as the envelopes of those tokens modulating a noise carrier. Synchronized neural populations were recorded non-invasively as envelope-following responses (EFRs) in young and aged rats. In addition, local field potentials (LFPs) and unit activities were recorded in the inferior colliculus, enabling some measure of the input-to-output transformation within the IC. We found that both EFRs and LFPs were degraded in older animals, even after compensating for hearing thresholds. However, IC unit activity was similar between young and aged rats in many cases using simple measures such as firing rates. We then tested the related question of whether individual sites or populations were able to discriminate between the different VOT stimuli. A template-matching classification model was generated in which single-trial responses were correlated with aggregate trends. IC units were found to discriminate stimuli above chance but still made errors. Integration over a population of units reduced variability and increased performance. Stimulus discrimination was similar across age groups for VOT envelopes modulating a noise carrier but declined in older animals for the original tokens. These results suggest that there may be multiple mechanisms of compensation to maintain neural representations in older animals, including compensation within the IC.
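
A template-matching classifier of the kind described can be sketched in a few lines: each class template is the mean response across trials of one VOT condition, and a single trial is assigned to the condition whose template it correlates with best. This is a generic sketch under those assumptions, not the authors' model; for unbiased evaluation the test trial should be held out of its own template.

```python
import numpy as np

def template_classify(trials, labels, test_trial):
    """Assign a single trial to the condition whose mean response
    (template) it correlates with best.

    trials: (n_trials, n_samples) responses; labels: (n_trials,) conditions.
    """
    classes = np.unique(labels)
    templates = [trials[labels == c].mean(axis=0) for c in classes]
    r = [np.corrcoef(test_trial, t)[0, 1] for t in templates]
    return classes[int(np.argmax(r))]
```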

October 12, 2017

Phil Smith, PhD (Dept. of Neuroscience, University of Wisconsin)

Trouble in paradise. What are LSO cells doing!?

The brainstem’s lateral superior olive (LSO) is thought to be crucial for localizing high-frequency sounds by coding interaural sound level differences (ILDs). For almost 50 years, the dogma has been that LSO principal cells act like “sluggish integrators,” weighing contralateral inhibition against ipsilateral excitation and making their sustained firing rate a function of the azimuthal position of a sound source. Our in vivo patch clamp recordings from labeled LSO neurons in the Mongolian gerbil tell a different story. Light and electron microscopic analysis of labeled neurons allowed us to distinguish principal and non-principal LSO neurons and unequivocally assign a given set of response features to a given cell. We find that although both principal and non-principal neurons contribute to LSO tonotopy, principal neurons respond only at sound onset and show fast membrane features suggesting an importance for timing. In contrast, non-principal LSO neurons act more like sluggish integrators, often generating sustained responses to sound, and have slower membrane features with larger action potentials. The similarity of current-injection and sound-evoked responses suggests that differences in intrinsic properties are primarily responsible for these differences. Remarkably, the almost simultaneous convergence of transient click-evoked ipsilateral excitation and contralateral inhibition provides a mechanism for localizing transient stimuli. Finally, our anatomical evidence indicates that LSO cells may have an influence on the MSO/ITD pathway.

October 19, 2017

Yangfan Liu, PhD (Herrick Acoustics Lab)

Modeling, Reproducing and Active Control of Noise Sources. How to Bring All These Together?

The modeling of an acoustic source usually involves a series of mathematical basis functions to represent the sound field generated by the actual source; the coefficients (or parameters) of the basis functions can be estimated from sound field measurements at different spatial locations. Once this estimation is done, the sound field at any location can be predicted. This technique can be used in source identification, the study of source characteristics, sound field reproduction, and other applications. This seminar will focus on a reduced-order modeling method based on the multipole decomposition of a sound field, along with an introduction to some traditional source-modeling methods. The sound field decomposition technique can also be implemented in active noise control (ANC) applications, allowing the system to selectively control certain source contents or important source characteristics with limited computing resources. After a general introduction to active noise control, an ANC method based on independent sound field component decomposition will be described, which can extract and control certain source components. Some potential uses of different modal decomposition methods in ANC applications will also be mentioned.
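
The coefficient-estimation step described above is, at its core, a linear least-squares fit: stack the basis functions evaluated at the microphone positions into a matrix and solve for the coefficients that best reproduce the measured pressures. The sketch below assumes each basis function is available as a callable; it is a generic outline of the idea, not the lab's implementation.

```python
import numpy as np

def fit_source_model(basis, positions, measured_pressure):
    """Least-squares estimate of source-model coefficients.

    basis: list of callables; basis[j](positions) returns basis function j
           evaluated at all measurement points (for a multipole series these
           would be monopole, dipole, ... terms -- hypothetical here).
    positions: measurement locations; measured_pressure: (n_mics,) data.
    """
    A = np.column_stack([b(positions) for b in basis])  # (n_mics x n_terms)
    coeffs, *_ = np.linalg.lstsq(A, measured_pressure, rcond=None)
    return coeffs

# Once coeffs is known, the pressure at any new point x is
# sum_j coeffs[j] * basis[j](x), which is what enables field prediction
# and reduced-order control of selected components.
```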

October 26, 2017

Alex Francis, PhD (Francis lab)

Is it possible to distinguish between listening effort and noise annoyance?

Listening to speech in noise is effortful and can be unpleasant. Current theories of effortful listening attribute listeners’ dissatisfaction with listening to speech in noise to demands on cognitive resources such as working memory and selective attention. However, research on human response to environmental and workplace noise distinguishes between noise annoyance and distraction, separating affective/emotional responses from cognitive/attentional ones. In my lab we have been studying psychophysiological responses to challenging listening situations in an attempt to identify physiological markers that may help to differentiate between different aspects of listening effort and noise annoyance. Here we present results from an initial study (Francis et al., 2016) in which the decrease in blood volume pulse amplitude (BVPA) was stronger when listening to noise-masked speech than to equally intelligible synthetic speech, suggesting that BVPA may reflect a response specific to interference from noise (i.e., annoyance). We have now extended this research, asking how individual traits such as noise sensitivity interact with autonomic nervous system (ANS) responses associated with listening effort, including BVPA, skin conductance level, facial EMG, and heart rate variability. Traits included selective attention, working memory capacity, vocabulary (PPVT-IV), noise sensitivity (NoiseQ), hearing thresholds, and “Big 5” personality traits (BFI-10). Listeners heard 10 short stories and answered questions about them. Listening effort was manipulated in two ways: half the stories were spoken in non-native-accented English, half in native-accented English masked by speech-shaped noise. The signal-to-masker ratio was adjusted for each subject to equate performance across conditions. Preliminary results suggest that physiological responses did indeed differ across sources of difficulty (accent, noise) even when overall performance was controlled for, but that cognitive factors, rather than personality or sensitivity traits, play the stronger role in determining these patterns. Implications for theories of listening effort and future research will be discussed.
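
Of the autonomic measures listed, heart rate variability is among the most standardized; a common time-domain index is the root mean square of successive differences (RMSSD) of R-R intervals, sketched below. This is the textbook formula, not necessarily the computation used in the study.

```python
import numpy as np

def rmssd(rr_ms):
    """Root mean square of successive differences of R-R intervals (ms),
    a standard time-domain heart rate variability index."""
    rr = np.asarray(rr_ms, dtype=float)
    return np.sqrt(np.mean(np.diff(rr) ** 2))

# Example: rmssd([812, 795, 830, 808, 790]) -> variability in ms
```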

November 2, 2017

Ross Maddox, PhD (Bharadwaj lab visitor; Assistant Professor of Biomedical Engineering and Neuroscience, University of Rochester)

New approaches to the auditory brainstem response for the clinic and lab

The auditory brainstem response (ABR) has been an extremely useful tool for studying the early auditory pathway since its discovery in the 1970s. In its most basic form it represents the average evoked scalp potential to a couple thousand repetitions of a short stimulus such as a click. However, despite its clinical value, the ABR does have weaknesses. In the two parts of this talk I will present our efforts to address two of these principal limitations, which we hope will improve and extend the ABR's utility in both the clinic and the lab, respectively.
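
In its basic form, the ABR computation is just epoch averaging: extract the EEG segment following each click and average across the couple thousand presentations. A minimal sketch, assuming click onset times are already known in samples:

```python
import numpy as np

def average_abr(eeg, fs, click_onsets, epoch_ms=10.0):
    """Average the EEG epochs following each click onset.

    eeg: 1-D scalp potential; fs: sample rate (Hz);
    click_onsets: iterable of onset sample indices.
    Returns the mean evoked waveform across presentations.
    """
    n = int(epoch_ms * 1e-3 * fs)
    epochs = np.stack([eeg[s:s + n] for s in click_onsets
                       if s + n <= eeg.size])
    return epochs.mean(axis=0)
```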

In the clinic, infant hearing thresholds are estimated by presenting trains of tonebursts to each ear over a range of frequencies and intensities. While each of these toneburst conditions takes only a couple of minutes to record, together they constitute a large combinatorial space, leading to burdensome overall test durations. We are exploring the possibility of measuring the toneburst ABR at all frequencies in both ears simultaneously. Preliminary data suggest that this may speed up the paradigm, and modeling suggests that it may also provide more place-specific responses at higher stimulus intensities.

In the neuroscience lab, there is significant interest in understanding how subcortical areas process speech. However, the rapidity of the ABR's response components necessitates short evoking stimuli, making these studies difficult to perform. We have recently developed a paradigm for measuring the ABR to continuous, non-repeating, naturally uttered speech. These methods allow the design of engaging behavioral tasks, facilitating new investigations of cognitive processes like language processing and attention in the auditory brainstem.
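
One simple way to approximate a response to continuous speech, sketched below, is to cross-correlate a half-wave-rectified copy of the speech waveform with the simultaneously recorded EEG at positive lags. The rectified regressor and the lag range are assumptions chosen for illustration, not necessarily the authors' exact method.

```python
import numpy as np

def speech_abr(eeg, speech, fs, max_lag_ms=20.0):
    """Cross-correlation of half-wave-rectified speech with EEG.

    eeg, speech: simultaneously recorded 1-D arrays at sample rate fs.
    Returns the correlation at lags 0..max_lag_ms, an approximation of
    the brainstem response waveform to ongoing speech.
    """
    reg = np.maximum(speech, 0.0)              # half-wave rectification
    reg = (reg - reg.mean()) / reg.std()
    e = (eeg - eeg.mean()) / eeg.std()
    lags = int(max_lag_ms * 1e-3 * fs)
    return np.array([np.dot(reg[:reg.size - k], e[k:]) / reg.size
                     for k in range(lags)])
```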

November 9, 2017

Ryan Verner (PhD student, BME, Bartlett lab)

Electrophysiological, Behavioral, and Histological Assessment of the Thalamocortical Network as a Stimulation Target for Central Auditory Neuroprostheses

Brain-machine interfaces aim to restore natural sensation or locomotion to individuals who have lost such abilities. While the field of neuroprostheses has developed some flagship technologies that have enjoyed great clinical success, such as the cochlear implant, it is generally understood that no single device will be ideal for all patients. For example, the cochlear implant is unable to help patients suffering from neurofibromatosis type 2, which is commonly characterized by bilateral vestibular schwannomas whose surgical removal requires transection of the auditory nerve. In an effort to develop stimulatory neuroprostheses that can help the maximum number of patients, research groups have developed central sensory neuroprostheses. However, moving up through ascending sensory processing centers introduces greater uniqueness of neuronal feature selectivity and greater coding complexity, and chronic implantation of devices becomes less efficacious as the brain’s glial cells respond to implanted devices. In this work, we propose a neuroprosthesis targeting the auditory thalamus, specifically the ventral division of the medial geniculate body (MGV). The thalamus represents an information bottleneck through which many sensory systems send information. Primary (MGV) and non-primary (MGD, MGM) subdivisions provide parallel auditory inputs to cortex and receive feedback excitation and inhibition from cortex and the thalamic reticular nucleus (TRN), respectively. We characterized the potential of the thalamocortical circuit as a neuroprosthetic target through electrophysiological, behavioral, and histological methods. Preliminary results suggest that some features of intracortical microstimulation (ICMS) are more salient than those of intrathalamic microstimulation (ITMS), such as sensitivity to perceived intensity cues. Additionally, we have identified a profound immune response in MGV to the implanted electrode and propose alternative surgical approaches that may mitigate this response.

November 16, 2017

Josh Alexander, PhD (Alexander lab)

Preliminary data on mechanisms for perception of frequency-lowered speech

Frequency lowering (FL) is a class of advanced digital signal processing techniques designed to help individuals with high-frequency hearing loss by moving the mid- to high-frequency parts of speech that cannot be heard with conventional hearing aids to lower-frequency regions where hearing is better. However, clinicians face numerous decisions when setting the parameters that control the frequency range to be lowered and the frequency range where the new information is to be placed. The appropriate selection of these parameters is critical to patients’ outcomes because no other hearing aid technology has as much ability to alter the identity of individual speech sounds. Currently, clinicians and researchers lack a clear set of objectives when programming the parameters that control the re-coding of sound. The latest commercial variant of this technology, adaptive nonlinear frequency compression (ANFC), has two FL states that are conditional on whether the incoming sound has a low- vs. high-frequency emphasis. ANFC compounds the clinical decision-making problem because clinicians now have to consider how the FL parameters affect the sounds produced by each of these two processing states.

To help develop evidence-based guidelines for optimizing the selection of FL parameters, we have been working to find a perceptual basis for the discrimination of speech contrasts processed with ANFC. Our innovative approach uses a psychoacoustic model to describe the perceptual effects of frequency lowering and a computer-based hearing aid simulator that mimics the signal processing used in commercial devices. Our psychoacoustic model is able to account for 80-90% of the variance in speech recognition results obtained from normal-hearing adults. Recently, we have discovered that a neural metric based on mean-rate statistics obtained from an auditory nerve model captures even more of the variance in the perceptual data than its psychoacoustic equivalent. The next step will be to use the neural model to generate predictions for how a variety of hearing losses will influence speech perception with different ANFC settings. The expected outcomes of this research are a set of recommended guidelines for optimizing parameter selection in hearing aids with ANFC and other FL algorithms.
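
For context, nonlinear frequency compression is often described by a textbook input-output frequency map: frequencies below a cutoff pass unchanged, while frequencies above it are compressed toward the cutoff on a log-frequency axis. The sketch below shows that map with illustrative parameter values (not clinical recommendations or the settings used in this work).

```python
def nfc_map(f_in, cutoff=2000.0, ratio=2.5):
    """Textbook nonlinear frequency compression map.

    Below the cutoff (Hz), frequencies are unchanged; above it, the
    log-frequency distance from the cutoff is divided by the compression
    ratio. Parameter values are illustrative assumptions.
    """
    if f_in <= cutoff:
        return f_in
    return cutoff * (f_in / cutoff) ** (1.0 / ratio)

# Example: nfc_map(6000.0) maps a 6-kHz component to about 3.1 kHz.
```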
