“From Cochlea to Cortex”
University of Tübingen, Germany
Dr. Marlies Knipper is the Head of Molecular Physiology of Hearing at the University of Tübingen, in the Department of Otorhinolaryngology, Head and Neck Surgery. Her areas of specialization include the causes of congenital deafness; the molecular relationship between deafness and hardness of hearing in newborns resulting from transient epigenetic disorders during pregnancy; tinnitus, age-induced and noise-induced hearing loss, and neuropathies; and the relation of hearing and cognition. Dr. Knipper’s primary interest currently lies in the creation of an infrastructure platform for the more efficient use of cross-system (translational) research on the different senses.
Identification of functional biomarkers of tinnitus and tinnitus/hyperacusis in patients
Tinnitus is a symptomatic malfunction of the hearing system in which phantom sounds are perceived without acoustic stimulation.
In recent years, we have developed a fingerprint for tinnitus and, more recently, hyperacusis, using a combination of behavioral animal models of tinnitus/hyperacusis with electrophysiological and molecular approaches in the peripheral and central auditory system. The characteristic features that distinguish equally hearing-impaired animals with and without tinnitus or hyperacusis have been described (Möhrle et al., 2019; Hofmeier et al., 2018; Knipper et al., 2013; Rüttiger et al., 2013; Singer et al., 2013). We aimed to test this knowledge in patients with tinnitus alone and in patients with tinnitus and co-occurring hyperacusis.
Here we present a clinical pilot study in hearing-impaired subjects with and without tinnitus, in comparison to tinnitus with co-occurring hyperacusis. We use audiometric measurements, the analysis of body fluids, and functional magnetic resonance imaging (fMRI), analyzing both evoked BOLD fMRI and resting-state functional connectivity (r-fcMRI).
The results in defined patient groups are discussed in the context of previous findings gained in animals.
This work was supported by the Deutsche Forschungsgemeinschaft: DFG KN 316/10-1; RU 713/3-2 (FOR 2060); RU 316/12-1 and KN 316/12-1 (SPP 1608); KN 316/13-1.
University of Western Ontario, Canada
Associate Professor, Dept. Anatomy & Cell Biology, Schulich School of Medicine & Dentistry, University of Western Ontario, Canada.
Over the past 10 years, my research has focused on basic questions investigating how the cortex integrates information from more than one sense (e.g., vision and hearing), as well as on clinically-relevant questions as to how the cortex adapts to hearing loss and its perceptual implications. In addressing these research themes, I have used numerous animal models (mice, rats, ferrets and cats) and a combination of techniques, including electrophysiological recordings from cortical neurons, as well as a variety of behavioural paradigms, ranging from reflexive tests of sensory-motor gating to perceptual judgment tasks requiring executive function. With respect to the neuroplasticity induced by hearing loss, my lab has taken a multi-faceted approach that ranges from in vitro investigations of the sensory cells in the inner ear, all the way up to studying cortical processing at the level of single neurons, local microcircuits and sensory perception. It remains a long-term goal of my research program to reveal the brain circuits and cellular mechanisms that contribute to the perceptual consequences commonly associated with hearing loss-induced brain plasticity.
Crossmodal Plasticity in Auditory, Visual and Multisensory Cortical Areas Following Noise-Induced Hearing Loss
Following hearing loss, crossmodal plasticity occurs whereby there is an increased responsiveness of neurons in the deprived auditory system to the remaining, intact senses (e.g., vision). Using electrophysiological recordings in noise-exposed rats, our recent studies have revealed that crossmodal plasticity is not restricted to the core auditory cortex; higher-order auditory regions as well as visual and audiovisual cortices show differential effects following noise-induced hearing loss. Unexpectedly, the cortical area showing the greatest relative degree of multisensory convergence post-noise exposure transitioned away from the normal audiovisual area toward a neighboring, predominantly auditory area. Thus, our collective results suggest that crossmodal plasticity induced by adult-onset hearing impairment manifests in higher-order cortical areas as a transition in the functional border of the audiovisual cortex. Our ongoing studies have begun to reveal the implications of this crossmodal plasticity on the rats’ ability to perceive the precise timing of audiovisual stimuli using novel behavioral tasks that are consistent with studies of perceptual judgement in humans.
Sunnybrook Research Institute, University of Toronto, Canada
Andrew Dimitrijevic is a scientist at the Sunnybrook Health Sciences Centre, Department of Otolaryngology, Head and Neck Surgery, Sunnybrook Research Institute. He is also faculty at the University of Toronto, Departments of Otolaryngology, Head and Neck Surgery, Institute of Medical Sciences, Program in Neuroscience.
Dr. Dimitrijevic completed his PhD at the University of Toronto under the supervision of Terry Picton. He went on to postdoctoral positions at the University of British Columbia under the supervision of David Stapells and University of California, Irvine under the supervision of Arnie Starr. Dr. Dimitrijevic was faculty at Cincinnati Children’s Hospital Medical Center before coming to Sunnybrook.
Dr. Dimitrijevic uses high density EEG recordings to understand sensory and cognitive aspects of hearing in both normal hearing and hearing impaired populations. Web site: http://www.cibrainlab.com/
Cortical oscillations in hearing loss: An emerging field with emerging concepts
Performing even a simple audiogram requires a number of cognitive processes, such as selective attention, motivation, and working memory. The neural mechanisms of these cognitive processes are slowly emerging. In recent years there has been an explosion of interest in brain oscillations in auditory cognition research. This, coupled with an increased awareness that cognition plays a crucial role in everyday communication, such as listening to speech in noise or tackling the cocktail party problem, has made the field of brain oscillations and audition ripe for investigation. While classic early evoked potentials provide excellent indices of sensory encoding, induced brain oscillations appear to index higher-order cognitive processes such as attention and working memory. With hearing loss, the compensatory role of cognition in response to reduced sensory fidelity, as indexed with brain oscillations, has begun to be examined. Brain rhythms spanning the delta, theta, alpha, beta, and gamma frequencies appear to play specific roles in auditory cognition. This talk will provide an overview of these brain rhythms in normal audition and their compensatory roles in hearing loss.
University of Calgary, Canada
Born 1942. M.Sc. in physics (1967), Ph.D. in biophysics (1972). Research Associate, Department of Otorhinolaryngology, Leiden University, the Netherlands (1972-1978), interrupted from 1976-1977 as a research fellow at the House Ear Institute in Los Angeles, California. 1978-1986 Professor in Biophysics, Nijmegen University, Netherlands. 1986-2013 Professor in Psychology, Physiology and Pharmacology at the University of Calgary, Alberta, Canada. Alberta Heritage Foundation for Medical Research Scholar and Scientist (1986-2013). 1997-2013 Campbell McLaurin Chair for Hearing Deficiencies. 2013-present Emeritus Professor at the University of Calgary.
Published >220 peer-reviewed articles, ~100 book chapters, 6 single-authored books, and 4 edited books. Received >20,000 citations (Google Scholar); h-index = 81.
- Elected corresponding member of the Royal Netherlands Academy of Arts and Sciences (1989).
- Elected Fellow of the Acoustical Society of America (1998).
- Elected Fellow of the Royal Society of Canada (2014).
- Editor-in-Chief of “Hearing Research” (2005-2010)
Hearing Loss and the Brain
Hearing loss is in the ear, but hearing problems originate in the brain. The reduced auditory neural activity entering the central auditory system alters that system’s functioning. Specifically, hearing loss causes tonotopic map changes in the thalamus and cortex, but not in more peripheral subcortical structures, likely as a result of changes in the balance between excitation and inhibition, which may also cause central gain changes. Hearing loss is also known to increase spontaneous firing rates and neural synchrony in the cochlear nucleus, midbrain, thalamus, and auditory cortex, as well as in non-classical auditory-sensitive areas. Severe hearing loss, for instance at >8 kHz, results in atrophy of part of the auditory cortex, and also in prefrontal cortical areas related to executive functions. In addition to these changes, the ‘auditory connectome’ may be altered either directly through deafferentation or through increasing demands on cognitive processes such as attention and memory to make sense of the deteriorated acoustic signals resulting from hearing loss. These plastic changes can also result in tinnitus and hyperacusis, and potentially in advancing the onset of mild cognitive impairment.
Mass Eye & Ear, USA
Charles Liberman, Ph.D. is the Schuknecht Professor of Otology and Laryngology at the Harvard Medical School and the Director of the Eaton-Peabody Laboratories at the Massachusetts Eye and Ear Infirmary. Dr. Liberman received his B.A. in Biology from Harvard College in 1972 and his Ph.D. in Physiology from Harvard Medical School in 1976. He has been on the faculty at Harvard since 1979, has published over 180 papers on a variety of topics in auditory neuroscience and is the recipient of the Award of Merit from the Association for Research in Otolaryngology, the Carhart Award from the American Auditory Society and Bekesy Silver Medal from the Acoustical Society of America. His research interests include 1) coding of acoustic stimuli as neural responses in the auditory periphery, 2) efferent feedback control of the auditory periphery, 3) mechanisms underlying noise-induced and age-related hearing loss, 4) the signaling pathways mediating nerve survival in the inner ear and 5) application of cell- and drug-based therapies to the repair of a damaged inner ear.
University of Manchester, United Kingdom
Chris Plack was educated at the University of Cambridge, where he obtained a BA in 1987 and a PhD in 1990. He is currently Ellis Llwyd Jones Professor of Audiology at the University of Manchester, UK, and Professor of Auditory Neuroscience at Lancaster University, UK, where he studies the physiology and psychology of normal and impaired human hearing. In 2003, he was elected a Fellow of the Acoustical Society of America.
Cochlear Synaptopathy in Humans
The audiogram is easy to obtain, but is relatively insensitive to auditory damage, particularly damage to the auditory nervous system. Results from rodent and primate models suggest that noise exposure can cause a dramatic loss of connections between inner hair cells and auditory nerve fibers without affecting threshold sensitivity. This disorder has been called cochlear synaptopathy or “hidden hearing loss,” because it is not thought to be detectable using pure tone audiometry. If confirmed in humans, this disorder could have major implications for the prevention, diagnosis, and management of noise-induced hearing loss. However, the evidence for cochlear synaptopathy due to noise exposure in humans is mixed, and there is little evidence that synaptopathy causes listening difficulties in young adults with normal audiograms. It is possible that humans are less vulnerable to synaptopathy than animals, and significant noise-induced synaptopathy may only occur in combination with an audiometric loss. The effects of cochlear synaptopathy may be more important in older listeners. Aging is associated with synaptopathy in rodent models, and there is now good histological evidence for substantial age-related synaptopathy in humans. The perceptual consequences of this pathology are unclear, but synaptopathy could contribute to age-related declines in listening ability, particularly speech perception in noise, which is poorly explained by the clinical audiogram.
Dalhousie University, Canada
1978.3-1981.3 Nanjing Medical College, B.S.
1986.9-1989.7 Southeast University, Nanjing, China, Auditory Physiology, M.Sc.
1996.8-1998.8 State University of New York at Buffalo, Audiology, M.A.
1996.8-2000.9 State University of New York at Buffalo, Hearing Science, Ph.D.
- Cochlear protection by genetic manipulation.
- The working mechanisms of ribbon synapses.
- Noise-induced synaptic damage in the cochlea and its impact on hearing function.
- Impact of hearing loss on cognitive functions.
Noise-induced synaptopathy resulting from repeated noise exposure at different sound levels
Noise-induced synaptopathy (NIS) has become a serious concern as a possible effect of noise-induced hearing loss (NIHL) since a massive synapse loss was reported following a brief noise exposure that did not cause a permanent threshold shift (PTS). However, the impact of NIS on hearing may have been exaggerated. Coding-in-noise deficits (CIND) have been predicted to be the major consequence of the synaptopathy, but to date there is no robust evidence for CIND after such noise exposures, and noise-induced synapse loss is likely reversible, at least in part. In recent studies, we failed to find CIND after such a noise exposure by testing cochlear responses to amplitude modulation. Further evidence is reported here from two experiments. In Experiment 1, both C57 mice and guinea pigs were given two brief noise exposures. A much smaller synapse loss was seen immediately (1 day) after the second exposure, and it did not result in a significant increase in permanent synapse loss when tested 4 weeks later. In Experiment 2, guinea pigs were exposed to noise at a lower level (91 dB SPL), repeated to reach energy equal to that of the brief exposure that caused significant synapse loss in this species (106 dB SPL, 2 hrs). No synapse loss was seen after the repeated low-level noise. This suggests that the equal-energy hypothesis for NIHL is not valid for NIS, which is unlikely to occur after noise exposures at the upper limit allowed under current safety standards.
University of Western Australia, Australia
Helmy Mulders is an auditory neuroscientist with a particular interest in centrifugal control and plasticity. For the last 9 years her focus has been the study of the neural substrate of tinnitus in an animal model, using a variety of techniques such as single-neuron electrophysiology, behavioural studies, immunocytochemistry, and RT-PCR. She works in the Auditory Laboratory at the University of Western Australia (UWA) and has published >55 peer-reviewed journal articles and book chapters. She is a full-time academic, coordinating and teaching in the undergraduate and postgraduate Neuroscience programs and the Master of Clinical Audiology at UWA.
Central plasticity after hearing loss – therapeutic implications for tinnitus
Tinnitus is a common phantom auditory perception that can severely affect quality of life. The precise neural mechanisms remain unknown, which likely contributes to the fact that there is no cure. Tinnitus is strongly associated with cochlear trauma and hearing loss, which evokes plasticity in the central auditory system, resulting in altered levels and patterns of spontaneous activity. It has been suggested that tinnitus is generated from these alterations in neural activity in combination with changes in non-auditory regions such as frontostriatal circuitry. This latter circuitry may be involved in sensory gating of non-salient information at the level of the thalamus. Therefore, a breakdown of this mechanism could potentially allow altered neural signals in the auditory system to reach the cortex, leading to perception. In our laboratory, we use rat and guinea pig models of cochlear trauma and tinnitus to investigate the relationship between frontostriatal circuitry and the auditory system and the mechanisms of sensory gating. Electrophysiological recordings in the auditory thalamus in animals with and without cochlear trauma and/or tinnitus are combined with stimulation of elements of the frontostriatal circuitry. Stimulation is achieved invasively by focal electrodes or non-invasively by repetitive transcranial magnetic stimulation. Our results demonstrate that activation of frontostriatal circuitry has a functional effect on activity in the auditory thalamus and that this effect changes after cochlear trauma. Our data support the notion that sensory gating is involved in tinnitus generation, which has implications for potential therapeutic targets.
Sunnybrook Research Institute, University of Toronto, Canada
Dr. Dabdoub is the research director of the Hearing Regeneration Initiative at Sunnybrook Research Institute and an associate professor in the Department of Otolaryngology Head & Neck Surgery and Department of Laboratory Medicine at the University of Toronto, Canada.
Dr. Dabdoub’s research program focuses on discovering and elucidating the molecular signaling pathways involved in the development of the mammalian inner ear. The goal of his laboratory is to connect developmental biology to inner ear diseases and ultimately to regenerative medicine for the amelioration of hearing loss through cellular regeneration of sensory hair cells and primary auditory neurons.
Connecting the Cochlea to the Brain: Development and Regeneration of the Primary Auditory Neurons
Primary auditory neurons, also known as spiral ganglion neurons, are responsible for transmitting sound information from cochlear sensory hair cells in the inner ear to cochlear nucleus neurons in the brainstem. Auditory neurons develop from neuroblasts delaminated from the proneurosensory domain of the otocyst and continue to mature until the onset of hearing. These neurons degenerate as a result of noise exposure and aging, resulting in permanent hearing impairment; thus, auditory neurons are a primary target for regeneration for the amelioration of hearing loss. Glial cells surrounding auditory neurons originate from neural crest cells and migrate to the spiral ganglion during development. These glial cells survive after neuron degeneration and loss, making them ideal for gene therapy and cellular reprogramming.
Using combinatorial coding, we have successfully converted glial cells into induced neurons in vitro and assessed the induced neurons using morphology, immunohistochemistry, their ability to innervate peripheral and central targets, as well as transcriptomic analyses comparing their properties to endogenous auditory neurons and control cells. Furthermore, we have developed a preclinical mouse model of neuropathy with the aim of converting glial cells in vivo. Neuron replacement therapy would have a significant impact on research and advancements in cochlear implants as the generation of even a small number of auditory neurons would result in improvements in hearing.
McMaster University, Canada
Ian C. Bruce, Ph.D. is a Professor and Associate Chair of Graduate Studies in Electrical & Computer Engineering at McMaster University in Hamilton, Ontario, Canada. He is engaged in interdisciplinary research and academic activities in electrical & biomedical engineering, neuroscience, psychology, and music cognition. His research is focused on applying cutting-edge experimental and computational methods to better understand, diagnose, and treat hearing disorders. Research applications pursued by his lab include hearing aids, cochlear implants, diagnosis & treatment of tinnitus, speech & music perception, digital speech processing, and genetic hearing loss.
Dr. Bruce received the B.E. (electrical and electronic) degree from The University of Melbourne, Australia, in 1991, and the Ph.D. degree from the Department of Otolaryngology, The University of Melbourne, in 1998. From 1993 to 1994, he was a Research and Teaching Assistant in the Department of Bioelectricity and Magnetism, Vienna University of Technology, Vienna, Austria. He was a Postdoctoral Research Fellow in the Department of Biomedical Engineering at Johns Hopkins University, Baltimore, MD, USA, from 1998 to 2001, before moving to McMaster in 2002. Dr. Bruce is an Associate Editor of the Journal of the Acoustical Society of America, a Fellow of the Acoustical Society of America, a Member of the Association for Research in Otolaryngology, and a Registered Professional Engineer in Ontario.
Computational modeling of diverse forms of cochlear pathology
Computational models of auditory processing can be useful tools in understanding the normal function of the ear and the auditory pathways of the brain. In addition, computational models that can incorporate pathology may be helpful in understanding the effects of hearing impairment and in the development of improved devices for those with hearing loss, such as hearing aids and cochlear implants. However, incorporating pathology into physiological models of auditory processing faces some difficulties including: i) incomplete accuracy in even explaining normal function, ii) limited physiological detail regarding the site of the pathology, and/or iii) uncertainty in explaining a human subject’s experimental data due to a lack of definite knowledge about the pathology that they have.
In this talk, I will review efforts by a number of research groups, including my own, to develop, validate and apply models of a diverse range of cochlear pathologies. Methodologies for modeling outer hair cell impairment, inner hair cell impairment, cochlear synaptopathy, and pathologies caused by genetic mutations will be explored. Approaches to overcoming uncertainties about patterns of pathologies in human subjects will also be discussed.
Harvard Medical School, USA
Daniel is the son of two fashion designers who raised him in Connecticut surrounded by other designers and creative types. Daniel studied English literature and Psychology at a liberal arts college in Virginia (Univ. Richmond) and didn’t discover science until the end of his university days, while studying abroad in Argentina. Daniel moved west for his graduate education, receiving his PhD in Biological Sciences at the University of California, Irvine, and carried out postdoctoral research on cortical plasticity with Mike Merzenich at the University of California, San Francisco. After a few years on the faculty at Vanderbilt, Daniel moved his lab to Mass. Eye and Ear in 2010 and joined the Dept. of Otolaryngology at Harvard Medical School. Daniel is now an Associate Professor at Harvard Medical School, the Director of the Lauer Tinnitus Research Center, and holds the Amelia Peabody Research Chair in Otolaryngology at Massachusetts Eye and Ear.
Neural circuit logic for modulation and gain control in the auditory cortex
Sensory brain plasticity exhibits a fundamental duality, a yin and yang, in that it is both a source of and a possible solution for various types of perceptual disorders. When signaling between the ear and the brain is disrupted, the balance of excitation and inhibition tips toward hyperexcitability throughout the central auditory neuraxis, increasing the ‘central gain’ on afferent signals so as to partially compensate for a diminished input from the auditory periphery. By performing chronic 2-photon imaging from ensembles of genetically identified cortical cell types before and after auditory nerve damage, we can track the imbalance between excitatory and inhibitory networks following cochlear neuropathy and relate these changes to perceptual difficulties encoding signals in noise. These findings underscore the ‘yin’, the dark side of brain plasticity, wherein the transcriptional, physiological and neurochemical changes that compensate for the loss or degradation of peripheral input can incur debilitating perceptual costs. We are also committed to understanding the ‘yang’ of brain plasticity: how the remarkable malleability of the adult brain can be harnessed and directed toward an adaptive, or even therapeutic, endpoint. Our ongoing research suggests that a cluster of cholinergic cells deep in the basal forebrain may hold the key to adjusting the volume knob in hyperactive cortical circuits. We use a combination of optogenetics, photometry and large-scale single unit recordings to tune into the dialog between the basal forebrain and key cell types in the auditory thalamus and auditory cortex of awake mice engaged in active listening tasks. In parallel, we use high-channel EEG, pupillometry and psychophysical approaches to reveal the hidden neural signatures of tinnitus and disordered speech perception in human subjects.
These efforts have culminated in the development of audiomotor training platforms that suggest new treatment avenues for various classes of perceptual disorders.
Harvard Medical School, USA
Sharon G. Kujawa, Ph.D. is an Associate Professor of Otolaryngology, Harvard Medical School. She is the Director of Audiology Research and a Senior Scientist in the Eaton-Peabody Laboratories, Massachusetts Eye and Ear Infirmary, Boston, MA. Work in the Kujawa laboratory seeks to clarify mechanisms and manifestations of common forms of acquired sensorineural hearing loss in humans, particularly those due to aging and exposure to noise and ototoxic drugs. A major focus of current work is in understanding how these etiologies cause loss of cochlear synapses, determining the functional consequences of that loss, and how the degeneration can be manipulated pharmacologically to reveal mechanisms and provide treatments.
Noise-Induced Cochlear Synaptopathy With and Without Sensory Cell Loss
Noise exposure is a primary cause of acquired sensorineural hearing loss affecting many millions worldwide. After decades of focus on the sensory hair cell component of noise-induced hearing loss, animal studies have more recently begun to address peripheral neural consequences of such exposure. This work has identified the loss of inner hair cell synapses with cochlear afferent neurons as a common and early manifestation of noise damage, across all mammalian species evaluated thus far. Our early studies of noise-induced cochlear synaptopathy concentrated on exposures producing large but reversible threshold shifts without hair cell loss. This model provided a powerful approach to initial studies because it allowed a separation of the functional deficits due to synaptopathy from those due to hair cell loss, and because clues present in suprathreshold responses could be interpreted without an audibility confound. However, noise can produce temporary and/or permanent threshold elevations, with and without hair cell loss, depending on characteristics of the exposure and susceptibilities of the individual. Thus, although the synaptopathy can be hidden in a normal audiogram, the real challenge to diagnosis may be in mixed (neural plus sensory) pathology. Here, we consider cochlear structure and function after noise exposure with and without sensory cell loss.
University at Buffalo, USA
Richard Salvi, SUNY Distinguished Professor and Director of the Center for Hearing and Deafness at the University at Buffalo, is an auditory neuroscientist with a long-standing interest in noise-induced hearing loss, ototoxicity, aging, tinnitus, hyperacusis, functional brain imaging, neuroplasticity, cell death, regeneration and otoprotection.
Animal Models of Loudness Hyperacusis, Sound Avoidance Hyperacusis and Pain Hyperacusis
Hyperacusis is a debilitating condition in which everyday sounds are perceived as extremely loud, sometimes painful, resulting in sound avoidance. To test for loudness hyperacusis, we measured reaction time vs. intensity (RT-I) functions in rats. RTs decreased with increasing intensity. Sodium salicylate treatment caused RT-I functions to become steeper and RTs to become shorter than normal, behaviors indicative of loudness hyperacusis. High-frequency noise-induced hearing loss induced loudness recruitment at high frequencies, but hyperacusis at low frequencies. To test for sound-avoidance hyperacusis, rats were placed in an arena with a bright, open area and a dark enclosure equipped with a loudspeaker. Normal rats spent ~95% of the time in the dark, but when sound was introduced into the dark box, the rats shifted their preference to the bright area. After inducing a high-frequency hearing loss, sound stimulation became even more effective at driving the rats from the dark box into the open area. To determine whether sound could modulate pain, we measured the time for a rat to remove its tail from hot water in the absence or presence of sound. Tail-withdrawal latency increased for intensities from 80 to 90 dB, indicating that moderate-intensity sounds increased thermal pain tolerance (i.e., hypoalgesia). However, withdrawal latencies decreased at higher intensities; at 120 dB, latencies were significantly shorter (i.e., decreased pain tolerance, hyperalgesia). These results provide evidence for multisensory integration between auditory and pain pathways. Prior opioid treatment caused sounds to enhance thermal pain. These behavioral paradigms provide new tools to study various forms of hyperacusis.
University at Buffalo, USA
Matthew Xu-Friedman grew up in La Jolla, California, and attended Yale University for a bachelor’s degree in Psychobiology. He received his PhD from Cornell University in the Section of Neurobiology and Behavior for work on the knollenorgan sensory system of electric fish with Carl Hopkins. He did a postdoc in the Department of Neurobiology at Harvard Medical School on the synaptic physiology of the cerebellum with Wade Regehr. He is currently a Professor in the Department of Biological Sciences at the University at Buffalo, State University of New York, where his lab focuses on the physiology and function of auditory nerve synapses.
Sound-driven changes in auditory nerve synaptic properties
Abnormal acoustic stimulation can lead to disorders, such as tinnitus and language processing disorders. To treat these disorders, it is important to determine how acoustic conditions affect cells and synapses of the auditory pathway. We studied the effects of increased and decreased acoustic stimulation on synapses made by auditory nerve fibers, and found changes in synaptic depression and size. Changes in depression are consistent with changes in presynaptic calcium influx. These changes had a significant impact on the fidelity of the synapse in vitro, and appear to optimize synaptic properties to activity levels in vivo.
Karolinska Institute, Sweden
Dr. Canlon’s laboratory is working to understand the normal hearing process and causes of hearing deterioration as a step toward the prevention of hearing loss. In an effort to learn how hair cells and nerve fibers become damaged, Dr. Canlon’s group is conducting molecular experiments to identify key players in this process. Dr. Canlon has recently discovered that the cochlea contains a self-sustained circadian clock, which continues to tick in culture. The current research focus is to understand the molecular mechanisms through which the circadian clock regulates cell and organismal metabolism and the reciprocal feedback of metabolism on circadian oscillators in the inner ear. We anticipate that a better understanding of clock processes will lead to innovative therapeutics for a spectrum of auditory disorders.
Dr. Canlon has been head of the Experimental Audiology Section at the Karolinska Institute for the past 25 years and has had numerous major administrative duties at the Karolinska Institute. She is currently Editor-in-Chief of Hearing Research. She received her bachelor’s degree from Brooklyn College, City University of New York, and her Master’s at the University of Michigan. She then moved to Stockholm and obtained her Ph.D. at the Karolinska Institute. After a post-doc at the Institut Pasteur, Paris, and CNRS-INSERM, Montpellier, she established her laboratory at the Karolinska Institute and became professor in 2001.
The clockwork of the cochlea
This lecture is based on our discovery that the peripheral auditory system, the cochlea, is regulated by a molecular circadian clock, which opened an exceptional opportunity for understanding previously unknown features of the auditory system. We have found, in the mouse, that the same noise exposure causes greater physiological and morphological consequences during nighttime than during daytime exposures. Subsequently, a robust molecular circadian clock machinery, including the circadian genes Per1, Per2, Bmal1, and Rev-Erb, was identified in the cochlea and was found to regulate this differential sensitivity to day or night noise exposure. Using RNAseq, we recently identified 7211 genes in the cochlea with circadian expression, a large proportion of which regulate cell signaling, hormone secretion, and inflammation. Nearly two-thirds of these genes show maximal expression at nighttime, a finding that can only be captured by performing analyses around the clock. Why is this important? A "broken" clock may increase the risk of developing hearing loss, as has been shown for a wide variety of diseases including metabolic, cardiovascular, neoplastic, and inflammatory disorders. However, before investigating the consequences of clock disruption on auditory function, a better understanding of the circadian components that characterize the auditory system is needed.
University of Michigan, USA
Rick Altschuler received his Ph.D. in Anatomy at the University of Minnesota in 1978 and then moved to the Lab of Neuro-Otolaryngology at NINCDS at the National Institutes of Health, where he studied neurotransmitters and receptors of the cochlea and cochlear nucleus. He joined the Kresge Hearing Research Institute at the University of Michigan in 1985, where he is now a Professor in the Departments of Otolaryngology and Cell and Developmental Biology. He also has an appointment at the VA Ann Arbor Healthcare System. He is currently studying noise-induced and age-related hearing loss and vestibular dysfunction, tinnitus, and mechanism-based therapeutic interventions for prevention and treatment.
Noise-induced cochlear hair cell loss, synaptopathy and tinnitus: Mechanisms and strategies for prevention and repair
Noise overstimulation can lead to temporary and/or permanent shifts in auditory brainstem response (ABR) measures and can lead to changes in gap detection and/or behavioral responses indicative of tinnitus in animal models. Loss of hair cell function or loss of hair cells is linked to temporary or permanent ABR threshold shifts, respectively. Loss of inner hair cell–auditory nerve connections (synaptopathy) can be associated with changes in suprathreshold ABR responses and loss of dynamic range. There are successful strategies for preventing noise-induced hair cell dysfunction and/or loss, including antioxidant treatment. Post-noise treatments can reduce further loss of hair cells that have not entered cell death pathways; however, there are not yet any successful treatments to replace lost hair cells. There are successful strategies to prevent noise-induced cochlear synaptopathy, such as the use of anti-excitotoxicity agents, as well as successful strategies to induce reconnection of lost synapses, including the use of neurotrophins. While noise-induced effects in the cochlea are considered inducing agents for the progression of events leading to tinnitus, the specific necessary or sufficient cochlear changes have yet to be firmly identified. Our studies are testing synaptopathy as an inducer of noise-induced tinnitus in the rat model (using broadband and small-arms-fire-like noises) and determining whether prevention or repair will influence the progression and reduce the incidence of tinnitus.
CNRS/University of Toulouse, France
Pascal Barone is the team leader of C3P (Crossmodal Compensation and Cortical Plasticity), a team dedicated to understanding mechanisms of cortical plasticity in normal subjects and deaf patients. His early work exploring the neuronal mechanisms of prenatal axogenesis clearly demonstrated the high specificity of cortical connectivity during early development, a result that has had a strong theoretical influence on understanding the development of sensory functions and the impact of early sensory deprivation on brain reorganization. Based on a close collaboration with the ENT department at the Purpan hospital in Toulouse, he presently conducts a multidisciplinary research program to better understand the neuronal mechanisms of brain plasticity in deafness, in animal models and in humans. This work operates at both a fundamental and a clinical level; it encompasses behavioral and brain imaging (PET) studies in patients as well as in normal-hearing subjects. Because the success of rehabilitation relies on functional plasticity in the auditory system, the work aims to understand the reorganization of the cortical network involved in auditory processing that occurs during deafness and following progressive recovery through cochlear implantation or hearing aids. Complementary projects aim at a better understanding of hearing restoration, coupled with the evaluation of rehabilitation strategies.
Functional segregation in the auditory cortex: evidence from brain reorganization in unilateral deaf patients
In patients with unilateral hearing loss (UHLp), binaural processing is disrupted: spatial localization of the sound source is impaired, as is the ability to understand speech in noisy environments. At the brain level, only a limited number of studies have explored the functional reorganisation that occurs in adults after unilateral deafness. We conducted an original study aimed at investigating, in UHLp, the relationships between the severity of unilateral hearing loss, the resulting deficit in binaural processing, and the extent of cortical reorganisation across the auditory areas.
We recruited 14 UHL patients (hearing loss 37-120 dB HL) and age-matched hearing controls. All subjects were evaluated for free-field sound localization abilities and speech-in-noise comprehension (French Matrix test). All subjects underwent an fMRI protocol to evaluate the activation pattern across auditory areas during a natural-sounds discrimination task. First, brain imaging analysis clearly demonstrated that in non-primary auditory areas (NPAC), UHL induces a shift toward ipsilateral aural dominance. This reorganization, absent in the primary auditory cortex (PAC), is correlated with the severity of hearing loss and with poorer spatial localization performance. Second, a regression analysis between brain activity and patients' performance clearly demonstrated a link between the sound localisation deficit and a functional alteration that specifically impacts the posterior auditory areas known to process spatial hearing. In contrast, the core of the auditory cortex appeared relatively preserved and maintained its normal involvement in processing non-spatial acoustic information.
Altogether, our study adds further evidence of a functional dissociation in the auditory system and shows that the binaural deficits induced by UHL predominantly affect the dorsal auditory stream.
University of Michigan, USA
Susan Shore, Ph.D., is a Professor of Otolaryngology, Molecular and Integrative Physiology, and Biomedical Engineering at the University of Michigan, where she also holds the Joseph Hawkins Collegiate Research Professorship. Dr. Shore has spent over two decades at the Kresge Hearing Research Institute at the University of Michigan, where her research has focused on the contributions of multisensory systems to auditory brain circuits in normal and noise-damaged systems. Currently, the translation of these basic science discoveries is focused on human clinical trials testing a novel, multisensory tinnitus treatment. Dr. Shore has received numerous awards for her research, including the Lydia Adams de Witt award for Women in Science and Engineering. As principal investigator of a long-standing NIH-funded training grant, Susan is also committed to training the next generation of sensory scientists. In her spare time, she is a performance ballroom dancer.
Cochlear damage may be necessary but not sufficient to induce tinnitus
While hearing loss is associated with tinnitus, the relationship is not causal: people without audiometric hearing loss can develop tinnitus, and not all people with audiometric hearing loss develop tinnitus. Likewise, in animal models, noise exposures that produce only temporary threshold shifts result in behavioral evidence of tinnitus in only about half of the exposed animals. Following tinnitus induction using narrow-band noise exposures, we have demonstrated that dorsal cochlear nucleus fusiform cells show BF-specific alterations in stimulus-timing-dependent plasticity, increased spontaneous synchronization, and increased spontaneous firing rates (SFR) in animals with behavioral evidence of tinnitus [1, 2]. Conversely, animals without plasticity-induced changes in fusiform cells, but with equivalent degrees of cochlear damage, did not show evidence of tinnitus. Similarly, bushy cells in the ventral cochlear nucleus showed BF-restricted increases in SFR and cross-unit synchrony in animals with behavioral evidence of tinnitus but not in those without such behavioral evidence, even though ABR thresholds and suprathreshold wave-1 amplitudes were equivalent, suggesting similar cochlear damage in the two groups [3]. Glutamatergic and cholinergic transmission were also altered in these animals in a tinnitus-specific manner [4, 5].
Conclusions: These results suggest that hearing loss, whether visible or 'hidden', is by itself insufficient to produce a tinnitus phenotype. Changes in cochlear output after noise exposure require accompanying plastic changes in recipient neurons in the CNS in order to produce the physiological and behavioral signatures of tinnitus.
[1] C. Wu, D.T. Martel, S.E. Shore, Increased Synchrony and Bursting of Dorsal Cochlear Nucleus Fusiform Cells Correlate with Tinnitus, J Neurosci, 36 (2016) 2068-2073.
[2] K.L. Marks, D.T. Martel, C. Wu, G.J. Basura, L.E. Roberts, K.C. Schvartz-Leyzac, S.E. Shore, Auditory-somatosensory bimodal stimulation desynchronizes brain circuitry to reduce tinnitus in guinea pigs and humans, Sci Transl Med, 10 (2018).
[3] D. Martel, S. Shore, Ventral cochlear nucleus bushy cells contribute to enhanced auditory brainstem response amplitudes in tinnitus, ARO, 2019.
[4] A.N. Heeringa, C. Wu, C. Chung, M. West, D. Martel, L. Liberman, M.C. Liberman, S.E. Shore, Glutamatergic Projections to the Cochlear Nucleus are Redistributed in Tinnitus, Neuroscience (2018), DOI 10.1016/j.neuroscience.2018.09.008.
[5] L. Zhang, C. Wu, D.T. Martel, M. West, M.A. Sutton, S.E. Shore, Remodeling of cholinergic input to the hippocampus after noise exposure and tinnitus induction in Guinea pigs, Hippocampus (2018), DOI 10.1002/hipo.23058.
Benjamin D. Auerbach
University at Buffalo, USA
Dr. Benjamin D. Auerbach graduated from Cornell University with a Bachelor’s in Biological Sciences and received his Ph.D. in Neuroscience from the Massachusetts Institute of Technology. He is currently a Research Assistant Professor at the Center for Hearing and Deafness at the University at Buffalo. Dr. Auerbach’s research interests include auditory plasticity, hyperacusis, and autism spectrum disorders.
Comparing auditory circuit disruptions across diverse models of hyperacusis
Hyperacusis is a complex hearing disorder that encompasses a wide range of reactions to sound, including excessive loudness, increased aversion to or fear of sound, or even pain. While often associated with hearing loss and tinnitus, sound tolerance disturbances are actually observed across a broad spectrum of neurological disorders. Thus, hyperacusis is diverse in both its etiology and its phenotypic expression, and it is imperative to consider this diversity when attempting to elucidate its physiological mechanisms. Here we will describe a series of recent studies utilizing novel behavioral paradigms aimed at distinguishing between the diverse ways in which sound perception may be altered in hyperacusis. We have combined these novel assays with acute and chronic in vivo electrophysiological recordings to examine the neurophysiological correlates of hyperacusis in three distinct models: salicylate-induced ototoxicity; noise-induced hearing loss; and an Fmr1 KO rat model of Fragile X syndrome, a leading inherited form of autism that consistently presents with auditory hypersensitivity. This multifaceted approach allows us to determine whether different forms of hyperacusis are mechanistically distinct disorders with overlapping presentation, or whether they share a common, convergent pathophysiological mechanism.
Enrique A. Lopez-Poveda
University of Salamanca, Spain
Enrique A. Lopez-Poveda, Ph.D. (born 1970) is Associate Professor of Otorhinolaryngology at the University of Salamanca, and the Director of the Auditory Computation and Psychoacoustics Laboratory of the Neuroscience Institute of Castilla y León (since 2003), and the Director of the Audiology Diploma of the University of Salamanca (since 2006). He received a B.Sc. in physics from the University of Salamanca in 1993 and a Ph.D. in hearing sciences from Loughborough University in 1996. His current research interests include (1) understanding and modeling cochlear compression; (2) understanding the roles of olivocochlear efferents in hearing; (3) reinstating the effects and benefits of olivocochlear efferents to the users of hearing aids and cochlear implants; and (4) understanding the factors behind the wide variability in outcomes across hearing-aid and cochlear-implant users. He has authored over 75 papers, one book and three patents on a variety of topics in hearing science. He is (or has been) editor of two books, a member of the editorial board of Trends in Hearing (since 2014), and an associate editor of Journal of the Acoustical Society of America (2012-2015). He was elected Fellow of the Acoustical Society of America in 2009, and of the International Collegium of Rehabilitative Audiology in 2015.
On the role of the medial olivocochlear reflex in adaptation to noise
Sensory systems constantly adapt their responses to the current environment. In hearing, adaptation may facilitate communication in noisy settings, a benefit frequently (but controversially) attributed to the medial olivocochlear reflex (MOCR) enhancing the neural representation of speech. Here, I will review our efforts toward elucidating this potential role of the MOCR by using cochlear implants. We found that sensitivity to amplitude modulation and the recognition of speech in noise improve over time similarly for cochlear-implant (CI) users and for normal-hearing listeners. Because the electrical stimulation delivered by cochlear implants is independent of the MOCR, this demonstrates that noise adaptation does not require the MOCR. On the other hand, we also found that CI users show better speech-in-noise intelligibility with a binaural CI sound-coding strategy inspired by the contralateral MOCR than without it. Combined, the evidence suggests that the MOCR can produce noise adaptation, but that compensatory mechanisms can produce as much noise adaptation as the MOCR when the MOCR is absent.
University of Minnesota, USA
Magdalena Wojtczak received her education from Adam Mickiewicz University in Poznan, Poland. She obtained her M.Sc. degree in Physics, and her Ph.D. degree in Physics with a specialty in Acoustics (in 1996). After that, she worked as a Postdoctoral Associate, and then Research Associate, in the Psychoacoustics Lab led by Neal Viemeister at the University of Minnesota. Currently, she is a Research Associate Professor in the Auditory Perception and Cognition Lab at the University of Minnesota. Her research interests lie in differences in peripheral auditory processing between ears with normal hearing and those with sensorineural hearing loss. In particular, she is working toward linking specific perceptual deficits to physiological mechanisms. She was elected a Fellow of the Acoustical Society of America in 2011.
Tinnitus – its relation to hearing loss and cochlear synaptopathy
Tinnitus is characterized by persistent sound perception in the absence of a sound source. The phantom sound is thought to result from maladaptive neural gain compensating for the lack of peripheral input due to hearing loss. However, not all hearing-impaired individuals experience tinnitus, and the percept is sometimes observed in individuals with clinically normal hearing. Although peripheral input deprivation is widely considered insufficient by itself for the development of tinnitus, it likely provides the initial trigger that, in combination with changes to central auditory and some non-auditory areas of the brain, leads to phantom sound perception. In individuals with normal hearing, tinnitus may result from loss of synaptic connections between inner hair cells in the cochlea and auditory nerve fibers. Establishing the link between tinnitus in otherwise normal hearing and cochlear synaptopathy is challenging, since measures that are affected by synaptopathy in animals do not show consistent patterns that can be related to tinnitus or noise exposure in humans. In fact, there is no established proxy measure for cochlear synaptopathy in humans, and its prevalence and perceptual consequences are unknown. Finding a reliable measure, or a battery of tests, is desirable, as cochlear synaptopathy has been shown to accelerate loss of hearing sensitivity in laboratory animals. Establishing a link between tinnitus and cochlear synaptopathy, in the absence of clinical hearing loss, could accelerate the development and testing of effective treatments for cochlear synaptopathy in humans. This work is supported by NIH grant R01 DC015987.
University of Pittsburgh, USA
Thanos Tzounopoulos, PhD is a Professor and Vice Chair of Research in the Department of Otolaryngology and the UPMC Endowed Professor of Auditory Physiology at the University of Pittsburgh School of Medicine. He also serves as the Director of the Pittsburgh Hearing Research Center. His research is focused on molecular, cellular, and systems mechanisms underlying normal and pathological auditory processing. He also studies tinnitus and its underlying cellular and molecular mechanisms. More recently, his research has expanded to drug discovery and development for tinnitus. Dr. Tzounopoulos earned his PhD in Molecular and Medical Genetics at Oregon Health and Science University as a Fulbright Scholar and completed postdoctoral training at the University of California at San Francisco and the Oregon Hearing Research Center.
The role of synaptically released zinc in hearing and hearing loss
In my talk, I will discuss the mechanisms by which synaptically released zinc fine-tunes synaptic transmission, sound processing, and central adaptations to noise-induced hearing loss.
University of Toronto, Canada
Karen Gordon, PhD, is a Professor in the Department of Otolaryngology and a Graduate Faculty Member in the Institute of Medical Science at the University of Toronto. She works at the Hospital for Sick Children in Toronto, Ontario, Canada, as a Senior Scientist in the Research Institute and an Audiologist in the Department of Communication Disorders. She is Director of Research in Archie’s Cochlear Implant Laboratory and holds the Bastable-Potts Health Clinician Scientist Award in Hearing Impairment and Cochlear Americas Chair of Auditory Development. Karen’s research focuses on auditory development in children who are deaf and use auditory prostheses including cochlear implants. Her work is supported by research funding from the Canadian Institutes of Health Research along with the Cochlear Americas Chair in Auditory Development and generous donations.
Should children with single sided deafness receive a cochlear implant?
We are studying whether children with profound deafness in one ear and normal hearing in the other (i.e., single-sided deafness, SSD) can benefit from cochlear implantation. Leaving these children's hearing loss untreated puts them at risk for social, educational, and emotional deficits and, over time, allows an aural preference to develop, weakening the potential for bilateral/spatial hearing development. Concurrent vestibular and balance impairments further compromise these children's access to spatial information. Consequences for academic skills and working memory will be discussed. Of the available treatment options, cochlear implantation provides the best method for delivering auditory input to a deaf ear, but it is not presently considered the clinical standard of care in children with SSD and is not suitable in all cases. On the other hand, cochlear implantation could have a particular role in children whose SSD is associated with congenital cytomegalovirus and, when provided with limited delay, is well tolerated as measured by consistent device use. Early outcomes also indicate a reversal of aural preference, as input from the cochlear implant restores representation of the previously deprived ear to the auditory brain. We continue to monitor children with SSD who receive cochlear implants to define the longer-term effects of this intervention on developing auditory and vestibular/balance function.
Carnegie Mellon University, USA
Barbara Shinn-Cunningham joined Carnegie Mellon University in September of 2018 to head the new Carnegie Mellon Neuroscience Institute. She is an MIT-trained electrical engineer turned neuroscientist who uses behavioral, neuroimaging, and computational methods to understand auditory processing and perception. Her interests span from sensory coding in the cochlea to brain networks controlling executive function and their influence on auditory perception (and everything in between). Prior to joining CMU, she served more than two decades on the faculty of Boston University. In her copious spare time, she competes in saber fencing and plays the oboe / English horn. She received the 2019 Helmholtz-Rayleigh Interdisciplinary Silver Medal and the 2013 Mentorship Award, both from the Acoustical Society of America (ASA). She is a Fellow of the ASA and of the American Institute for Medical and Biological Engineers, a lifetime National Associate of the National Research Council, and a recipient of fellowships from the Alfred P. Sloan Foundation, the Whitaker Foundation, and the Vannevar Bush Fellows program.
Cochlear synaptopathy in human listeners?
Many listeners with normal hearing thresholds nonetheless have trouble understanding speech in noisy settings, especially if they are middle-aged or older. Recent advances, driven by studies in noise-exposed and aging animals, suggest that such difficulties can arise when cochlear function is healthy but there is a loss of the auditory nerve fibers conveying information to the brain. This talk will review the evidence that such loss arises in human listeners, and explore how this kind of subtle hearing loss might influence perception in everyday settings.
University of Iowa, USA
Phillip Gander is an assistant research scientist in the Department of Neurosurgery and the Department of Otolaryngology at The University of Iowa. He conducts research using electrocorticography (ECoG) in the Human Brain Research Laboratory of Matt Howard, MD, and using neuroimaging (PET, EEG) in the Iowa Cochlear Implant Clinical Research Center. With the unique opportunities afforded by both research environments he investigates questions related to auditory object processing in collaboration with Tim Griffiths, MD, Newcastle University. He previously worked as a research fellow at the National Biomedical Research Unit in Hearing, Nottingham, UK with Deb Hall. Phillip received his PhD in Psychology, Neuroscience, and Behaviour in 2009 from McMaster University, Hamilton, ON, where he worked with Larry Roberts and Laurel Trainor.
Phillip’s research focus is auditory cognition from the perspective of cognitive neuroscience. Using psychophysics and neuroimaging he studies how the auditory system forms perceptual representations and the factors that contribute to their formation including learning, memory, and attention, under normal conditions and when they are disordered (e.g., hearing loss, cochlear implants, and tinnitus). In addition to investigating the brain bases of sound processing he places a strong emphasis on translating basic scientific findings into benefits for patients.
Human intracranial recordings during tinnitus perceptual change
Advances are being made regarding putative neural mechanisms of tinnitus in animal models; however, difficulty remains regarding the extent to which these models relate to factors relevant to the human experience of tinnitus. These factors include neurophysiological correlates, changes in perceptual strength, degree of distress, and the impact on cognition and quality of life. An important step in establishing the utility of animal models is to find similarities between these characteristics and the human experience of tinnitus. The most tractable among them is the category of neurophysiological correlates; unfortunately, clear patterns in measures of human brain activity related to tinnitus remain elusive. The work outlined in this presentation covers recent investigations of intracranial EEG in medically refractory epilepsy patients. Results from two patients are described, measured during a perceptual manipulation of tinnitus using a 30-s white-noise residual inhibition paradigm. Widespread activity throughout the brain was found during a change in tinnitus intensity, along with focal cross-frequency activity changes, which are proposed as hubs for oscillatory coupling of activity related to distinct functions of a broader tinnitus network. The results align with models of tinnitus activity generated from human non-invasive recordings. In one patient, stimulation of Heschl's gyrus was possible, allowing exploration of the potential for perceptual modulation of tinnitus. After stimulation, the patient described effects similar to residual inhibition. Importantly, the patient reported no change in hearing function during stimulation, which challenges the idea that tinnitus is functionally equivalent to normal auditory perception.
UCL Ear Institute, UK
Roland Schaette is a Reader in Computational Auditory Neuroscience at the UCL Ear Institute, where he has been working since 2008. Before joining the Ear Institute, Roland did his PhD in Theoretical Biophysics at Humboldt Universität zu Berlin under the supervision of Richard Kempter.
The main goal of Roland’s research is to understand how hearing loss affects neuronal processing in the auditory system, with two main research topics (one could also call them the real-world applications of the basic research interest): how hearing loss could lead to the development of tinnitus, and how hearing loss impairs the understanding of speech in noise.
His work on computational models of tinnitus has provided a framework for understanding how neuronal activity stabilization after hearing loss could lead to increased neuronal response gain and neuronal hyperactivity, which could underlie the development of tinnitus. Together with David McAlpine, he was one of the first to investigate hidden hearing loss in tinnitus patients with a normal audiogram.
Hidden hearing loss impacts the neural representation of speech in background noise
The ability to follow a conversation in high levels of background noise, referred to as ‘cocktail party’ listening, is critical to human communication. However, despite seemingly normal hearing abilities, many individuals struggle to understand speech in noisy backgrounds. Here, we investigated the representation of speech sounds by neurons in the auditory midbrain (inferior colliculus, IC) of gerbils subjected to a mild acoustic noise insult (octave-band noise, 2-4 kHz, 105 dB SPL) that only caused a temporary hearing threshold shift.
At high sound levels (75 dB SPL), neural discrimination performance for vowel-consonant-vowel (VCV) speech tokens was significantly lower for neurograms obtained from noise-exposed animals than from controls. Impaired performance was significantly correlated with the spectral characteristics of the different VCVs; discrimination was most impaired for VCVs with relatively greater spectral energy within or above the spectrum of the band of damaging noise. In contrast, at moderate sound levels (60 dB SPL), recordings from exposed animals yielded significantly better neurogram-based VCV discrimination than those from control animals, and this improvement was unrelated to relative spectral content. Human listeners, presented with representations of the VCVs sonified from the neural data, experienced a significant reduction in the intelligibility of speech reconstructed from neurograms of noise-exposed animals compared to that from controls, and this reduction was greater at 75 than at 60 dB SPL. The data are consistent with the conclusion that noise-induced damage to high-threshold ANFs elicits a deficit in the neural coding of speech-in-noise at high sound intensities, as well as a compensatory increase in response gain in the central auditory system that renders speech-in-noise more intelligible at moderate intensities.
University of Colorado Boulder, USA
Anu Sharma, PhD is a Professor in the Dept. of Speech Language and Hearing, Institute for Cognitive Science and Center for Neuroscience at University of Colorado at Boulder and adjunct professor in the Department of Otolaryngology and Audiology at the University of Colorado at Denver Medical School. Her research is focused on examining neuroplasticity in hearing loss and is funded by the National Institutes of Health.
Cortical neuroplasticity and cognitive decline in age-related hearing loss
A basic tenet of neuroplasticity is that the brain will reorganize following sensory deprivation. A better understanding of the cortical neuroplasticity accompanying hearing loss may allow us to improve the design of hearing devices, accommodating altered cortical processing. Compensation for the deleterious effects of hearing loss includes the recruitment of alternative brain networks during cortical processing. Our experiments suggest that hearing loss, ranging from mild-moderate to severe-profound, results in significant changes in neural resource allocation, reflecting patterns of cross-modal compensation from the visual and somatosensory modalities, increased listening effort, and decreased cognitive spare capacity. Furthermore, these cortical changes are related to cognitive decline and decreased speech perception in noise. Interestingly, many cognitive and cortical changes may be reversed after the use of appropriately fitted hearing aids. Our results suggest that compensatory plasticity influences outcomes for patients with age-related hearing loss.
McMaster University, Canada
Larry Roberts (PhD, University of Minnesota) is Professor Emeritus in the Department of Psychology, Neuroscience, and Behavior at McMaster University. Roberts has studied neural plasticity in the human auditory system using EEG and MEG, laboratory auditory training, and musicians as models. In 2002 he organized a consortium of five laboratories in Canada and two abroad which combined their expertise to investigate the neural basis of tinnitus. Their findings and those of many other laboratories indicate that neuroplastic changes triggered by hearing loss are involved in generation of chronic tinnitus percepts. Roberts has held Guest Professorships at the University of Tuebingen (Germany) and the Humboldt University (Berlin), and from 2003-2008 directed the MEG laboratory of the Down Syndrome Research Foundation in Vancouver (affiliated with Simon Fraser University). His research has been supported by the Canadian Institutes for Health Research, the Natural Sciences and Engineering Research Council of Canada, the American Tinnitus Association, and the Tinnitus Research Initiative. Presently he serves as a member of the Scientific Advisory Committee of the American Tinnitus Association.
Neural plasticity and its initiating conditions in tinnitus
I will discuss evidence from several laboratories which suggests a highly provisional understanding of the mechanisms that give rise to tinnitus associated with hearing loss. Deafferentation caused by cochlear pathology (which can be hidden from the audiogram) activates forms of neural plasticity in auditory pathways, generating tinnitus and its associated conditions including hyperacusis. Auditory neurons compensate for deafferentation by increasing their input/output functions (gain) at multiple levels of the system. Forms of homeostatic plasticity appear to be responsible for this neural change, which increases the spontaneous and driven activity of neurons in central auditory structures in animals expressing behavioral evidence of tinnitus. Another tinnitus correlate, increased synchronous firing among the affected neurons, appears to be forged by spike-timing-dependent plasticity operating in auditory pathways. This form of plasticity may normally be constrained by feedforward inhibition arising from the spontaneous activity of auditory nerve fibers in quiet but may be unleashed when this input is reduced or lost. Slow oscillations recorded from the cortex modulate or reflect the integration of tinnitus neural activity with information in brain regions supporting memory, emotion, and consciousness, which are known to exhibit increased metabolic activity in tinnitus patients. Such oscillations may be a feature of normal auditory processing but persist in tinnitus, driven by phantom signals from the auditory pathway. New sound therapies attempt to suppress tinnitus-related neural activity through plasticity, but repeated sessions may be needed to prevent the return of tinnitus owing to deafferentation as its initiating condition.
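The homeostatic gain increase described above can be caricatured in a few lines of code. This is a minimal illustrative sketch, not any published model: the rectified-linear rate function, the activity set-point, and the learning rate are all assumptions chosen for clarity. It shows how a slow gain adjustment that restores a neuron's mean firing rate after loss of input also amplifies whatever residual activity remains.

```python
import numpy as np

rng = np.random.default_rng(0)

def output_rate(drive, gain, spont=5.0):
    # Rectified-linear rate model: intrinsic spontaneous rate plus gain-scaled input
    return np.maximum(spont + gain * drive, 0.0)

# Healthy condition: afferent drive with mean 10 (arbitrary units)
drive = rng.normal(10.0, 2.0, 10_000)
target = output_rate(drive, gain=1.0).mean()   # homeostatic set-point

# Deafferentation halves the mean afferent drive
drive_deaf = 0.5 * drive

# Slow homeostatic scaling: nudge the gain until the mean rate is restored
gain = 1.0
for _ in range(200):
    gain += 0.01 * (target - output_rate(drive_deaf, gain).mean())

# The gain converges to ~2x: the set-point is restored, but the larger gain
# now amplifies fluctuations in the surviving input as well.
print(f"compensatory gain ~ {gain:.2f}")
```

In this toy model the set-point is recovered exactly, but driven and spontaneous fluctuations passing through the doubled gain are correspondingly enlarged, mirroring the increased spontaneous and driven activity reported in animals with behavioral evidence of tinnitus.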
Northwestern University, USA
Jaime García-Añoveros, PhD, is a professor of Anesthesiology, Physiology and Neurology at Northwestern University and a fellow at the Hugh Knowles Center for Clinical and Basic Science in Hearing and Its Disorders. He obtained his BS from UC Berkeley and his PhD from Columbia University, followed by a postdoctoral appointment at Harvard Medical School and the Massachusetts General Hospital, prior to joining the faculty at Northwestern. His research has largely consisted of the identification and characterization of genes, ion channels and transcription factors involved in sensory organ function, formation and degeneration, with an emphasis on pain and hearing. This work produced a macromolecular model of touch mechanotransduction; the identification and characterization of transduction channels for touch and pain, of degeneration-causing mutations in somatosensory neurons and hair cells (the latter explaining various forms of deafness), of specialized lysosomes in cochlear hair cells and presbycusis, and of transcription factors in developing olfactory and auditory neurons and hair cells. The developmental studies revealed a molecular mechanism by which separate cochlear outer and inner hair cells are formed. The combined study of pain and hearing led to the emerging field of auditory nociception.
Loud and/or persistent noise damages the cells of the organ of Corti within the cochlea, among which the outer hair cells (OHCs) are particularly vulnerable. Throughout most of the body, nociceptive neurons of the dorsal root and trigeminal ganglia detect this kind of tissue damage. However, while a few somatosensory nociceptors from the trigeminal ganglia innervate cochlear vessels, they do not innervate the organ of Corti. This raises the question of whether noise-induced damage goes undetected or whether the ear has its own nociceptive neurons. The organ of Corti is innervated by only two types of afferent neurons, both of which reside in the cochlear spiral ganglia. Most (~95%) of these are type I afferents, which contact inner hair cells (IHCs) and are stimulated when these release glutamate. This is the canonical auditory pathway by which sound information is thought to be transmitted from the cochlea to the brain. The other afferents, type II, send processes that extend and branch under the OHCs. Recordings from type II afferents revealed no activation by sound, so their function has remained unclear. We found that, when the canonical auditory pathway was blocked in a mutant in which IHCs do not release glutamate, sound stimulation could still activate neurons in the cochlear nucleus, but only at intensities that damage the organ of Corti and kill OHCs. This reveals a form of communication from cochlea to brain distinct from the canonical auditory pathway. It is most likely carried by type II afferents, which in many other respects resemble somatosensory nociceptors. This represents a novel form of sensation, a hybrid of pain and hearing that we termed auditory nociception, and we further propose that type II afferents may act as auditory nociceptors. Sensitization of such a pain-like system in the inner ear might account for the pathological sensation of pain hyperacusis often reported by individuals with a history of noise trauma.
Hebrew University, Jerusalem
Amir Amedi is the Director of The make SENSE Center for Brain Imaging, Rehab and Augmentation of the SENSES. He is a Professor in the Department of Medical Neurobiology at the Hebrew University, holds a PhD in Computational Neuroscience (ICNC, Hebrew University), and completed a postdoctoral fellowship and instructorship in Neurology at Harvard Medical School. He is a recipient of the Wolf Foundation's Krill Prize for Excellence in Scientific Research (2011), the international Human Frontier Science Program Organization Career Development Award (2009), and the JSMF Scholar Award in Understanding Human Cognition (2011). He has received two consecutive ERC grants (www.BrainVisionRehab.com, 2013-2018; ExperieSENSE, 2018-2023). He is an internationally acclaimed brain scientist with 15 years of experience in the fields of brain neuroplasticity and multisensory integration. In 2017 he founded www.ReNewSenses.com, where he is engaged in developing novel sensory substitution devices and AI algorithms to help the visually and hearing impaired.
How technology, life experiences and imagination shape brain specialization
(“The best technologies make the invisible visible.” -Beau Lotto). My lab studies the principles driving specializations in the human brain and their dependence on specific experiences during development (i.e., critical/sensitive periods) versus learning in the adult brain. I will cover the work done under our www.BrainVisionRehab ERC project, which focuses on studying Nature vs. Nurture factors in shaping category selectivity in the human brain. A key part of the project involves the use of sensory substitution devices (SSDs). I will focus on work with the EyeMusic algorithm developed in my lab, which conveys visual input to the blind using music and sound. In the second part of the talk I will cover a speech-to-touch sensory substitution approach which improves the performance of hearing-impaired listeners in noisy environments. From a basic-science perspective, the most intriguing results came from studying congenitally blind individuals without any visual experience who used SSDs to interpret a live visual feed arriving from a video camera. Specifically, I will discuss work aiming at unraveling the properties driving sensory brain organization and at uncovering the extent to which specific unisensory experiences during critical periods are essential for the development of the natural sensory specializations. Our work focused on two fundamental discoveries: 1) using the congenitally blind adult brain as a working model of a brain developing without any visual experience, we documented that essentially most if not all higher-order ‘visual’ cortices can maintain their anatomically consistent category selectivity (e.g., for body shapes, letters, numbers and even faces) even if the input is provided by an atypical sensory modality learned in adulthood; 2) we also found that such task-specific, sensory-independent specializations can emerge after as little as a few hours of training.
Our work strongly encourages a paradigm shift in the conceptualization of our sensory brain by suggesting that visual experience during critical periods is not necessary to develop anatomically consistent specializations in higher-order ‘visual’ or ‘auditory’ regions. This also has implications for rehabilitation, suggesting that multisensory rather than unisensory training might be more effective. I will also discuss initial results from our new ERC ExperieSENSE project, which focuses on studying Nature vs. Nurture factors in shaping topographical maps in the brain. In this project we focus on transmitting invisible topographical information to individuals with sensory deprivation, and also augmented topographical information to the normally sighted, by using similar training and SSD protocols coupled with input from ‘invisible’ sensors (such as infrared or ultrasound images), testing whether novel topographical representations can emerge in the adult brain for input that was never experienced during development (or evolution).
(See also Amedi et al. Task Selectivity as a Comprehensive Principle for Brain Organization. Trends in Cognitive Sciences 2017).
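The core image-to-sound mapping behind visual-to-auditory SSDs can be sketched in a few lines. This is a simplified illustration of the general principle (columns scanned left to right as time, vertical position mapped to pitch), not the actual EyeMusic algorithm, which additionally uses a musical pentatonic scale and timbre-coded color; all parameters here are illustrative.

```python
import numpy as np

# A 5x8 binary "image" of the letter T
img = np.array([
    [1, 1, 1, 1, 1, 1, 1, 1],
    [0, 0, 0, 1, 1, 0, 0, 0],
    [0, 0, 0, 1, 1, 0, 0, 0],
    [0, 0, 0, 1, 1, 0, 0, 0],
    [0, 0, 0, 1, 1, 0, 0, 0],
])

n_rows, n_cols = img.shape
# Vertical position -> pitch: top row highest, log-spaced like a musical scale
freqs = 440.0 * 2 ** (np.arange(n_rows)[::-1] / n_rows)

def image_to_soundscape(image, col_dur=0.1, sr=8000):
    """Scan columns left to right; each lit pixel adds a tone at its row's pitch."""
    t = np.arange(int(col_dur * sr)) / sr
    cols = []
    for c in range(image.shape[1]):
        col = np.zeros_like(t)
        for r in range(image.shape[0]):
            if image[r, c]:
                col += np.sin(2 * np.pi * freqs[r] * t)
        cols.append(col)
    return np.concatenate(cols)

# One audio vector encoding the whole image: 8 columns x 0.1 s each
wave = image_to_soundscape(img)
```

A listener trained on this mapping hears the sustained high tone of the T's top bar followed by the mid-pitch pair of its stem, and can in principle recover the shape from sound alone, which is the sense in which such devices deliver 'visual' categories through an atypical modality.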
Purdue University, USA
Michael G. Heinz is a Professor at Purdue University, with a joint appointment in Speech, Language and Hearing Sciences and Biomedical Engineering. He received an Sc.B. degree in Electrical Engineering from Brown University in 1992. He then completed a Masters in Electrical and Computer Engineering at Johns Hopkins University in 1994. In 2000, he received a Ph.D. from the MIT Division of Health Sciences and Technology in the area of Speech and Hearing Sciences. His post-doctoral work was in Biomedical Engineering at the Johns Hopkins University School of Medicine. In 2005, he joined the faculty at Purdue as an Assistant Professor, where his NIH-funded lab has been investigating the relation between neurophysiological and perceptual responses to sound with normal and impaired hearing through the coordinated use of neurophysiology, computational modeling, and psychoacoustics. In 2010, he was elected a Fellow of the Acoustical Society of America (ASA), and served as Chair of the ASA Technical Committee on Psychological and Physiological Acoustics from 2011-2014. He currently serves as the Co-Director of an NIH-funded (T32) Interdisciplinary Training Program in Auditory Neuroscience. He also serves as an Associate Editor for the Journal of the Association for Research in Otolaryngology (JARO).
Physiological and behavioral assays of cochlear synaptopathy in chinchillas
Moderate-level noise exposure can eliminate cochlear synapses without permanently damaging hair cells or elevating auditory thresholds in animals. Cochlear synaptopathy has been hypothesized to contribute to human perceptual difficulties in noise that can be observed even with normal audiograms. However, this hypothesis is difficult to test because 1) ethical limits preclude measuring human synaptopathy directly, and 2) synaptopathy has been most completely characterized in rodent models for which behavioral measures at speech frequencies are challenging. We recently established a relevant mammalian behavioral model by showing that chinchillas have corresponding neural and behavioral amplitude-modulation (AM) detection thresholds in line with human thresholds. Furthermore, immunofluorescence histology confirmed that synaptopathy occurs in chinchillas across a broad frequency range, including speech frequencies, following a lower-frequency noise exposure that avoids permanent changes in ABR thresholds and DPOAE amplitudes. Auditory-nerve-fiber responses showed that low-SR fibers were reduced in percentage (but not eliminated) following noise exposure, as in guinea pigs. Non-invasive wideband middle-ear muscle-reflex (MEMR) assays in awake chinchillas showed large and consistent reductions in suprathreshold amplitudes following noise exposure, whereas suprathreshold ABR wave-1 amplitude reductions were less consistent. The relative diagnostic strengths of MEMR and ABR assays were consistent with parallel studies of noise-exposed and middle-aged humans. Behavioral assays of tonal-carrier AM detection in chinchillas before and after noise exposure found no significant performance degradation, suggesting that more complex stimuli posing a greater challenge to population neural coding may be required. These anatomical, physiological, and behavioral data establish a valuable animal model for linking the physiological and perceptual effects of hearing loss.
Funding: NIH R01DC009838 (Heinz) and R01DC015989 (Bharadwaj).
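For reference, a tonal-carrier AM stimulus of the kind used in such detection tasks can be generated as follows. This is an illustrative sketch only; the carrier frequency, modulation rate, and depth below are assumptions for the example, not the parameters used in the chinchilla study.

```python
import numpy as np

def am_tone(fc=4000.0, fm=64.0, depth_db=-10.0, dur=0.5, sr=44100):
    """Sinusoidally amplitude-modulated tone carrier.

    depth_db = 20*log10(m), where m (0 < m <= 1) is the modulation depth;
    AM-detection thresholds are commonly reported on this dB scale.
    """
    m = 10 ** (depth_db / 20.0)
    t = np.arange(int(dur * sr)) / sr
    envelope = 1.0 + m * np.sin(2 * np.pi * fm * t)
    return envelope * np.sin(2 * np.pi * fc * t)

# depth_db = -10 gives m ~ 0.32: a clearly audible modulation for
# normal-hearing listeners; thresholds lie at much smaller depths.
x = am_tone(depth_db=-10.0)
```

In a detection task the listener (or animal) distinguishes this stimulus from the unmodulated carrier (`m = 0`), and the threshold is the smallest `depth_db` supporting reliable discrimination.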
Jean-Luc Puel started his scientific career in Montpellier in 1983 in the Inserm laboratory U254 (Laboratoire Neurobiologie de l’Audition) directed by Professor Rémy Pujol. In 1986, he defended his PhD thesis, “Frequency selectivity in rats during development and after ototoxic drug administration,” and joined Professor Richard P. Bobbin’s laboratory (Department of Otorhinolaryngology, Louisiana State University, New Orleans) as a post-doctoral fellow. During his postdoctoral training, he developed several research programs on the pharmacology of the cochlea. In 1989, he returned to France and was appointed as a researcher by the CNRS to develop pharmacological therapies of the inner ear. In 1998, Jean-Luc Puel became Director of Research at the CNRS. Later on, he obtained a position as Professor of Neuroscience at the University of Montpellier 1 and became director of the Audiology Research Center in 2001. During this period, he actively participated in the creation of the Institute of Neurosciences of Montpellier, in which he managed the hearing team. In 2011, he became Director of this Institute (INM-Inserm U1051). Jean-Luc Puel’s research interests are focused on the normal and pathological functioning of the inner ear. To this end, he develops i) animal models that recapitulate human auditory deficits, ii) new diagnostic tools for screening auditory disorders, and iii) rescue strategies spanning basic research to translational studies.
Sound coding in the auditory nerve: from research to clinical diagnostics
Auditory nerve fibers (ANFs) convey acoustic information from the sensory cells to the brainstem using an elaborate neural code based on both spike timing and rate. As the stimulus tone frequency increases, time coding fades and ultimately ceases, so that high-frequency tone encoding relies mostly on the spike discharge rate. Here, we recapitulate our recent single-unit data from the gerbil auditory nerve to highlight the most relevant mode of coding (spike timing versus spike rate) for tones in noise. We report that high-spontaneous-rate (SR) fibers driven by low-frequency tones in noise are able to phase lock ~30 dB below the level that evokes a significant elevation of the discharge rate. For high-frequency tones, the low-threshold/high-SR fibers reach their maximum discharge rate in noise and do not respond to tones, whereas medium- and low-SR fibers are still able to respond to tones, making them more resistant to background noise. Based on these findings, we first discuss the ecological function of the ANF distribution according to spontaneous discharge rate. We furthermore point out the poor synchronization of the low-SR ANFs, which accounts for the discrepancy between ANF number and the amplitude of the compound action potential of the auditory nerve. Finally, we propose a new diagnostic tool to assess low-SR fibers, one that does not rely on the onset response of the ANFs.
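The phase locking discussed above is conventionally quantified by vector strength (Goldberg and Brown, 1969). The sketch below computes it for two synthetic spike trains, one locked to a tone's cycle and one firing at the same mean rate but at random times; the tone frequency, jitter, and spike counts are illustrative assumptions, not data from the study.

```python
import numpy as np

rng = np.random.default_rng(1)

def vector_strength(spike_times, freq):
    # Length of the mean phase vector across spikes:
    # 1 = every spike at the same stimulus phase, ~0 = no phase locking.
    phases = 2 * np.pi * freq * spike_times
    return np.hypot(np.cos(phases).mean(), np.sin(phases).mean())

f = 500.0  # low-frequency tone, where timing cues are available
n = 2000
# Phase-locked fiber: one spike near each cycle peak, with temporal jitter
locked = np.arange(n) / f + rng.normal(0.0, 0.1 / f, n)
# Rate-coding fiber: same mean rate, spikes at random times
random_spikes = rng.uniform(0.0, n / f, n)

vs_locked = vector_strength(locked, f)         # high: strong phase locking
vs_random = vector_strength(random_spikes, f)  # near zero: rate code only
```

Because the jitter in this model is fixed in absolute time, raising `freq` shrinks the stimulus period relative to the jitter and vector strength collapses toward zero, mirroring the fade of time coding at high frequencies that motivates the rate-based account above.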