“From Cochlea to Cortex”
University of Tübingen, Germany
Dr. Marlies Knipper is the Head of Molecular Physiology of Hearing at the University of Tübingen, in the Department of Otorhinolaryngology, Head and Neck Surgery. Her areas of specialization are research on causes of congenital deafness; clarifying the molecular relationship between deafness and hearing impairment in newborns resulting from transient epigenetic disorders during pregnancy; research on tinnitus, age-induced and noise-induced hearing loss, and neuropathies; and the relation of hearing and cognition. Dr. Knipper’s primary interest currently lies in the creation of an infrastructure platform for the more efficient use of cross-system (translational) research on the different senses.
Identification of functional biomarkers of tinnitus and tinnitus/hyperacusis in patients
Tinnitus is a symptomatic malfunction of our hearing system in which phantom sounds are perceived without acoustic stimulation.
In recent years, we have developed a fingerprint for tinnitus, and more recently for hyperacusis, using a combination of behavioral animal models of tinnitus/hyperacusis with electrophysiological and molecular approaches in the peripheral and central auditory system. The characteristic features that distinguish equally hearing-impaired animals with and without tinnitus or hyperacusis have been described (Möhrle et al., 2019; Hofmeier et al., 2018; Knipper et al., 2013; Rüttiger et al., 2013; Singer et al., 2013). We aimed to test this knowledge in patients with tinnitus alone and in patients with tinnitus co-occurring with hyperacusis.
Here we present a clinical pilot study in hearing-impaired subjects with and without tinnitus, in comparison to tinnitus with co-occurring hyperacusis. We use audiometric measurements, the analysis of body fluids, and functional magnetic resonance imaging (fMRI), analyzing evoked BOLD responses and resting-state functional connectivity (rs-fcMRI).
The results in defined patient groups are discussed in the context of previous findings gained in animals.
This work was supported by the Deutsche Forschungsgemeinschaft: DFG KN 316/10-1; RU 713/3-2 (FOR 2060); RU 316/12-1 and KN 316/12-1 (SPP 1608); KN 316/13-1.
University of Western Ontario, Canada
Associate Professor, Dept. Anatomy & Cell Biology, Schulich School of Medicine & Dentistry, University of Western Ontario, Canada.
Over the past 10 years, my research has focused on basic questions investigating how the cortex integrates information from more than one sense (e.g., vision and hearing), as well as on clinically-relevant questions as to how the cortex adapts to hearing loss and its perceptual implications. In addressing these research themes, I have used numerous animal models (mice, rats, ferrets and cats) and a combination of techniques, including electrophysiological recordings from cortical neurons, as well as a variety of behavioural paradigms, ranging from reflexive tests of sensory-motor gating to perceptual judgment tasks requiring executive function. With respect to the neuroplasticity induced by hearing loss, my lab has taken a multi-faceted approach that ranges from in vitro investigations of the sensory cells in the inner ear, all the way up to studying cortical processing at the level of single neurons, local microcircuits and sensory perception. It remains a long-term goal of my research program to reveal the brain circuits and cellular mechanisms that contribute to the perceptual consequences commonly associated with hearing loss-induced brain plasticity.
Crossmodal Plasticity in Auditory, Visual and Multisensory Cortical Areas Following Noise-Induced Hearing Loss
Following hearing loss, crossmodal plasticity occurs whereby there is an increased responsiveness of neurons in the deprived auditory system to the remaining, intact senses (e.g., vision). Using electrophysiological recordings in noise-exposed rats, our recent studies have revealed that crossmodal plasticity is not restricted to the core auditory cortex; higher-order auditory regions as well as visual and audiovisual cortices show differential effects following noise-induced hearing loss. Unexpectedly, the cortical area showing the greatest relative degree of multisensory convergence post-noise exposure transitioned away from the normal audiovisual area toward a neighboring, predominantly auditory area. Thus, our collective results suggest that crossmodal plasticity induced by adult-onset hearing impairment manifests in higher-order cortical areas as a transition in the functional border of the audiovisual cortex. Our ongoing studies have begun to reveal the implications of this crossmodal plasticity on the rats’ ability to perceive the precise timing of audiovisual stimuli using novel behavioral tasks that are consistent with studies of perceptual judgement in humans.
Sunnybrook Research Institute, University of Toronto, Canada
Andrew Dimitrijevic is a scientist at the Sunnybrook Health Sciences Centre, Department of Otolaryngology, Head and Neck Surgery, Sunnybrook Research Institute. He is also faculty at the University of Toronto, Departments of Otolaryngology, Head and Neck Surgery, Institute of Medical Sciences, Program in Neuroscience.
Dr. Dimitrijevic completed his PhD at the University of Toronto under the supervision of Terry Picton. He went on to postdoctoral positions at the University of British Columbia under the supervision of David Stapells and University of California, Irvine under the supervision of Arnie Starr. Dr. Dimitrijevic was faculty at Cincinnati Children’s Hospital Medical Center before coming to Sunnybrook.
Dr. Dimitrijevic uses high density EEG recordings to understand sensory and cognitive aspects of hearing in both normal hearing and hearing impaired populations. Web site: http://www.cibrainlab.com/
University of Calgary, Canada
Born 1942. M.Sc. in physics (1967), Ph.D. in biophysics (1972). Research Associate, Department of Otorhinolaryngology at Leiden University in the Netherlands (1972-1978), interrupted from 1976-1977 as research fellow at the House Ear Institute in Los Angeles, California. 1978-1986 Professor in Biophysics, Nijmegen University, Netherlands. 1986-2013 Professor in Psychology, Physiology and Pharmacology at the University of Calgary, Alberta, Canada. Alberta Heritage Foundation for Medical Research Scholar and Scientist (1986-2013). 1997-2013 Campbell McLaurin Chair for Hearing Deficiencies. 2013-present Emeritus Professor at the University of Calgary.
Published >220 peer-reviewed articles, ~100 book chapters, 6 single-authored books, and 4 edited books. Received >20,000 citations (Google Scholar); h-index = 81.
- Elected corresponding member of the Royal Netherlands Academy of Arts and Sciences (1989).
- Elected Fellow of the Acoustical Society of America (1998).
- Elected Fellow of the Royal Society of Canada (2014).
- Editor-in-Chief of “Hearing Research” (2005-2010)
Hearing Loss and the Brain
Hearing loss arises in the ear, but hearing problems originate in the brain: the loss of auditory neural activity entering the central auditory system alters its functioning. Specifically, hearing loss causes tonotopic map changes in thalamus and cortex, but not at more peripheral subcortical structures, likely as a result of changes in the balance between excitation and inhibition, which may also cause central gain changes. Hearing loss is also known to increase spontaneous firing rates and neural synchrony in cochlear nucleus, midbrain, thalamus and auditory cortex, but also in non-classical auditory-sensitive areas. Severe hearing loss, for instance at frequencies > 8 kHz, results in atrophy of part of auditory cortex, and also in prefrontal cortical areas related to executive functions. In addition to these changes, the ‘auditory connectome’ may be changed either directly through deafferentation, or through increasing demands on cognitive processes such as attention and memory to make sense of the deteriorated acoustic signals resulting from hearing loss. These plastic changes can also result in tinnitus and hyperacusis, and potentially in advancing the onset of mild cognitive impairment.
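The notion of ‘central gain’ can be illustrated with a toy homeostatic-scaling calculation. This is a sketch of the general idea only, not part of the abstract; the function name, target rate, and gain cap are hypothetical values chosen for illustration:

```python
def central_gain(input_rate, target_rate=100.0, max_gain=4.0):
    """Multiplicative gain a central neuron would need to restore its
    target mean firing rate after a drop in afferent drive, capped at a
    (hypothetical) biophysically plausible maximum."""
    if input_rate <= 0:
        return max_gain
    return min(target_rate / input_rate, max_gain)

# A 50% loss of afferent drive calls for a doubling of gain:
print(central_gain(100.0))  # 1.0 (healthy input, no compensation)
print(central_gain(50.0))   # 2.0 (gain doubles)
print(central_gain(10.0))   # 4.0 (capped; compensation incomplete)
```

On this account, modest deafferentation is fully compensated, while severe loss hits the gain ceiling and leaves a residual deficit, at the cost of amplified spontaneous activity.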
Mass Eye & Ear, USA
Charles Liberman, Ph.D. is the Schuknecht Professor of Otology and Laryngology at the Harvard Medical School and the Director of the Eaton-Peabody Laboratories at the Massachusetts Eye and Ear Infirmary. Dr. Liberman received his B.A. in Biology from Harvard College in 1972 and his Ph.D. in Physiology from Harvard Medical School in 1976. He has been on the faculty at Harvard since 1979, has published over 180 papers on a variety of topics in auditory neuroscience, and is the recipient of the Award of Merit from the Association for Research in Otolaryngology, the Carhart Award from the American Auditory Society, and the Bekesy Silver Medal from the Acoustical Society of America. His research interests include 1) coding of acoustic stimuli as neural responses in the auditory periphery, 2) efferent feedback control of the auditory periphery, 3) mechanisms underlying noise-induced and age-related hearing loss, 4) the signaling pathways mediating nerve survival in the inner ear and 5) application of cell- and drug-based therapies to the repair of a damaged inner ear.
University of Manchester, United Kingdom
Chris Plack was educated at the University of Cambridge, where he obtained a BA in 1987 and a PhD in 1990. He is currently Ellis Llwyd Jones Professor of Audiology at the University of Manchester, UK, and Professor of Auditory Neuroscience at Lancaster University, UK, where he studies the physiology and psychology of normal and impaired human hearing. In 2003, he was elected a Fellow of the Acoustical Society of America.
Cochlear Synaptopathy in Humans
The audiogram is easy to obtain, but is relatively insensitive to auditory damage, particularly damage to the auditory nervous system. Results from rodent and primate models suggest that noise exposure can cause a dramatic loss of connections between inner hair cells and auditory nerve fibers without affecting threshold sensitivity. This disorder has been called cochlear synaptopathy or “hidden hearing loss,” because it is not thought to be detectable using pure tone audiometry. If confirmed in humans, this disorder could have major implications for the prevention, diagnosis, and management of noise-induced hearing loss. However, the evidence for cochlear synaptopathy due to noise exposure in humans is mixed, and there is little evidence that synaptopathy causes listening difficulties in young adults with normal audiograms. It is possible that humans are less vulnerable to synaptopathy than animals, and significant noise-induced synaptopathy may only occur in combination with an audiometric loss. The effects of cochlear synaptopathy may be more important in older listeners. Aging is associated with synaptopathy in rodent models, and there is now good histological evidence for substantial age-related synaptopathy in humans. The perceptual consequences of this pathology are unclear, but synaptopathy could contribute to age-related declines in listening ability, particularly speech perception in noise, which is poorly explained by the clinical audiogram.
Dalhousie University, Canada
1978.3-1981.3 Nanjing Medical College, B.S.
1986.9-1989.7 Southeast University, Nanjing, China, Auditory Physiology, M.Sc.
1996.8-1998.8 State University of New York at Buffalo, Audiology MA
1996.8-2000.9 State University of New York at Buffalo, Hearing Science, Ph.D.
- Cochlear protection by genetic manipulation.
- The working mechanisms of ribbon synapses.
- Noise induced synaptic damage in cochlea and the impacts of this damage on hearing functions.
- Impact of hearing loss on cognitive functions.
Noise-induced synaptopathy resulting from repeated noise exposure at different sound levels
Noise-induced synaptopathy (NIS) has become a serious concern as a possible component of noise-induced hearing loss (NIHL), since massive synapse loss was reported following a brief noise exposure that did not cause a permanent threshold shift (PTS). However, the impact of NIS on hearing may have been exaggerated. Coding-in-noise deficits (CIND) have been predicted to be the major consequence of the synaptopathy, but to date there is no robust evidence for CIND after such noise exposures, and noise-induced synapse loss is likely reversible, at least in part. In recent studies, we failed to find CIND after such a noise exposure by testing cochlear responses to amplitude modulation. Further evidence is reported here from two experiments. In Exp. 1, both C57 mice and guinea pigs were given two brief noise exposures. A much smaller synapse loss was seen immediately (1 day) after the second noise exposure, and the second exposure did not significantly increase permanent synapse loss when tested 4 weeks later. In Exp. 2, guinea pigs were exposed to noise at a lower level (91 dB SPL), repeated to reach an energy equal to that of the brief exposure that caused significant synapse loss in this species (106 dB SPL, 2 hrs). No synapse loss was seen after the repeated low-level noise. This suggests that the equal-energy hypothesis for NIHL is not valid for NIS, which is unlikely to occur after noise exposures at the upper limit of current safety standards.
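The equal-energy comparison in Exp. 2 can be made concrete with a small calculation. This is an illustrative sketch only (the function is hypothetical), assuming the standard equal-energy rule that acoustic energy scales as 10^(L/10) × duration:

```python
def equal_energy_duration(ref_level_db, ref_hours, new_level_db):
    """Duration at new_level_db delivering the same acoustic energy as
    ref_hours at ref_level_db, under the equal-energy rule
    (energy ~ 10**(L/10) * duration)."""
    return ref_hours * 10 ** ((ref_level_db - new_level_db) / 10)

# Matching 2 h at 106 dB SPL requires roughly 63 h at 91 dB SPL:
print(equal_energy_duration(106, 2, 91))
# The familiar 3-dB exchange rate falls out of the same rule:
print(equal_energy_duration(94, 1, 91))  # ~2 h at 91 dB = 1 h at 94 dB
```

A 15 dB drop in level thus demands an ~32-fold increase in exposure duration, which is why the repeated 91 dB exposures in Exp. 2 can be energy-matched to the brief 106 dB exposure.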
University of Western Australia, Australia
Helmy Mulders is an auditory neuroscientist with a particular interest in centrifugal control and plasticity. For the last 9 years, her focus has been the study of the neural substrate of tinnitus in an animal model, using a variety of techniques such as single-neuron electrophysiology, behavioural studies, immunocytochemistry and RT-PCR. She works in the Auditory Laboratory at the University of Western Australia (UWA) and has published >55 peer-reviewed journal articles and book chapters. She is a full-time academic, coordinating and teaching in the undergraduate and postgraduate Neuroscience programs and the Master of Clinical Audiology at UWA.
Central plasticity after hearing loss – therapeutic implications for tinnitus
Tinnitus is a common phantom auditory perception that can severely affect quality of life. The precise neural mechanisms remain as yet unknown which is likely to be a contributing factor to the fact that there is no cure. Tinnitus is strongly associated with cochlear trauma and hearing loss, which evokes plasticity in the central auditory system, resulting in altered levels and patterns of spontaneous activity. It has been suggested that tinnitus is generated from these alterations in neural activity in combination with changes in non-auditory regions such as frontostriatal circuitry. This latter circuitry may be involved in sensory gating of non-salient information at the level of the thalamus. Therefore, a breakdown of this mechanism could potentially cause altered neural signals in the auditory system to reach the cortex, leading to perception. In our laboratory, we use rat and guinea pig models of cochlear trauma and tinnitus to investigate the relationship between frontostriatal circuitry and the auditory system and the mechanisms of sensory gating. Electrophysiological recordings in auditory thalamus in animals with and without cochlear trauma and/or tinnitus are combined with stimulation of elements of the frontostriatal circuitry. Stimulation is achieved invasively by focal electrodes or non-invasively by repetitive transcranial magnetic stimulation. Our results demonstrate that activation of frontostriatal circuitry has a functional effect on activity in auditory thalamus and that this effect changes after cochlear trauma. Our data support the notion that sensory gating is involved in tinnitus generation which has implications for potential therapeutic targets.
Sunnybrook Research Institute, University of Toronto, Canada
Dr. Dabdoub is the research director of the Hearing Regeneration Initiative at Sunnybrook Research Institute and an associate professor in the Department of Otolaryngology Head & Neck Surgery and Department of Laboratory Medicine at the University of Toronto, Canada.
Dr. Dabdoub’s research program focuses on discovering and elucidating the molecular signaling pathways involved in the development of the mammalian inner ear. The goal of his laboratory is to connect developmental biology to inner ear diseases and ultimately to regenerative medicine for the amelioration of hearing loss through cellular regeneration of sensory hair cells and primary auditory neurons.
Connecting the Cochlea to the Brain: Development and Regeneration of the Primary Auditory Neurons
Primary auditory neurons, also known as spiral ganglion neurons, are responsible for transmitting sound information from cochlear sensory hair cells in the inner ear to cochlear nucleus neurons in the brainstem. Auditory neurons develop from neuroblasts that delaminate from the proneurosensory domain of the otocyst and continue to mature until the onset of hearing. These neurons degenerate as a result of noise exposure and aging, resulting in permanent hearing impairment; they are therefore a primary target for regeneration aimed at ameliorating hearing loss. Glial cells surrounding auditory neurons originate from neural crest cells and migrate to the spiral ganglion during development. These glial cells survive after neuron degeneration and loss, making them ideal candidates for gene therapy and cellular reprogramming.
Using combinatorial coding, we have successfully converted glial cells into induced neurons in vitro and assessed the induced neurons using morphology, immunohistochemistry, their ability to innervate peripheral and central targets, as well as transcriptomic analyses comparing their properties to endogenous auditory neurons and control cells. Furthermore, we have developed a preclinical mouse model of neuropathy with the aim of converting glial cells in vivo. Neuron replacement therapy would have a significant impact on research and advancements in cochlear implants as the generation of even a small number of auditory neurons would result in improvements in hearing.
McMaster University, Canada
Ian C. Bruce, Ph.D. is a Professor and Associate Chair of Graduate Studies in Electrical & Computer Engineering at McMaster University in Hamilton, Ontario, Canada. He is engaged in interdisciplinary research and academic activities in electrical & biomedical engineering, neuroscience, psychology, and music cognition. His research is focused on applying cutting-edge experimental and computational methods to better understand, diagnose, and treat hearing disorders. Research applications pursued by his lab include hearing aids, cochlear implants, diagnosis & treatment of tinnitus, speech & music perception, digital speech processing, and genetic hearing loss.
Dr. Bruce received the B.E. (electrical and electronic) degree from The University of Melbourne, Australia, in 1991, and the Ph.D. degree from the Department of Otolaryngology, The University of Melbourne in 1998. From 1993 to 1994, he was a Research and Teaching Assistant at the Department of Bioelectricity and Magnetism, Vienna University of Technology, Vienna, Austria. He was a Postdoctoral Research Fellow in the Department of Biomedical Engineering at Johns Hopkins University, Baltimore, MD, USA, from 1998 to 2001, before moving to McMaster in 2002. Dr. Bruce is an Associate Editor of the Journal of the Acoustical Society of America, a Fellow of the Acoustical Society of America, a Member of the Association for Research in Otolaryngology, and a Registered Professional Engineer in Ontario.
Computational modeling of diverse forms of cochlear pathology
Computational models of auditory processing can be useful tools in understanding the normal function of the ear and the auditory pathways of the brain. In addition, computational models that can incorporate pathology may be helpful in understanding the effects of hearing impairment and in the development of improved devices for those with hearing loss, such as hearing aids and cochlear implants. However, incorporating pathology into physiological models of auditory processing faces some difficulties including: i) incomplete accuracy in even explaining normal function, ii) limited physiological detail regarding the site of the pathology, and/or iii) uncertainty in explaining a human subject’s experimental data due to a lack of definite knowledge about the pathology that they have.
In this talk, I will review efforts by a number of research groups, including my own, to develop, validate and apply models of a diverse range of cochlear pathologies. Methodologies for modeling outer hair cell impairment, inner hair cell impairment, cochlear synaptopathy, and pathologies caused by genetic mutations will be explored. Approaches to overcoming uncertainties about patterns of pathologies in human subjects will also be discussed.
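One common modeling choice in this area is to split an audiometric threshold shift into outer hair cell (OHC) and inner hair cell (IHC) components and convert each into a scaling factor on the model's gain. The sketch below is a simplified, hypothetical version of that idea, not a published fit; the 2/3 attribution, the OHC cap, and the dB-to-linear mapping are placeholder assumptions:

```python
def impairment_factors(threshold_shift_db, ohc_fraction=2/3,
                       max_ohc_shift_db=35.0):
    """Split a threshold shift (dB) into OHC and IHC components and
    return (c_ohc, c_ihc) scaling factors in [0, 1], where 1.0 means
    fully functional. All constants are illustrative assumptions."""
    # Attribute a fixed fraction of the shift to OHCs, up to a cap
    # (OHC amplification can only account for so much loss).
    ohc_shift = min(threshold_shift_db * ohc_fraction, max_ohc_shift_db)
    ihc_shift = threshold_shift_db - ohc_shift
    # Crude dB -> linear mapping for each component.
    c_ohc = 10 ** (-ohc_shift / 20)
    c_ihc = 10 ** (-ihc_shift / 20)
    return c_ohc, c_ihc

print(impairment_factors(0))    # (1.0, 1.0): no impairment
print(impairment_factors(30))   # mostly OHC loss, mild IHC loss
print(impairment_factors(90))   # OHC component saturates at the cap
```

This kind of decomposition is exactly where the uncertainties mentioned above bite: the same audiogram is consistent with many OHC/IHC splits, so the fraction assigned to each component must be constrained by other measures or treated as a free parameter.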
Harvard Medical School, USA
Daniel is the son of two fashion designers who raised him in Connecticut surrounded by other designers and creative types. Daniel studied English literature and Psychology at a liberal arts college in Virginia (Univ. Richmond) and didn’t discover science until the end of his university days, while studying abroad in Argentina. Daniel moved west for his graduate education, received his PhD in Biological Sciences at the University of California Irvine, and carried out postdoctoral research on cortical plasticity with Mike Merzenich at the University of California, San Francisco. After a few years on the faculty at Vanderbilt, Daniel moved his lab to Mass. Eye and Ear in 2010 and joined the Dept. of Otolaryngology at Harvard Medical School. Daniel is now an Associate Professor at Harvard Medical School, the Director of the Lauer Tinnitus Research Center and holds the Amelia Peabody Research Chair in Otolaryngology at Massachusetts Eye and Ear.
Neural circuit logic for modulation and gain control in the auditory cortex
Sensory brain plasticity exhibits a fundamental duality, a yin and yang, in that it is both a source and possible solution for various types of perceptual disorders. When signaling between the ear and the brain is disrupted, the balance of excitation and inhibition tips toward hyperexcitability throughout the central auditory neuroaxis, increasing the ‘central gain’ on afferent signals so as to partially compensate for a diminished input from the auditory periphery. By performing chronic 2-photon imaging from ensembles of genetically identified cortical cell types before and after auditory nerve damage, we can track the imbalance between excitatory and inhibitory networks following cochlear neuropathy and relate these changes to perceptual difficulties encoding signals in noise. These findings underscore the ‘yin’, the dark side of brain plasticity, wherein the transcriptional, physiological and neurochemical changes that compensate for the loss or degradation of peripheral input can incur debilitating perceptual costs. We are also committed to understanding the ‘yang’ of brain plasticity, how the remarkable malleability of the adult brain can be harnessed and directed towards an adaptive – or even therapeutic – endpoint. Our ongoing research suggests that a cluster of cholinergic cells deep in the basal forebrain may hold the key to adjusting the volume knob in hyperactive cortical circuits. We use a combination of optogenetics, photometry and large-scale single unit recordings to tune into the dialog between the basal forebrain and key cell types in the auditory thalamus and auditory cortex of awake mice engaged in active listening tasks. In parallel, we use high-channel EEG, pupillometry and psychophysical approaches to reveal the hidden neural signatures of tinnitus and disordered speech perception in human subjects.
These efforts have culminated in the development of audiomotor training platforms that suggest new treatment avenues for various classes of perceptual disorders.
Harvard Medical School, USA
Sharon G. Kujawa, Ph.D. is an Associate Professor of Otolaryngology, Harvard Medical School. She is the Director of Audiology Research and a Senior Scientist in the Eaton-Peabody Laboratories, Massachusetts Eye and Ear Infirmary, Boston, MA. Work in the Kujawa laboratory seeks to clarify mechanisms and manifestations of common forms of acquired sensorineural hearing loss in humans, particularly those due to aging and exposure to noise and ototoxic drugs. A major focus of current work is in understanding how these etiologies cause loss of cochlear synapses, determining the functional consequences of that loss, and how the degeneration can be manipulated pharmacologically to reveal mechanisms and provide treatments.
Noise-Induced Cochlear Synaptopathy With and Without Sensory Cell Loss
Noise exposure is a primary cause of acquired sensorineural hearing loss affecting many millions, worldwide. After decades of focus on the sensory hair cell component of noise-induced hearing loss, animal studies have more recently begun to address peripheral neural consequences of such exposure. This work has identified the loss of inner hair cell synapses with cochlear afferent neurons as a common and early manifestation of noise damage, across all mammalian species evaluated thus far. Our early studies of noise-induced cochlear synaptopathy concentrated on exposures producing large but reversible threshold shifts without hair cell loss. This model provided a powerful approach to initial studies because it allowed a separation of the functional deficits due to synaptopathy from those due to hair cell loss, and because clues present in suprathreshold responses could be interpreted without an audibility confound. However, noise can produce temporary and/or permanent threshold elevations, with and without hair cell loss, depending on characteristics of the exposure and susceptibilities of the individual. Thus, although the synaptopathy can be hidden in a normal audiogram, the real challenge to diagnosis may be in mixed (neural plus sensory) pathology. Here, we consider cochlear structure and function after noise exposure with and without sensory cell loss.
University at Buffalo, USA
Richard Salvi, SUNY Distinguished Professor and Director of the Center for Hearing and Deafness at the University at Buffalo, is an auditory neuroscientist with a long-standing interest in noise-induced hearing loss, ototoxicity, aging, tinnitus, hyperacusis, functional brain imaging, neuroplasticity, cell death, regeneration and otoprotection.
Animal Models of Loudness Hyperacusis, Sound Avoidance Hyperacusis and Pain Hyperacusis
Hyperacusis is a debilitating condition in which everyday sounds are perceived as extremely loud, sometimes painful, resulting in sound avoidance. To test for loudness hyperacusis, we measured reaction time vs. intensity (RT-I) functions in rats. RTs decreased with increasing intensity. Sodium salicylate treatment caused RT-I functions to become steeper and RTs shorter than normal, behaviors indicative of loudness hyperacusis. High-frequency, noise-induced hearing loss induced loudness recruitment at high frequencies, but hyperacusis at the low frequencies. To test for sound-avoidance hyperacusis, rats were placed in an arena with a bright, open area and a dark enclosure equipped with a loudspeaker. Normal rats spent ~95% of the time in the dark, but when sound was introduced into the dark box, the rats shifted their preference to the bright area. After inducing a high-frequency hearing loss, sound stimulation became even more effective at driving the rats from the dark box into the open area. To determine if sound could modulate pain, we measured the time for a rat to remove its tail from hot water in the absence or presence of sound. Tail-withdrawal latency increased for intensities from 80 to 90 dB, indicating that moderate-intensity sounds increased thermal pain tolerance (i.e., hypoalgesia). However, withdrawal latencies decreased at higher intensities; at 120 dB, latencies were significantly shorter (i.e., decreased pain tolerance, hyperalgesia). These results provide evidence for multisensory integration between auditory and pain pathways. Prior opioid treatment caused sounds to enhance thermal pain. These behavioral paradigms provide new tools to study various forms of hyperacusis.
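A reaction-time-vs-intensity function of the kind described can be sketched with Piéron's law, in which RT falls as a power function of intensity above threshold. This is an illustrative model only, not the authors' analysis, and all parameter values are hypothetical:

```python
def reaction_time_ms(level_db, rt_floor=150.0, k=4000.0, beta=0.5,
                     threshold_db=10.0):
    """Piéron's law: RT = floor + k * (sensation level)^(-beta).
    All constants here are placeholder values for illustration."""
    sensation = max(level_db - threshold_db, 1e-6)  # dB above threshold
    return rt_floor + k * sensation ** (-beta)

# RTs shorten monotonically with increasing intensity:
for level in (20, 50, 90):
    print(level, round(reaction_time_ms(level)))
```

In this framing, loudness hyperacusis would appear as a downward-shifted, steeper RT-I curve (e.g., a smaller `rt_floor` or larger `beta`), which is the pattern the abstract reports after salicylate treatment.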
University at Buffalo, USA
Matthew Xu-Friedman grew up in La Jolla, California, and attended Yale University for a bachelor’s degree in Psychobiology. He received his PhD from Cornell University in the Section of Neurobiology and Behavior for work on the knollenorgan sensory system of electric fish with Carl Hopkins. He did a postdoc in the Department of Neurobiology at Harvard Medical School in synaptic physiology of the cerebellum with Wade Regehr. He is currently a Professor in the Department of Biological Sciences at the University at Buffalo, State University of New York, where his lab focuses on the physiology and function of auditory nerve synapses.
Karolinska Institute, Sweden
Dr. Canlon’s laboratory is working to understand the normal hearing process and causes of hearing deterioration as a step toward the prevention of hearing loss. In an effort to learn how hair cells and nerve fibers become damaged, Dr. Canlon’s group is conducting molecular experiments to identify key players in this process. Dr. Canlon has recently discovered that the cochlea contains a self-sustained circadian clock, which continues to tick in culture. The current research focus is to understand the molecular mechanisms through which the circadian clock regulates cell and organismal metabolism and the reciprocal feedback of metabolism on circadian oscillators in the inner ear. We anticipate that a better understanding of clock processes will lead to innovative therapeutics for a spectrum of auditory disorders.
Dr. Canlon has been head of the Experimental Audiology Section at the Karolinska Institute for the past 25 years and has had numerous major administrative duties at the Karolinska Institute. She is currently Editor-in-Chief of Hearing Research. She received her bachelor’s degree from Brooklyn College, City University of New York, and her Master’s at the University of Michigan. She then moved to Stockholm and obtained her Ph.D. at the Karolinska Institute. After a post-doc at the Institut Pasteur, Paris and CNRS-INSERM, Montpellier, she established her laboratory at the Karolinska Institute and became professor in 2001.
The clockwork of the cochlea
This lecture is based on our discovery that the peripheral auditory system, the cochlea, is regulated by a molecular circadian clock, which opened an exceptional opportunity for understanding previously unknown features of the auditory system. We have found, in the mouse, that the same noise exposure causes greater physiological and morphological consequences during nighttime than during daytime exposures. Consequently, a robust molecular circadian clock machinery, including the circadian genes Per1, Per2, Bmal1, and Rev-Erb, was identified in the cochlea and was found to regulate this differential sensitivity to day or night noise exposure. Using RNA-seq, we recently identified 7211 genes in the cochlea with circadian expression, a large proportion of which regulate cell signaling, hormone secretion, and inflammation. Nearly two-thirds of these genes show maximal expression at nighttime, a finding that can only be captured when performing analyses around the clock. Why is this important? A “broken” clock may enhance the risk of developing hearing loss, as has been shown for a wide variety of diseases including metabolic, cardiovascular, neoplastic and inflammatory disorders. However, before investigating the consequences of clock disruption on auditory function, a better understanding of the circadian components that characterize the auditory system is needed.
University of Michigan, USA
Rick Altschuler received his Ph.D. in Anatomy at the University of Minnesota in 1978 and then moved to the Lab of Neuro-Otolaryngology at NINCDS at the National Institutes of Health, where he studied neurotransmitters and receptors of the cochlea and cochlear nucleus. He joined the Kresge Hearing Research Institute at the University of Michigan in 1985, where he is now a Professor in the Departments of Otolaryngology and Cell and Developmental Biology. He also has an appointment at the VA Ann Arbor Healthcare System. He is currently studying noise-induced and age-related hearing loss and vestibular dysfunction, tinnitus, and mechanism-based therapeutic interventions for prevention and treatment.
Noise-induced cochlear hair cell loss, synaptopathy and tinnitus: Mechanisms and strategies for prevention and repair
Noise overstimulation can lead to temporary and/or permanent shifts in auditory brainstem response (ABR) measures, and can lead to changes in gap detection and/or behavioral responses indicative of tinnitus in animal models. Loss of hair cell function(s) or loss of hair cells is linked to temporary or permanent ABR threshold shifts, respectively. Loss of inner hair cell-auditory nerve connections (synaptopathy) can be associated with changes in suprathreshold ABR responses and loss of dynamic range. There are successful strategies for preventing noise-induced hair cell dysfunction and/or loss, including antioxidant treatment. Post-noise treatments can reduce further loss of hair cells that have not entered cell death cycles; however, there are not yet any successful treatments to replace lost hair cells. There are successful strategies to prevent noise-induced cochlear synaptopathy, such as the use of anti-excitotoxicity agents, as well as successful strategies to induce reconnection of lost synapses, including the use of neurotrophins. While noise-induced effects in the cochlea are considered inducing agents for the progression of events leading to tinnitus, the specific necessary or sufficient cochlear changes have yet to be firmly identified. Our studies are testing synaptopathy as an inducer of noise-induced tinnitus in the rat model (using broadband and small-arms-fire-like noise) and determining whether prevention or repair will influence the progression and reduce the incidence of tinnitus.
CNRS/University of Toulouse, France
Pascal Barone is the team leader of C3P (Crossmodal Compensation and Cortical Plasticity), a team dedicated to understanding mechanisms of cortical plasticity in normal subjects and deaf patients. My early work exploring the neuronal mechanisms of prenatal axogenesis clearly demonstrated the high specificity of cortical connectivity during early development, a result that has had a strong theoretical influence on understanding the development of sensory functions and the impact of early sensory deprivation on brain reorganization. Based on a close collaboration with the ENT department at the Purpan hospital in Toulouse, I presently conduct a multidisciplinary approach to better understand the neuronal mechanisms of brain plasticity in deafness, in animal models and in humans. Our work is based on a multidisciplinary approach at both the fundamental and the clinical level; it encompasses behavioral and brain imaging (PET) studies in patients as well as in normal-hearing subjects. Because the success of rehabilitation relies on functional plasticity in the auditory system, our work is aimed at understanding the reorganization of the cortical network involved in auditory processing that occurs during deafness and during the progressive recovery through cochlear implantation or with hearing aids. Our complementary projects aim at a better understanding of hearing restoration coupled with the evaluation of rehabilitation strategies.
Functional segregation in the auditory cortex: evidence from brain reorganization in unilateral deaf patients
In patients with unilateral hearing loss (UHLp), binaural processing is disrupted, and spatial localization of sound sources is impaired, as is the ability to understand speech in noisy environments. At the brain level, only a limited number of studies have explored the functional reorganisation that occurs in adults after unilateral deafness. We conducted an original study aimed at investigating in UHLp the relationships between the severity of unilateral hearing loss, the resulting deficit in binaural processing, and the extent of cortical reorganisation across the auditory areas.
We recruited 14 UHL patients (hearing loss 37-120 dB HL) and age-matched hearing controls. All subjects were evaluated for free-field sound localization abilities and speech-in-noise comprehension (French Matrix test). All subjects underwent an fMRI protocol to evaluate the activation pattern across auditory areas during a natural-sounds discrimination task. First, brain imaging analysis clearly demonstrated that in non-primary auditory areas (NPAC), UHL induces a shift toward ipsilateral aural dominance. Such reorganization, absent in the primary auditory cortex (PAC), is correlated with the severity of the hearing loss and with poorer spatial localization performance. Second, a regression analysis between brain activity and patients' performance clearly demonstrated a link between the sound localisation deficit and a functional alteration that specifically impacts the posterior auditory areas known to process spatial hearing. In contrast, the core of the auditory cortex appeared relatively preserved and maintained its normal involvement in processing non-spatial acoustic information.
Altogether, our study adds further evidence of a functional dissociation in the auditory system and shows that binaural deficits induced by UHL predominantly affect the dorsal auditory stream.
University of Michigan, USA
Professor Susan Shore has been working in the field of neuroanatomy and neurophysiology of the auditory system for more than two decades. Over the past decade, Susan’s work has focused on multisensory integration in the cochlear nucleus and its role in tinnitus after noise-induced cochlear damage. Her team’s studies of multisensory timing-dependent plasticity have resulted in a novel treatment for tinnitus in guinea pigs and humans, which is currently being evaluated in a second clinical trial.
Cochlear damage may be necessary but not sufficient to induce tinnitus
While hearing loss is associated with tinnitus, the relationship is not causal: people without audiometric hearing loss can develop tinnitus, and not all people with audiometric hearing loss develop tinnitus. Likewise, in animal models, noise exposures that produce only temporary threshold shifts result in behavioral evidence of tinnitus in only about half of the exposed animals. Following tinnitus induction using narrowband noise exposures, we have demonstrated that dorsal cochlear nucleus fusiform cells show BF-specific alterations in stimulus-timing-dependent plasticity, increased spontaneous synchronization, and increased spontaneous firing rates (SFR) in animals with behavioral evidence of tinnitus [1, 2]. Conversely, animals without plasticity-induced changes in fusiform cells, but with equivalent degrees of cochlear damage, did not show evidence of tinnitus. Similarly, bushy cells in the ventral cochlear nucleus showed BF-restricted increases in SFR and cross-unit synchrony in animals with behavioral evidence of tinnitus but not in those without such behavioral evidence, even though ABR thresholds and suprathreshold wave-1 amplitudes were equivalent, suggesting similar cochlear damage in the two groups [3]. Glutamatergic and cholinergic transmission were also altered in these animals in a tinnitus-specific manner [4, 5].
Conclusions: These results suggest that hearing loss, whether visible or ‘hidden’, is insufficient by itself to produce a tinnitus phenotype. Changes in cochlear output after noise exposure require accompanying plastic changes in recipient neurons in the CNS in order to result in the physiological and behavioral signatures of tinnitus.
[1] C. Wu, D.T. Martel, S.E. Shore, Increased Synchrony and Bursting of Dorsal Cochlear Nucleus Fusiform Cells Correlate with Tinnitus, J Neurosci, 36 (2016) 2068-2073.
[2] K.L. Marks, D.T. Martel, C. Wu, G.J. Basura, L.E. Roberts, K.C. Schvartz-Leyzac, S.E. Shore, Auditory-somatosensory bimodal stimulation desynchronizes brain circuitry to reduce tinnitus in guinea pigs and humans, Sci Transl Med, 10 (2018).
[3] D. Martel, S. Shore, Ventral cochlear nucleus bushy cells contribute to enhanced auditory brainstem response amplitudes in tinnitus, ARO, 2019.
[4] A.N. Heeringa, C. Wu, C. Chung, M. West, D. Martel, L. Liberman, M.C. Liberman, S.E. Shore, Glutamatergic Projections to the Cochlear Nucleus are Redistributed in Tinnitus, Neuroscience (2018), DOI 10.1016/j.neuroscience.2018.09.008.
[5] L. Zhang, C. Wu, D.T. Martel, M. West, M.A. Sutton, S.E. Shore, Remodeling of cholinergic input to the hippocampus after noise exposure and tinnitus induction in guinea pigs, Hippocampus (2018), DOI 10.1002/hipo.23058.
Benjamin D. Auerbach
University at Buffalo, USA
Dr. Benjamin D. Auerbach graduated from Cornell University with a Bachelor’s in Biological Sciences and received his Ph.D. in Neuroscience from the Massachusetts Institute of Technology. He is currently a Research Assistant Professor at the Center for Hearing and Deafness at the University at Buffalo. Dr. Auerbach’s research interests include auditory plasticity, hyperacusis, and autism spectrum disorders.
Comparing auditory circuit disruptions across diverse models of hyperacusis
Hyperacusis is a complex hearing disorder that encompasses a wide range of reactions to sound, including excessive loudness, increased aversion to or fear of sound, or even pain. While often associated with hearing loss and tinnitus, sound-tolerance disturbances are actually observed across a broad spectrum of neurological disorders. Thus, hyperacusis is diverse in both its etiology and its phenotypic expression, and it is imperative to consider this diversity when attempting to elucidate its physiological mechanisms. Here we will describe a series of recent studies utilizing novel behavioral paradigms aimed at distinguishing between the diverse ways in which sound perception may be altered in hyperacusis. We have combined these novel assays with acute and chronic in vivo electrophysiological recordings to examine the neurophysiological correlates of hyperacusis using three distinct models: salicylate-induced ototoxicity; noise-induced hearing loss; and an Fmr1 KO rat model of Fragile X syndrome, a leading inherited form of autism that consistently presents with auditory hypersensitivity. This multifaceted approach allows us to determine whether different forms of hyperacusis are mechanistically distinct disorders with overlapping presentation, or whether they share a common, convergent pathophysiological mechanism.
Enrique A. Lopez-Poveda
University of Salamanca, Spain
Enrique A. Lopez-Poveda, Ph.D. (born 1970) is Associate Professor of Otorhinolaryngology at the University of Salamanca, and the Director of the Auditory Computation and Psychoacoustics Laboratory of the Neuroscience Institute of Castilla y León (since 2003), the Director of the Audiology Group of the Biomedical Research Institute of Salamanca (since 2011), and the Director of the Audiology Diploma of the University of Salamanca (since 2006). He received a B.Sc. in physics from the University of Salamanca in 1993 and a Ph.D. in hearing sciences from Loughborough University in 1996. He was Associate Professor of Medical Physics at the University of Castilla-La Mancha (1998-2003), and a Ramón y Cajal Research Fellow (2003-2008) at the University of Salamanca. He has been on the faculty at Salamanca since 2008, and was an invited research scientist at the University of Minnesota (2010) and Duke University (2014). His current research interests include (1) understanding and modeling cochlear compression; (2) understanding the roles of olivocochlear efferents in hearing; (3) reinstating the effects and benefits of olivocochlear efferents for users of hearing aids and cochlear implants; and (4) understanding the factors behind the wide variability in outcomes across hearing-aid and cochlear-implant users. He has been the principal investigator for 19 scientific research projects, and a consultant in two projects funded by the National Institutes of Health. He has authored over 75 papers, one book and three patents on a variety of topics in hearing science. He is (or has been) editor of two books, a member of the editorial board of Trends in Hearing (since 2014), and an associate editor of the Journal of the Acoustical Society of America (2012-2015).
He is the recipient of the Medal of the Cross to the Naval Merit from the Spanish Navy (1999), the Salamanca Ambassador award from Salamanca Convention Bureau (2013), and the Distinguished Alumnus award from the University of Salamanca Alumni (2013). He was elected Fellow of the Acoustical Society of America in 2009, and of the International Collegium of Rehabilitative Audiology in 2015.
University of Minnesota, USA
Magdalena Wojtczak received her education from Adam Mickiewicz University in Poznan, Poland, where she obtained her M.Sc. degree in Physics and, in 1996, her Ph.D. degree in Physics with a specialty in Acoustics. After that, she worked as a Postdoctoral Associate, and then Research Associate, in the Psychoacoustics Laboratory led by Neal Viemeister at the University of Minnesota. Currently, she is a Research Associate Professor in the Auditory Perception and Cognition Laboratory at the University of Minnesota. Her research interests lie in differences in peripheral auditory processing between ears with normal hearing and those with sensorineural hearing loss. In particular, she is working toward linking specific perceptual deficits to physiological mechanisms. She was elected a Fellow of the Acoustical Society of America in 2011.
Tinnitus – its relation to hearing loss and cochlear synaptopathy
Tinnitus is characterized by persistent sound perception in the absence of a sound source. The phantom sound is thought to result from maladaptive neural gain compensating for the lack of peripheral input due to hearing loss. However, not all hearing-impaired individuals experience tinnitus, and the percept is sometimes observed in individuals with clinically normal hearing. Although it is widely accepted that peripheral input deprivation is, by itself, insufficient for the development of tinnitus, it likely provides the initial trigger that, in combination with changes to central auditory and some non-auditory areas of the brain, leads to phantom sound perception. In individuals with normal hearing, tinnitus may result from loss of synaptic connections between inner hair cells in the cochlea and auditory nerve fibers. Establishing the link between tinnitus in otherwise normal hearing and cochlear synaptopathy is challenging, since measures that are affected by synaptopathy in animals do not show consistent patterns that can be related to tinnitus or noise exposure in humans. In fact, there is no established proxy measure for cochlear synaptopathy in humans, and its prevalence and perceptual consequences are unknown. Finding a reliable measure or a battery of tests is desirable, as cochlear synaptopathy has been shown to accelerate loss of hearing sensitivity in laboratory animals. Establishing a link between tinnitus and cochlear synaptopathy, in the absence of clinical hearing loss, could accelerate the development and testing of effective treatments for cochlear synaptopathy in humans. This work is supported by NIH grant R01 DC015987.
University of Pittsburgh, USA
Thanos Tzounopoulos, PhD is a Professor and Vice Chair of Research in the Department of Otolaryngology and the UPMC Endowed Professor of Auditory Physiology at the University of Pittsburgh School of Medicine. He also serves as the Director of the Pittsburgh Hearing Research Center. His research is focused on molecular, cellular, and systems mechanisms underlying normal and pathological auditory processing. He also studies tinnitus and its underlying cellular and molecular mechanisms. More recently, his research has expanded to drug discovery and development for tinnitus. Dr. Tzounopoulos earned his PhD in Molecular and Medical Genetics at Oregon Health and Science University as a Fulbright Scholar and completed postdoctoral training at the University of California at San Francisco and the Oregon Hearing Research Center.
The role of synaptically released zinc in hearing and hearing loss
In my talk, I will discuss the mechanisms via which synaptically released zinc fine-tunes synaptic transmission, sound processing, and central adaptations to noise-induced hearing loss.
University of Toronto, Canada
Karen Gordon, PhD, is a Professor in the Department of Otolaryngology and a Graduate Faculty Member in the Institute of Medical Science at the University of Toronto. She works at the Hospital for Sick Children in Toronto, Ontario, Canada, as a Senior Scientist in the Research Institute and an Audiologist in the Department of Communication Disorders. She is Director of Research in Archie’s Cochlear Implant Laboratory and holds the Bastable-Potts Health Clinician Scientist Award in Hearing Impairment and Cochlear Americas Chair of Auditory Development. Karen’s research focuses on auditory development in children who are deaf and use auditory prostheses including cochlear implants. Her work is supported by research funding from the Canadian Institutes of Health Research along with the Cochlear Americas Chair in Auditory Development and generous donations.
Should children with single sided deafness receive a cochlear implant?
We are studying whether children with profound deafness in one ear and normal hearing in the other (i.e., single-sided deafness, SSD) can benefit from cochlear implantation. Leaving these children's hearing loss untreated puts them at risk of social, educational and emotional deficits and, over time, allows an aural preference to develop, weakening the potential for bilateral/spatial hearing development. Concurrent vestibular and balance impairments further compromise these children's access to spatial information. Consequences for academic skills and working memory will be discussed. Of the available treatment options, cochlear implantation provides the best method of providing auditory input to a deaf ear, but it is not presently considered the clinical standard of care in children with SSD and is not suitable in all cases. On the other hand, cochlear implantation could have a particular role in children whose SSD is associated with congenital cytomegalovirus and, when provided with limited delay, is well tolerated as measured by consistent device use. Early outcomes also indicate a reversal of aural preference as input from the cochlear implant restores representation of the previously deprived ear in the auditory brain. We continue to monitor children with SSD who receive cochlear implants to define the longer-term effects of this intervention on developing auditory and vestibular/balance function.
Boston University, USA
Barbara Shinn-Cunningham joined Carnegie Mellon University in September of 2018 to head the new Carnegie Mellon Neuroscience Institute. She is an MIT-trained electrical engineer turned neuroscientist who uses behavioral, neuroimaging, and computational methods to understand auditory processing and perception. Her interests span from sensory coding in the cochlea to brain networks controlling executive function and their influence on auditory perception (and everything in between). Prior to joining CMU, she served more than two decades on the faculty of Boston University. In her copious spare time, she competes in saber fencing and plays the oboe / English horn. She received the 2019 Helmholtz-Rayleigh Interdisciplinary Silver Medal and the 2013 Mentorship Award, both from the Acoustical Society of America (ASA). She is a Fellow of the ASA and of the American Institute for Medical and Biological Engineers, a lifetime National Associate of the National Research Council, and a recipient of fellowships from the Alfred P. Sloan Foundation, the Whitaker Foundation, and the Vannevar Bush Fellows program.
Cochlear synaptopathy in human listeners?
Many listeners with normal hearing thresholds nonetheless have trouble understanding speech in noisy settings, especially if they are middle-aged or older. Recent advances, driven by studies in noise-exposed and aging animals, suggest that such difficulties can arise when cochlear function is healthy but there is a loss of auditory nerve fibers conveying information to the brain. This talk will review the evidence that such loss arises in human listeners, and explore how this kind of subtle hearing loss might influence perception in everyday settings.
University of Iowa, USA
Phillip Gander is an assistant research scientist in the Department of Neurosurgery and the Department of Otolaryngology at The University of Iowa. He conducts research using electrocorticography (ECoG) in the Human Brain Research Laboratory of Matt Howard, MD, and using neuroimaging (PET, EEG) in the Iowa Cochlear Implant Clinical Research Center. With the unique opportunities afforded by both research environments he investigates questions related to auditory object processing in collaboration with Tim Griffiths, MD, Newcastle University. He previously worked as a research fellow at the National Biomedical Research Unit in Hearing, Nottingham, UK with Deb Hall. Phillip received his PhD in Psychology, Neuroscience, and Behaviour in 2009 from McMaster University, Hamilton, ON, where he worked with Larry Roberts and Laurel Trainor.
Phillip’s research focus is auditory cognition from the perspective of cognitive neuroscience. Using psychophysics and neuroimaging he studies how the auditory system forms perceptual representations and the factors that contribute to their formation including learning, memory, and attention, under normal conditions and when they are disordered (e.g., hearing loss, cochlear implants, and tinnitus). In addition to investigating the brain bases of sound processing he places a strong emphasis on translating basic scientific findings into benefits for patients.
Human intracranial recordings during tinnitus perceptual change
Advances are being made regarding putative neural mechanisms of tinnitus in animal models; however, difficulty remains regarding the extent to which these models relate to factors relevant to the human experience of tinnitus. These limitations include neurophysiological correlates, changes in perceptual strength, degree of distress, and the extent of impact on cognition and quality of life. An important step in establishing the utility of animal models is to find similarities between these characteristics and the human experience of tinnitus. The most tractable among them is the category of neurophysiological correlates; unfortunately, clear patterns in measures of human brain activity related to tinnitus remain elusive. The work outlined in this presentation covers recent investigations of intracranial EEG in medically refractory epilepsy patients. Results from two patients are described, measured during a perceptual manipulation of tinnitus using a 30-s white-noise residual inhibition paradigm. Widespread activity throughout the brain was found during a change in tinnitus intensity, along with focal cross-frequency activity changes, which are proposed as hubs for oscillatory coupling of activity related to distinct functions of a broader tinnitus network. The results align with models of tinnitus activity generated from human non-invasive recordings. In one patient, it was possible to stimulate Heschl's gyrus to explore the potential for perceptual modulation of tinnitus. After stimulation, the patient described effects similar to residual inhibition. Importantly, the patient reported no change in hearing function during stimulation, which challenges the idea that tinnitus is functionally equivalent to normal auditory perception.