Music allows us to experience a variety of emotions, from profound sadness to ecstatic happiness, and is an essential part of most, if not all, people's lives. The joy of music permeates all cultures and allows us to connect with each other despite language barriers. Yet people suffering from hearing loss find that auditory prostheses such as the cochlear implant (CI) significantly change their perception of music and melodic pitch. Although a number of suggestions have been made to improve pitch perception in cochlear implant users, this paper will focus on how pitch is encoded in normal hearers, and why cochlear implant users show poor pitch discrimination.

Normal hearing vs cochlear implants

Sound is a vibration consisting of changes in air pressure. The energy from vibrating air molecules travels as sound waves, which are picked up by the ear. In normal hearing, the many curves of the outer ear (the pinna and meatus) help determine where a sound is coming from. The sound is directed along the ear canal to the eardrum (tympanic membrane), where the vibrations cause the three bones of the middle ear (the malleus, the incus, and the stapes) to move. Together, these three bones are referred to as the auditory ossicles. The stapes presses on the oval window of the cochlea; when the oval window moves, fluid in the inner ear moves, carrying the energy through the cochlea. In the cochlea, thousands of microscopic hair cells sit on the basilar membrane (BM). Simply speaking, these hair cells are bent by the movement of the endolymph fluid inside the cochlea. The BM varies in stiffness and width, which allows different regions to pick up different pressure waves travelling through it, akin to a wave moving along a flicked rope. The BM is stiffest and narrowest at the base, and most flexible and widest at the apex. The small muscles attached to the auditory ossicles can tighten or loosen, which helps dampen stronger vibrations entering the cochlea in loud environments.
Also called the acoustic reflex, this only occurs when a loud sound is sustained, as the muscles otherwise lack the time to contract.

As the pressure waves move along the BM, the inner hair cells are pressed against the tectorial membrane (TM). Bending against the TM causes positive ions to flow into the hair cell, which releases neurotransmitters. The hair bundles vary in height, which means strong pressure waves deflect all of them, whereas weaker ones deflect only the tallest. This sets off nerve impulses which are passed along the auditory nerve to the hearing centre of the brain, where they are interpreted as speech, laughter, music and so on. As such, the process of normal hearing is acoustic and mechanical in nature rather than electric.

Pitch is generally encoded by where along the BM the signal originates. Particular regions are tuned to particular frequency ranges, also called critical bands. The base of the BM picks up high frequencies and the apex low frequencies. While all sounds will activate a response in the BM, the strength of this response depends on the properties of the sound.

Conversely, a cochlear implant has a microphone which picks up sounds and sends them to a sound processor. The sound processor is essentially a miniature computer which analyzes and transforms the sounds into digital information, also called coded signals. The coded signals are sent via a transmitter antenna to the internal implant, which converts them into electrical signals. These travel down an electrode array inserted into the cochlea, where the electrodes directly stimulate the auditory nerve, sending sound information to the brain. The major difference between normal hearing and a cochlear implant is that the former relies on an acoustic method of picking up sound waves, while the latter depends on an electric-based system.
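The processing stage described here can be caricatured in a few lines. Below is a minimal sketch of a single channel, assuming envelope extraction (rectify, then low-pass filter), the step widely held responsible for discarding fine temporal structure; it is an illustration, not any manufacturer's algorithm, and every frequency and parameter is invented:

```python
import math

SR = 8000   # sample rate in Hz (illustrative)
N = 8000    # one second of audio

# A 1000 Hz carrier (fine structure) whose loudness swells slowly (envelope).
wave = [math.sin(2 * math.pi * 4 * n / SR) ** 2
        * math.sin(2 * math.pi * 1000 * n / SR)
        for n in range(N)]

def envelope(x, win=200):
    """Full-wave rectification followed by a moving-average low-pass filter."""
    rect = [abs(v) for v in x]
    out, acc = [], 0.0
    for i, v in enumerate(rect):
        acc += v
        if i >= win:
            acc -= rect[i - win]
        out.append(acc / min(i + 1, win))
    return out

env = envelope(wave)
# env follows the slow swell but no longer oscillates at 1000 Hz, which is
# one way to picture why repetition-rate (temporal) pitch cues are degraded.
```

The moving-average window (25 ms here) passes the slow loudness changes that drive the electrode pulses while smoothing away the carrier oscillation itself.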
Theories of pitch perception in pure tones

The fundamental features of music are pitch, timbre, and rhythm. While all three are important characteristics of music, pitch is the essential component of melody. It relates to the fundamental frequency (F0) of a sound, and corresponds to the scale of musical notes we hear. Humans are able to perceive frequencies from 20 to 20,000 Hz, though this range varies with age and nerve damage.

Pure tones consist of a single frequency, but are not found in nature. Instead, they are the basic building blocks from which complex sound waves are made. The appeal of the pure tone is that it behaves simply under linear acoustic models, changing only in amplitude and phase, which has inspired mathematical models that describe sound as a function. In pitch theory and pure mathematics, Fourier analysis has been used to explain sounds as combinations of pure tones. For example, 261.6 Hz is considered middle C, but applying a Fourier analysis to a middle C played on an instrument yields a set of distinct pure tones, each with a different strength. It is the complete picture of this set of pure tones that creates pitch. For pitch, the just-noticeable difference (JND) refers to the smallest change in a pure tone's frequency that a listener can detect; discrimination is best between 500 and 2000 Hz.

Pitch theory can be traced back to 1841, when Seebeck postulated his theory of periodicity pitch, which explains pitch in terms of the repetition rate of a complex sound's waveform. In response, Ohm formulated his acoustic phase law (18??), which proposed that any sound is a set of pure tones forming a single complex sound wave. Helmholtz expanded on Ohm's theory and introduced the place theory of pitch: the cochlea is essentially a spectrum analyzer which encodes different frequencies at different places in the ear.
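The spectrum-analyzer idea can be illustrated numerically: correlating a waveform with candidate pure tones measures the strength of each component, which is exactly one bin of a discrete Fourier transform. A minimal sketch; the sample rate, frequencies, and amplitudes are all invented for illustration:

```python
import math

SR = 8000          # sample rate in Hz (illustrative)
N = 8000           # one second of samples

def tone(freq, amp):
    return [amp * math.sin(2 * math.pi * freq * n / SR) for n in range(N)]

# A "complex tone": middle C (~261.6 Hz) plus a weaker octave above it.
signal = [a + b for a, b in zip(tone(261.6, 1.0), tone(523.2, 0.5))]

def strength(freq):
    # Magnitude of the correlation with a pure tone at `freq`
    # (one bin of a discrete Fourier transform).
    re = sum(s * math.cos(2 * math.pi * freq * n / SR) for n, s in enumerate(signal))
    im = sum(s * math.sin(2 * math.pi * freq * n / SR) for n, s in enumerate(signal))
    return math.hypot(re, im) / N

for f in (130.8, 261.6, 523.2, 1046.4):
    print(f"{f:7.1f} Hz -> {strength(f):.3f}")
```

Only the two frequencies actually present in the signal show appreciable strength; the analysis recovers the set of pure tones and their relative amplitudes, which is the decomposition the place theory attributes to the cochlea.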
von Békésy (1972) corroborated Helmholtz's account, showing that high frequencies caused the base of the BM to vibrate more, whereas low frequencies made the apex vibrate more.

Today, the two main theories of pitch are the place theory and the temporal theory, based on the research of Ohm and Helmholtz, and of Seebeck, respectively.

The place theory explains pitch as being perceived according to where along the BM the vibrations occur, and suggests that the BM consists of individual regions, each specialized in picking up its own characteristic frequency (CF). Under this tonotopic organization, the signals from these regions are encoded and sent to the primary auditory cortex to be analyzed.

The temporal theory explains the pitch of a pure tone in terms of the action potentials (spikes) fired in the auditory nerve in response to sound. The pitch of a pure tone is perceived according to the timing of neural firing patterns, otherwise known as phase locking. These patterns may arise from single neurons or from groups of neurons. The theory posits that when neurons in the auditory system respond to a sound, their spikes do not all fire at the same time but are slightly out of phase with each other, so that the pooled spike train can represent frequencies higher than any single neuron could follow. The time intervals between the pooled spikes are then analyzed by the primary auditory cortex. No human recordings have been made, but in mammals (e.g. cats) phase locking is known to occur up to roughly 2-4 kHz.

Evidence supporting the temporal theory is that the ability to discriminate pitch deteriorates at frequencies higher than 4-5 kHz, which is around the frequency at which phase locking starts to deteriorate, and the same range at which listeners start to lose the ability to discern small changes in familiar melodies.
This suggests that the temporal code is vital for discerning melodies, and that the upper pitch limits of musical instruments were set by the same threshold. However, evidence for pitch perception with high-frequency pure tones above this 4-5 kHz threshold exists, arguing in favour of the place code (ref). One study concluded that while pitch discrimination became more difficult up to 8 kHz, it remained constant up to 14 kHz. It is possible that the ability to discriminate pitch is based on timing information at low frequencies, and that this information progressively degrades as frequency rises. Due to the invasive nature of recording from the human auditory nerve, no direct evidence supporting phase locking exists. It is also unclear how the primary auditory cortex analyzes the signals from the auditory nerve. It is similarly difficult to find direct evidence for the place theory; only trials on other mammals exist. Although there is evidence to suggest humans may have sharper frequency tuning than some other mammals, the place theory alone cannot account for frequencies above 2000 Hz.

Theories of pitch perception in complex tones

Even in a loud environment such as a cafe, a person with normal hearing can usually make out both the sound of a person talking and the radio playing the latest hits. This is because most human speech and musical notes are harmonic complex tones, which consist of the fundamental frequency (F0), the repetition rate of that particular sound waveform, and overtones (harmonics) at frequencies that are integer multiples of the F0. This relates back to Seebeck's theory of periodicity pitch: those with normal hearing are able to determine the fundamental frequency of a sound regardless of other simultaneous sounds; somehow the cochlea separates complex tones and manages to determine the pitch of each particular sound.
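The periodicity account can be illustrated with a toy computation: a complex tone built only from harmonics 2-4 of a 200 Hz fundamental (the 200 Hz component itself is absent) still repeats every 5 ms, and a simple autocorrelation, standing in for the temporal pooling described above, recovers that period. All parameters are invented for illustration:

```python
import math

SR = 8000   # sample rate in Hz (illustrative)
N = 4000    # half a second of samples

# Harmonics 2, 3 and 4 of a 200 Hz fundamental; the 200 Hz tone is missing.
signal = [sum(math.sin(2 * math.pi * h * 200 * n / SR) for h in (2, 3, 4))
          for n in range(N)]

def autocorr(lag):
    # Self-similarity of the waveform at a delay of `lag` samples.
    return sum(signal[n] * signal[n + lag] for n in range(N - lag)) / (N - lag)

# Search lags from 1 ms to 10 ms for the strongest self-similarity.
best = max(range(SR // 1000, SR // 100), key=autocorr)
print(f"best lag = {1000 * best / SR:.2f} ms -> F0 of {SR / best:.0f} Hz")
# The strongest peak is at 5 ms, i.e. 200 Hz, even though no 200 Hz
# component is physically present in the signal.
```

This is the "missing fundamental" effect: the repetition rate, not the spectral content at F0 itself, carries the perceived pitch, which is why a purely place-based account struggles here.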
Similarly to pure tones, there are place and temporal models of pitch for complex tones, with similar explanations. Studies using low-numbered, resolved harmonics have consistently shown better pitch discrimination than those using unresolved harmonics (ref ref ref), suggesting that place information is important in pitch discrimination (ref). In one study, no cochlear implant users were able to discern a pitch of 100 Hz, which they should have been able to do if temporal information were sufficient (ref).

To conclude, compared to normal hearing, a cochlear implant (CI) is an electric-based system rather than an acoustic or mechanical one. It was created on the basis of Helmholtz's theory and functions as a place encoder, yet the fine structure of the sound waves is lost as the acoustic input is picked up by a microphone and delivered as electrical pulses to the cochlea. Firstly, this argues against pitch perception being entirely dependent on Helmholtz's theory. Secondly, one explanation for the loss of pitch perception is that the repetition rates associated with fine-structure temporal encoding are not conveyed by a CI, hindering the ability to discriminate pitch. Cochlear implant users seem unable to discern pitch differences at frequencies above 300 Hz, despite having implants with multiple-channel sound processors. Furthermore, studies comparing normal hearers to cochlear implant users have consistently shown difficulties for the latter in discriminating pitch: although they have almost no trouble discerning rhythm (McDermott, 2004), when deprived of rhythm or lyrics only one-third of them can identify familiar melodies. The studies conducted so far are inconclusive, as the auditory system may rely on a temporal code, a place code, or a combination of the two.