Hearing is defined as sound detection. Sound perception is termed audition. Paying attention to sound is listening.
Sensing sound involves perceiving percussion. One naturally tends to think of visual processing as forming mental spatial patterns, while hearing nominally creates temporal patterns. But the processes that deliver sensations are not so simple. Echolocation is perceived by the ears, yet the brain processes its reception into spatial imagery.
Sound comprises a sequence of audible mechanical waves of oscillating pressure moving through one or more media: air or other gases, liquids, or solids. Like light waves, sound waves can be reflected, refracted, or attenuated by the medium they travel through. Attenuation of sound through still air or water because of viscosity is negligible. But wind or water movement refracts sound, either dispersing or focusing the waves.
Sound travels through gases, plasma, and liquids as longitudinal waves. Sound through solids travels as both longitudinal waves and transverse waves. In longitudinal waves, the medium vibrates along the direction of travel, while transverse waves (in solids) oscillate perpendicular to the direction of propagation.
A sound wave periodically displaces the medium it travels through, the displacement periodicity being the inverse of its frequency. The energy carried by a sound wave converts back and forth between the potential energy or strain of the matter and the kinetic energy of the oscillations of the medium.
As with other waves, sound is a complex phenomenon, with characteristics of frequency, wavelength, amplitude, pressure, intensity (energy density by velocity), and direction. For example, sound can be focused like an optical lens focuses light. A sonic lens can squeeze a sound wave down to a spot smaller than its wavelength, which was long thought impossible.
The ancient Tibetan singing bowl is a bronze vessel that makes a sustained ringing sound when rubbed around the edge with a leather mallet. The Tibetan singing bowl holds a key to sonic fluid dynamics. Rubbing a singing bowl vibrates the water inside the bowl at a frequency that creates Faraday waves, which oscillate at half the frequency of the bowl's vibrations. Water drops that break off from the Faraday waves levitate inside the bowl: sitting on the surface and skipping like stones across water.
Sound frequency is measured in hertz (Hz), named in honor of German physicist Heinrich Hertz, who demonstrated in 1887 that electromagnetic waves exist.
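Frequency and wavelength are two sides of the same wave: wavelength is wave speed divided by frequency. A minimal sketch, assuming ~343 m/s for sound in dry air at 20 °C (a typical value; the speed varies with temperature and medium):

```python
# Wavelength from frequency: lambda = v / f.
# 343 m/s is an assumed typical speed of sound in air, not a constant.

def wavelength_m(frequency_hz, speed_mps=343.0):
    """Wavelength in meters for a given frequency and wave speed."""
    return speed_mps / frequency_hz

# The human audible band spans wavelengths from ~17 m down to ~1.7 cm.
low_end = wavelength_m(20)        # ~17.15 m
high_end = wavelength_m(20_000)   # ~0.017 m
```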
The 2 intertwined facets of hearing are how an animal hears, and what it is capable of hearing.
While overlap is ubiquitous, the range of hearing varies considerably among animals. From an evolutionary perspective, hearing is attuned to communication and survival needs, such as for hunting or avoiding becoming prey.
Compared to many mammals, humans have a narrow hearing range: 20–20,000 Hz. Within that range, sensitivity drops off at higher pitches. Unsurprisingly, human hearing is optimized to the range of the human voice.
Like all mammals, humans’ hearing changes during life. Infants and children have very sensitive hearing. Later in life, high-pitched sounds become harder to hear.
Elephants hear from at least 14–12,000 Hz. Elephants communicate long distances at low frequencies, which carry well without distortion.
Dogs have a range of 40–46,000 Hz, seals 200–55,000 Hz, rodents 1,000–100,000 Hz, and dolphins 70–150,000 Hz, as well as possessing echolocation as a sonic wave-based sight sense.
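The ranges quoted above can be captured in a small lookup, a sketch for comparing what different animals can hear (the figures are those given here, simplified to single representative spans):

```python
# Hearing ranges as (low_hz, high_hz) pairs, taken from the text.
HEARING_RANGE_HZ = {
    "human":    (20, 20_000),
    "elephant": (14, 12_000),
    "dog":      (40, 46_000),
    "seal":     (200, 55_000),
    "rodent":   (1_000, 100_000),
    "dolphin":  (70, 150_000),
}

def audible_to(animal, frequency_hz):
    """Whether a pure tone falls inside an animal's quoted range."""
    low, high = HEARING_RANGE_HZ[animal]
    return low <= frequency_hz <= high

# A 30 kHz tone is ultrasonic to humans yet well within a dog's range.
```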
Ear placement and shape serve a variety of functions, including sensitivity to sound and ability to sense the direction of sound.
Owls have asymmetrical ears: one lower on the skull than the other. Sounds from a single source reach the ears at slightly different times. This binaural hearing lets owls pinpoint the source of a sound with tremendous positional accuracy.
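The timing cue at work can be sketched with a simple model: the interaural time difference grows with the sine of the source's angle off-center. The ear separation and sound speed below are illustrative assumptions, not owl measurements:

```python
import math

# Interaural time difference (ITD): the extra path length to the far
# ear, divided by the speed of sound. Angle 0 = straight ahead.
# 5 cm ear separation and 343 m/s are assumed illustrative values.

def itd_seconds(angle_deg, ear_separation_m=0.05, speed_mps=343.0):
    """Arrival-time difference between the two ears, in seconds."""
    return ear_separation_m * math.sin(math.radians(angle_deg)) / speed_mps

# A source 30 degrees off-center arrives ~73 microseconds earlier at
# the near ear -- a tiny delay that the brain nonetheless resolves.
delay = itd_seconds(30)
```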
A rainforest katydid was the first invertebrate found to have the 3-stage hearing system common among vertebrates.
Mammalian hearing starts with an airborne pressure wave thumping the eardrum. The drum jiggles tiny bones that translate large eardrum vibrations into smaller sloshes, which are sent to a liquid-filled chamber.
Katydid ears sit below the knees, with an eardrum on each side of a leg. They don’t use translating bones. Instead, plates attached to katydid eardrums do the job.
A pressure wave bends each drum inward. That motion pushes a small translator plate on each drum outward. The plate vibration sends smaller ripples into a liquid-filled chamber inside the leg.
Some animals can detect only vibrations passing through water, while others only hear vibrations carried by the ground. But humans can hear vibrations carried through gases, liquids, and solids.
Bone conduction transmits sound waves by vibrations of the skull bones, directly stimulating sound-sensitive cells in the inner ear. Humans hear their own voice partly via bone conduction. A hearing aid that enhances bone conduction can help some deaf people whose inner ears still function.
◊ ◊ ◊
Although some people can wiggle their ears, human auricles have little importance compared to other mammals. Many mammals, especially those with large ears, such as rabbits, can move their auricles in various directions to focus sound detection.
In social mammals, ears may act as subconscious communication signals. Dog ears often indicate mood.
◊ ◊ ◊
After being collected by the auricle, sound waves pass through the ear canal to the eardrum, causing it to vibrate.
Human eardrums shift toward the same direction that the eyes are looking. When looking left, for instance, the drum of the left ear is pulled further into the ear, while the right eardrum is pushed out. Eardrum movement begins as early as 10 ms before the eyes even start to move and continues for a few tens of milliseconds after the eyes stop.
Eardrum vibrations are transmitted through the ossicles: the chain of bones in the middle ear. There are 3 ossicles: the malleus (hammer), the incus (anvil) and the stapes (stirrup).
The ossicles sit in an air-filled cavity: the middle ear, which is connected to the back of the throat by the Eustachian tube. The Eustachian tube helps ventilate the middle ear and maintains equal air pressure on both sides of the eardrum.
Eardrum vibrations move the malleus, which hammers the incus, which then stirs the stapes.
Vibrations passing from the relatively large area of the eardrum through the ossicles to a smaller area concentrate vibrational force, amplifying sound. By the time a percussive wave has traveled from the eardrum to the oval window, it has been amplified over 20 times.
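The roughly 20-fold gain can be sketched from typical textbook anatomy (the area and lever figures below are assumed values, not from this text): force collected over the large eardrum is delivered to the much smaller oval window, further multiplied by the ossicles' lever action.

```python
# Pressure gain of the middle ear: area ratio times lever ratio.
# All three figures are assumed textbook-typical values.
EARDRUM_AREA_MM2 = 55.0      # effective vibrating area of the eardrum
OVAL_WINDOW_AREA_MM2 = 3.2   # area of the stapes footplate
OSSICLE_LEVER_RATIO = 1.3    # mechanical advantage of the ossicle chain

pressure_gain = (EARDRUM_AREA_MM2 / OVAL_WINDOW_AREA_MM2) * OSSICLE_LEVER_RATIO
# ~22x: the same force concentrated onto a far smaller area,
# boosted again by leverage -- consistent with "over 20 times".
```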
When sound vibrations reach the stapes, it rattles the oval window: the membrane-covered opening to the inner ear, setting the inner-ear fluid in motion. The inner ear also houses the 3 semi-circular canals, which provide a sense of balance. Each canal has hair cells with cilia that act as motion sensors. The hair cells turn fluid motion into an electrical signal that travels to the brain via the vestibular nerve.
The cochlea is a spiral cavity, filled with watery fluid bathing hair cells that detect vibration through cilia. As with the semi-circular canals, the hair cells in the cochlea create an electrical sound signal which is carried by the auditory nerve to the brain. The auditory nerve and the vestibular nerve are 2 branches of the vestibulocochlear nerve.
◊ ◊ ◊
Both hearing and touch rely upon perception of vibrations. A combination of weak sound and weak vibration applied to the skin is better detected than either signal alone. A sound/vibration combination boosts the perceived volume of a sound.
Hearing a sound can boost touch sensitivity, though the sound frequency must correspond with the felt vibrational frequency. A vibration felt at a lower frequency than a sound tends to skew the pitch heard downwards, and vice versa. Sound can bias whether a vibration is felt. The converse is also true.
Hearing may be an evolved sense of touch, better attuned to frequency analysis. The region of the human brain associated with sound processing is activated during touch.
Auditory functioning is incredibly intricate, making vision seem simple by comparison.
By the Whiskers
Using their facial whiskers, harbor seals can sense the size and shape of an object by the wake it makes in the water. They can discern a difference as small as 2.8 cm.
A seal can detect different species of fish by the wake they create, as well as size, letting a seal selectively pursue promising prey.
Other marine mammals with facial whiskers, including otters and sea lions, have whisker sight; an exceedingly useful sense for fishing in murky or low-light water. Or is that whisker hearing?
People hear primarily by detecting airborne sound waves which are collected by the auricle, which also helps locate the direction of the sound’s source. We automatically incorporate reflected ambient sound properties in determining the location of a sound source.
We are even sensitive to the more aesthetic aspects of the environment. Subtle changes are noticeable.
We sense silent objects that obstruct sound. This affords the capability for echolocation. Such hearing sensitivity occurs in other animals and even surpasses ours in various ways.
Memory plays a critical role in audition. Human pitch memory is impressively accurate. In singing or just imagining a song heard before, chances are it will be done in a musical key very close to the one in which it was recorded. Small melodic phrases, or even a single note, can be accurately remembered by the average person.
Appreciating the deep structure of music is innate. The pitch difference between notes (musical interval) is readily recognized, as are the most common intervals used in popular and classical music.
More abstract qualities are also inherently appreciated, including how melodic themes relate to variations, how notes in successive chords relate to each other and their implied tonal center (root chord), and how melodies resolve to completion.
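Musical intervals can be put in concrete terms. In Western equal temperament (a tuning convention, not something this text specifies), each semitone corresponds to a fixed frequency ratio, so any interval is a power of that ratio:

```python
# In 12-tone equal temperament, each semitone multiplies frequency
# by 2**(1/12); an interval of n semitones is a ratio of 2**(n/12).

def interval_ratio(semitones):
    """Frequency ratio of an interval of the given number of semitones."""
    return 2 ** (semitones / 12)

# An octave (12 semitones) exactly doubles the frequency; a perfect
# fifth (7 semitones) is ~1.498, close to the simple 3:2 ratio the
# ear readily recognizes.
octave = interval_ratio(12)
fifth = interval_ratio(7)
```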
Despite the fact that most of us are unfamiliar with the technical terminology of music, nearly all of us have implicit knowledge of these characteristics. ~ Lawrence Rosenblum
The deep structure of music underlies how musical pieces are organized to convey emotionally resonant meaning. Styles of music are readily recognized. Non-musicians can identify similar styles and even have a sense of historical context.
This ability is by no means unique to humans. A carp can tell the difference between baroque and the blues.
Mole Cricket Audition
Some animals improve audition when it matters most.
Mole crickets are large burrowing insects that live underground in extensive tunnel systems. Mole crickets live on every continent except Antarctica, and are commonly considered pests, except in East Asia, where they are considered a tasty fried snack.
A male mole cricket rapidly rubs his forewings together to sing seductive songs to attract a female. But first he builds a concert hall: sculpting an acoustically excellent horn in a burrow which sonically enhances and amplifies his chirps.
Having optimized for audition, he auditions. The male performs with his head at the front of the horn, getting the best possible sound.
Gryllotalpa gryllotalpa is a mole cricket with small wings and shallow teeth on its file. It chirps a quiet song at ~1,600 Hz. Another species, G. vinae, loudly sings at 3,500 Hz with its large wings and deep-toothed file. (Note that higher pitch carries a shorter distance through vegetation, and so may be compensated for by volume.)
G. vinae builds a double exponential horn with smooth walls. The 2nd bulb in the concert hall acts as a resistive load for the vibrating wings, concentrating the sound to a disc-shaped patch just above the burrow.
The lower-pitched G. gryllotalpa builds a single-chamber larger cavity that effectively amplifies at that frequency.
Hence, the burrow, wing structure, and singing of male mole crickets are all precisely co-adapted to entice females of their respective species.
Music and language are intimately coupled. ~ American speech researcher Gavin Bidelman et al
Linguistics is the study of language. Phonology is the study of language sounds.
Hearing well demonstrates the interplay between genetic disposition and actualization from life’s experiences. One’s native tongue largely determines the ability to discriminate sounds.
The mishmash of English is auditory oatmeal. By contrast, Chinese is a songfest, with a rich array of nuances in pitches and bends. While all the diphthongs in English are falling, Mandarin has both rising and falling sequences.
Mora is a phonological unit regarding syllable weight, related to emphasis and timing. Chinese has a variety of morae that far surpasses English.
Perfect pitch is being able to identify a note upon hearing it. Only 8% of native English speakers have perfect pitch, whereas 92% of native Mandarin speakers have perfect pitch.
Genetics establishes only the outliers of perfect or pathetic pitch (tone deafness). Generally, hearing pitch can be learned. Early childhood exposure to subtly tuned sounds enables greater discrimination ability.
Listening is the receiving end of human speech. Our hearing is optimized for our vocal range.
Apes and newborn babies can breathe and drink at the same time. Adult humans cannot. The lower position of the larynx (voice box) prohibits this, for the trade-off of vocal articulation that would otherwise be absent.
Australopiths had a voice box like that of apes, and so limited vocalization ability. The human larynx is an adaptation within the past 3 million years.
The voice box is not the only factor in articulation. The shape of the palate, which allows greater tongue movement, the vocal cord nerve clusters, and other related parts all afford the capability for highly articulated speech that is lacking in many other animals. Alas, the sophistication of the human voice is no match for a songbird's.