Case 1
Animals equipped with a sense of hearing, humans among them, are
commonly able to locate sound sources in space even though the sensory
apparatus operates essentially in the time domain. This mapping of
auditory space is managed by comparing the inputs received at the two
ears and analysing differences in level and timing.
Level differences arise largely because the subject's head sits between
the ears, blocking and attenuating sound waves passing from one side to
the other, provided their wavelength is at most of the same order as the
size of the head itself. For longer wavelengths, the interaural level
differences (ILDs) produced by the head's acoustic shadow are
insignificant, and this mechanism breaks down.
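As a rough sanity check on where the shadow stops working, a quick calculation (the figures below for head size and the speed of sound are assumed round values, not measurements):

    # Rough estimate of where the head-shadow (ILD) mechanism gives out.
    # Assumed round figures: speed of sound ~343 m/s, head diameter ~0.18 m.
    SPEED_OF_SOUND = 343.0   # m/s
    HEAD_DIAMETER = 0.18     # m

    def wavelength(frequency_hz):
        """Wavelength in metres of a sound wave at the given frequency."""
        return SPEED_OF_SOUND / frequency_hz

    # The shadow is effective roughly when the wavelength is no larger than
    # the head, which puts the crossover somewhere near 2 kHz.
    print(f"crossover ~ {SPEED_OF_SOUND / HEAD_DIAMETER:.0f} Hz")
    for f in (200, 1000, 2000, 8000):
        print(f"{f:5d} Hz -> wavelength {wavelength(f):.2f} m")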
Instead, for lower frequency sounds the brain uses synchronization
differences due to the travel time from one ear to the other. The
timescales involved are extremely short, and when this mechanism was
originally proposed in the nineteenth century it was considered unlikely
that the brain could distinguish timing differences with sufficient
precision. However, a seminal experiment by Lord Rayleigh demonstrated
that this does indeed occur and can be observed in the phenomenon of
binaural beats.
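To see just how short these timescales are, here is a minimal estimate of the largest ITD a human head can produce, using Woodworth's spherical-head approximation with assumed round figures for head radius and the speed of sound:

    import math

    # Largest possible interaural time difference for a roughly spherical head.
    # Assumed round figures: speed of sound ~343 m/s, head radius ~0.09 m.
    SPEED_OF_SOUND = 343.0  # m/s
    HEAD_RADIUS = 0.09      # m

    def itd_seconds(azimuth_deg):
        """Woodworth approximation: ITD = (r / c) * (theta + sin(theta))."""
        theta = math.radians(azimuth_deg)
        return (HEAD_RADIUS / SPEED_OF_SOUND) * (theta + math.sin(theta))

    for azimuth in (0, 30, 60, 90):
        print(f"{azimuth:2d} deg off-centre -> ITD ~ {itd_seconds(azimuth) * 1e6:.0f} us")
    # Even for a source directly to one side the ITD is only ~650-700 microseconds.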
The more familiar monaural beats are heard when two tones of nearly
equal frequency are played together: the sound waves interfere with one
another as they drift in and out of phase, producing a longer-period
pulsation of the overall sound level. In the Rayleigh experiment,
sounds of slightly different frequencies are presented separately
to each ear through rubber tubes; even though the sound waves are not
overlapping and therefore cannot physically cancel or reinforce
one another, beats are still heard, as a result of the brain
registering the continuously shifting time differences between the
waves at the two ears.
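The monaural case is easy to reproduce numerically: summing two tones of nearby frequencies gives a signal whose envelope pulses at the difference frequency, whereas the Rayleigh arrangement keeps the tones on separate channels so they never mix physically. A minimal sketch (the frequencies and duration are arbitrary illustrative choices):

    import numpy as np

    # Monaural beats: two tones of nearby frequencies summed at one ear.
    # The combined waveform's envelope pulses at the difference frequency.
    sample_rate = 44100                 # samples per second
    f1, f2 = 440.0, 444.0               # Hz; arbitrary illustrative values
    t = np.arange(int(2.0 * sample_rate)) / sample_rate

    combined = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)
    # Product form: 2 * cos(pi*(f2-f1)*t) * sin(pi*(f1+f2)*t), i.e. a tone at the
    # mean frequency modulated by a slow envelope heard as |f2 - f1| beats per second.
    print(f"beat rate: {abs(f2 - f1):.1f} Hz")

    # Rayleigh's arrangement instead sends one tone to each ear (here, one per
    # stereo channel), so the waves never physically interfere; any beating
    # still heard must be constructed inside the auditory system.
    stereo = np.stack([np.sin(2 * np.pi * f1 * t),
                       np.sin(2 * np.pi * f2 * t)], axis=1)   # shape (n, 2)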
The processes behind this sensory feat remained mysterious until Lloyd
Jeffress suggested an ingenious explanation in which arrays of
coincidence-detecting neurons in the brain are stimulated by the
input from both ears, but with incremental delays introduced along
the pathways from the leading ear. A given neuron will fire only if it
receives the signal essentially simultaneously from both sides, which
will only happen when the interaural time difference (ITD) is the same
as the "known" delay on the input line to that neuron from the nearside
ear. Each such neuron therefore effectively identifies a particular
direction, and taken together they constitute an azimuthal map of
the sound space.
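The scheme is easy to caricature in code. In the toy sketch below (all parameters invented for illustration), each model neuron owns one candidate internal delay and responds strongly only when that delay cancels the external ITD; the most active neuron then "labels" the direction:

    import numpy as np

    # Toy Jeffress-style array of coincidence detectors (illustrative only).
    sample_rate = 100_000                  # Hz, fine enough to resolve tens of us
    true_itd = 300e-6                      # s; the "real" interaural delay
    neuron_delays = np.arange(-700e-6, 701e-6, 50e-6)   # one internal delay per neuron

    # A low-frequency tone reaching both ears, the far ear delayed by the ITD.
    t = np.arange(0, 0.05, 1 / sample_rate)
    near_ear = np.sin(2 * np.pi * 500 * t)
    far_ear = np.sin(2 * np.pi * 500 * (t - true_itd))

    def coincidence_response(internal_delay):
        """Activity of a neuron whose delay line holds back the near-ear input."""
        shift = int(round(internal_delay * sample_rate))
        delayed_near = np.roll(near_ear, shift)
        return float(np.dot(delayed_near, far_ear))   # large only when inputs coincide

    responses = [coincidence_response(d) for d in neuron_delays]
    best = neuron_delays[int(np.argmax(responses))]
    print(f"most active neuron corresponds to ITD ~ {best * 1e6:.0f} us")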
This appealingly simple model has been widely accepted for over 50
years, and there is evidence that something resembling it does in fact
operate in some species, notably the barn owl. However, recent results
suggest it is not correct for mammals, nor even for all birds (the
barn owl's auditory adaptations are quite specialized). Although
coincidence detection remains a key aspect of the process, neither the
locally coded map (i.e. one neuron = one direction) nor the "delay lines"
posited by Jeffress appear to exist.
Instead, the "delay" is really a kind of response suppression, caused by
pulsed release of an inhibitory neurotransmitter in parallel with the
nerve impulses registering the sound -- exactly how this achieves the
desired effect is not yet fully understood[1]. The mapping, meanwhile, is
probably non-local, which is to say the perceived direction is encoded
in the pattern of firing across a number of different neurons, a more
efficient but also more complex scheme.
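As a very rough illustration of what a non-local code might look like, the sketch below uses just two broadly tuned channels (loosely one per hemisphere) and reads direction out of their relative activity; the tuning curves and read-out rule are invented for illustration, not taken from the physiology:

    import numpy as np

    # Toy population (non-local) code for ITD: two broadly tuned channels,
    # direction recovered from their relative activity, not from any single
    # dedicated "one neuron = one direction" cell. Parameters are illustrative.
    itds_us = np.linspace(-700, 700, 15)       # candidate ITDs in microseconds

    def channel(itd_us, preferred_sign, slope=1.0 / 300.0):
        """Broad sigmoidal tuning: each channel simply prefers one side."""
        return 1.0 / (1.0 + np.exp(-preferred_sign * slope * itd_us))

    left_channel = channel(itds_us, +1)
    right_channel = channel(itds_us, -1)

    # The difference of the two channels is monotonic in ITD, so the pair
    # jointly encodes direction even though neither is labelled with one.
    for itd, value in zip(itds_us, left_channel - right_channel):
        print(f"ITD {itd:6.0f} us -> read-out {value:+.2f}")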
An additional quirk, again reflecting a kind of neurological parsimony,
is that -- unlike nearly everything else -- ITD recognition is not
strictly contralateral (which is to say, managed by the
hemisphere of the brain opposite the relevant ear). Instead, it actually
switches from one side of the brain to the other as the ITDs change,
even though both the sound source and the listener's perception
of it remain on the same side.
The aspects of sound processing mentioned so far are relatively
low-level, occurring long before signal information propagates up to the
cerebral cortex. What happens at the higher levels, presumably leading
to recognition and interpretation, is still pretty much unknown, but
experiments with rats suggest that there is some plasticity in
the responses of sound-associated neurons in the cortex. In other words,
what the higher brain brings to the party is the ability to
learn; not, perhaps, an earth-shattering revelation.
In one example, rats trained to recognise particular simple sound
patterns subsequently exhibit some structure in their neural firing when
given a complex, high-bandwidth sound stimulus, and that structure shows
some statistical correlation with the learned sounds. It is essentially
impossible to make this very specific -- you can't single out a neuron
and say "that's the one!" -- and the models used are simplistic. In
particular, there is an assumption of linearity even though it is likely
that most of what is interesting about the neural behaviour is
non-linear; but you have to start somewhere.
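For concreteness, the sort of linear model being complained about here might look like the following sketch, which fits a linear filter from recent stimulus history to firing rate on synthetic data (none of this is taken from the actual experiments; it only illustrates the linearity assumption):

    import numpy as np

    # Minimal linear-model sketch: regress a firing rate onto recent stimulus
    # history (a crude linear "receptive field"). All data here are synthetic.
    rng = np.random.default_rng(0)
    n_samples, history = 5000, 20

    stimulus = rng.normal(size=n_samples)              # broadband stimulus
    true_filter = np.exp(-np.arange(history) / 5.0)    # invented preferred pattern

    # Design matrix of stimulus-history windows, plus a noisy linear response.
    X = np.stack([stimulus[i - history:i] for i in range(history, n_samples)])
    rate = X @ true_filter[::-1] + rng.normal(scale=0.5, size=len(X))

    # Least squares recovers the filter -- but only because this synthetic
    # response really is linear; real cortical responses probably are not.
    estimated, *_ = np.linalg.lstsq(X, rate, rcond=None)
    print("correlation with true filter:",
          round(float(np.corrcoef(estimated, true_filter[::-1])[0, 1]), 3))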
[1] If this sounds like hand-waving, it is. If I were going to choose this case topic to focus on this term, I'd want first and foremost to get a handle on what this glycinergic inhibition is about; but I'm not.