THEORY: language and the brain [Interesting article]
From: Mark J. Reed <markjreed@...>
Date: Tuesday, July 1, 2003, 5:19
On Tue, Jul 01, 2003 at 07:41:27AM +0200, Dan Sulani wrote:
> > I find this a bit surprising, since previous research has indicated
> > that speech understanding is not a postprocessor on sound apprehension,
> > but rather bypasses, and happens in parallel with, the normal decoding
> > of sound inputs.
> I'm sorry, but I'm a bit confused by this. Perhaps I don't understand
> what you mean by "normal decoding of sound inputs". My understanding was
> that "meaning" at various levels was extracted at all the stages of
> processing of auditory inputs. If there is research to the contrary,
> I would certainly like to read it!
What I meant was that it appears that speech is handled specially,
not just extracted from sounds after they've already made it
through to our brain.
Most people envision a conveyor-belt or assembly-line model: pressure
waves impact the ear drum, the frequencies are extracted, the result is
scanned for important stuff to move into short-term memory and impinge
upon our awareness, and only then is it analyzed for speech content
and decoded.
My understanding is that the identification of "speech", and separation
thereof from surrounding sounds, happens at a surprisingly early stage,
bypassing much of the conscious awareness of the listener and the
analysis through which other sounds go. In part, this must be so
because we perceive phonemes at a faster rate than we could decode
them one at a time through general-purpose auditory analysis. I'll
have to look up the relevant papers if you want a citation; I'm just
going on vague recollection at this point.
> About the article. Thanks for pointing it out to us. Perhaps
> I shouldn't be overly critical since it is a news summary and not a journal
No kidding. Since when do the press get anything of a technical nature
right? The headline alone boggled my mind. Mandarin requires more brain
power than English? Guess the Chinese really are just naturally brighter
than us. :)
> > So it's kind of strange that the part of the brain
> > associated with nonlinguistic apprehension of melody is also used when
> > understanding melodic speech.
> IMHO, that is a bit simplistic. I have heard that professional musicians
> also use their left hemispheres to apprehend music and there are
> speech-related areas in the right hemisphere, even for English speakers!
Well, true, but mostly in left-handed English speakers, whose brain
arrangements tend to march off in their own direction anyway. :)
It is, of course, simplistic to divide things up into "left-hemisphere"
and "right-hemisphere" things, and the whole "left-brained people vs.
right-brained people" pop psychology of a few years back was just silly.
What I find interesting is the apparently heavy use in speech processing
of brain bits normally used for non-speech-related sound processing.
But of course it's impossible to tell from the article whether the study
actually demonstrates anything new or significant. I just thought it
worthy of note.
Thanks for the response.