On 1 July, Mark J. Reed wrote:
> Most people envision a conveyor-belt or assembly-line approach where
> pressure waves impact the eardrum, the frequencies are extracted,
> the result is scanned for important stuff to move into short-term
> memory and impinge upon our awareness, which is then analyzed for
> speech content, which is then decoded.
>
> My understanding is that the identification of "speech", and separation
> thereof from surrounding sounds, happens at a surprisingly early stage,
> bypassing much of the conscious awareness of the listener and the
> analysis through which other sounds go. In part, this must be so
> because we perceive phonemes at a faster rate than we could decode them
> individually. I'll have to look up the relevant papers if you want
> a citation.
No need. I think we're both saying more or less the same thing!
I guess I didn't understand you the first time around.
<snip>
> It is, of course, simplistic to divide things up into "left-hemisphere"
> and "right-hemisphere" things, and the whole "left-brained people vs.
> right-brained people" pop psychology of a few years back was just silly.
Agreed.
> What I find interesting is the apparently heavy use in speech processing
> of brain bits normally used for non-speech-related sound processing.
Me too --- especially coming, as I do, from a speech-language therapy
perspective.
> But of course it's impossible to tell from the article if the study
> actually demonstrates anything new or significant. I just thought it
> worth passing along.
It was. Thanks.
Dan Sulani
---------------------------------------------------------------
likehsna rtem zuv tikuhnuh auag inuvuz vaka'a
A word is an awesome thing.