Re: To Matt Pearson
From: Matthew Pearson <matthew.pearson@...>
Date: Wednesday, October 24, 2001, 19:31
--- David Peterson wrote:
<<Chomsky's point is that it is possible to abstract away from performance
and talk just about the rules which people have in their heads which allow
them to discriminate between grammatical and ungrammatical utterances.>>
So, then, this would say that in the question, "What are you doing
tonight?", as soon as the person hears "what" they think to themselves, "Ah!
That's an empty object noun phrase that's out of place"? After all, after
"what" could come "is that?" (nominative), "are you doing?" (accusative),
"city do you come from?" (prepositional), etc. True, speech happens quickly,
and so there's no need to predict, since you pretty much have the whole
utterance as soon as it's started, but what if someone hesitated, such as
"What..., uh, just a sec, um..., what..." I don't think they'd need to wait
until the end of the utterance to figure out that this thing is going to be a
question about some thing. I mean, it would make more sense to me if, rather
than explaining the issue by saying the person hears the sentence and then
applies all sorts of transformational rules, that either (a) they would just
be used to questions and thus don't have to think of them in that way
anymore, or (b) they predict what comes next. To me it still doesn't make
sense that the mind has to apply rules to something to understand
it--especially such common things like questions.
--- end of quote ---
Well, human language is rule-governed, pure and simple, so there's got to be a set
of rules in our heads somewhere. Generative grammar seeks to find out what
those rules are. What we actually *do* with those rules when we speak and
understand sentences in real time is, in principle (and in practice), a
different issue. As Dirk mentioned in his reply, such questions belong to the
domain of Natural Language Processing rather than Generative Grammar (or
syntactic theory generally).
Of course, the two fields are far from autonomous: You can't have a theory of how
sentences are parsed unless you have a theory of how sentences are put
together, so people who do natural language processing have to talk to people
who do theoretical syntax. Conversely, research into sentence parsing can
provide valuable evidence for or against various models of sentence structure,
so people who do theoretical syntax have to talk to people who do natural
language processing. But the point is that the two fields are distinct, and
deal with different sets of questions.
A couple of random comments on your reply: You say: "I don't think they'd need to
wait until the end of the utterance to figure out that this thing is going to
be a question about some thing."
Most people who work on processing argue that we parse sentences 'on line'--that
is, we begin to analyze (and interpret) a sentence the moment we hear the first
word. One way of reconciling this with the notion of transformations is as
follows: In our competence grammar we have a rule (a transformation) which
takes object wh-words like "what" and displaces them from the normal direct
object position to the beginning of the sentence, leaving behind a 'trace', or
gap. So when our on-line parser encounters the word "what", it thinks to
itself: OK, the competence grammar includes a wh-movement transformation, so I
know that this word "what" I have just encountered must be a displaced element.
So I'll listen for a gap at some later point in the sentence. If that gap
occurs in, say, the normal direct object position, I'll know to interpret
"what" as the direct object of the verb.
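The filler-gap strategy just described can be sketched as a toy program. (The tiny lexicon and the category labels below are invented purely for illustration -- this is nowhere near a real parser, just the shape of the idea: store the wh-word as a pending "filler", then link it to the first object gap you detect.)

```python
# Toy sketch of on-line filler-gap parsing. The lexicon and categories
# are made up for this one example sentence.
LEXICON = {
    "what": "WH", "are": "AUX", "you": "NP",
    "doing": "Vt", "tonight": "ADV",
}

def parse_online(words):
    """Left-to-right pass: a wh-word is stored as a pending 'filler';
    at a transitive verb with no overt object following, posit a gap
    and interpret the filler as the verb's direct object."""
    filler = None
    log = []
    for i, w in enumerate(words):
        cat = LEXICON.get(w)
        if cat == "WH":
            filler = w
            log.append((w, "filler stored; listening for a gap"))
        elif cat == "Vt":
            has_object = i + 1 < len(words) and LEXICON.get(words[i + 1]) == "NP"
            if not has_object and filler:
                log.append((w, f"gap detected; '{filler}' = direct object of '{w}'"))
                filler = None
        else:
            log.append((w, cat or "unknown"))
    return log

# "What are you doing tonight?" -- 'what' gets linked to the gap after 'doing'
for word, note in parse_online(["what", "are", "you", "doing", "tonight"]):
    print(word, "->", note)
```

The point of the sketch is that the filler is resolved *during* the left-to-right pass, not after the whole sentence is in -- exactly the "on line" behavior described above.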
You go on to say: "I mean, it would make more sense to me if, rather than
explaining the issue by saying the person hears the sentence and then applies
all sorts of transformational rules, that either (a) they would just
be used to questions and thus don't have to think of them in that way anymore, or
(b) they predict what comes next."
This is a very common-sense attitude, but in fact you raise more questions than you
answer. As for your possibility (a): What could it mean for someone to be "used
to" questions? This raises all sorts of issues about the nature of unconscious
knowledge, and how that knowledge gets there in the first place (how does a
child learning his or her first language even know what a question is, for
example?). As for the ability of speakers to predict what comes next (your
possibility (b)): Well, I think most linguists would argue that the parser has
at least limited 'look ahead' capacity. But predicting too much is costly: If
your brain starts to analyze a sentence based on a certain prediction about
what will come next, and that prediction turns out to be incorrect, then it
will have to start all over again, and you've wasted time and memory.
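That reanalysis cost can be made concrete with the classic garden-path sentence "The horse raced past the barn fell" (a standard example from the processing literature). The step-counting below is a made-up toy, not a real model of parsing cost -- it just shows that a parser which commits to the wrong guess ends up reprocessing material it has already seen:

```python
# Toy illustration of the cost of a bad prediction: a parser that
# commits early to an analysis must reprocess words when disconfirmed.
def parse_greedy(words, guess):
    """Commit to `guess` ('main-verb' or 'reduced-relative') up front.
    If 'fell' disconfirms the main-verb analysis, reanalyze,
    recounting every word processed so far as wasted work."""
    steps = 0
    analysis = guess
    for i, w in enumerate(words):
        steps += 1
        if analysis == "main-verb" and w == "fell":
            steps += i  # reprocess the i words already analyzed
            analysis = "reduced-relative"
    return analysis, steps

sentence = ["the", "horse", "raced", "past", "the", "barn", "fell"]
print(parse_greedy(sentence, "main-verb"))         # wrong guess: extra steps
print(parse_greedy(sentence, "reduced-relative"))  # right guess: one pass
</n```

The wrong initial guess costs nearly twice the work of the right one -- the "wasted time and memory" mentioned above.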
Finally, you remark: "To me it still doesn't make sense that the mind has to apply
rules to something to understand it--especially such common things like
questions." I would contend that it is impossible to understand *anything*
without applying rules to it. That's what "understand" means. To understand
something you have to analyze it, figure out its structure--which is to say you
have to internalize the rules that went into putting that thing together. If
something fails to obey rules, or if the rules it obeys are not discoverable,
then there can be no understanding.
Sorry if I'm pontificating too much, but you raise some interesting questions. I
encourage you to take some more syntax classes, and maybe a course on
processing if you can.
Cheers,
Matt.
Matt Pearson
Department of Linguistics
Reed College
3203 SE Woodstock Blvd
Portland, OR 97202 USA
ph: 503-771-1112 (x 7618)