Re: "Self-Segregating Syntax"?
From: And Rosta <and.rosta@...>
Date: Monday, April 24, 2006, 0:16
Eldin Raigmore, On 23/04/2006 19:44:
[...]
> BTW now might be the time, here might be the place, and you might be the
> person to ask about something that has almost bugged me for a little while.
>
> In LTAG (Lexical Tree-Adjoining Grammar) and in Dependency Grammar and a
> few others, what seems to be the "head" of a phrase-or-whatever is the word
> or lexeme or whatever constituent which actually ties its complement(s) --
> all of the "dependents" -- together into a single "tagmeme" (is that a
> correct use of that term?)
(I've never seen the term used. It means something to Pike & followers of
his Tagmemics, but I've never seen any Tagmemics stuff. But the original
sense coined by Bloomfield was too vague for it ever to catch on, as far as
I have seen. I once considered using it in my own work, I admit; but then
I'm a word-fetishist conlanger, aren't I, not your straight up and down
linguistician.)
> But in X-bar theory and a few others like it, the "head" of a phrase-or-
> whatever is that constituent which, taken alone, would serve the same
> function as the entire phrase.
>
> E.g. in Categorial Grammar, the adjective "red" is an operator which:
> * takes a single NP as its only input;
> * produces a NP as its output; and
> * occurs just to the left of its only operand.
>
> So in the noun-phrase "red dog", Dependency Grammar would make "red" the
> head and "dog" the dependent, IIUC.
>
> However, in any sentence of which "red dog" is a constituent, the
> phrase "red dog" can be replaced by the single word "dog" without ruining
> its grammaticality, nor even doing great violence to its meaning.
> Therefore in X-bar theory, the phrase "red dog" has "dog" as its head --
> not "red".
I'm not familiar with LTAG, but at
http://www.cis.upenn.edu/~xtag/tech-report/node18.html I find a tree for "Srini
bought a book at the store", which treats "at the store" as an adjunct, just as
X-bar theory does; I'd therefore expect LTAG to treat "red" as an adjunct in
"red dog", i.e. with the whole thing being an NP headed by "dog".
DG, too, is like X-bar in this respect.
CatG does indeed seem to handle these cases more insightfully (but it turns out to
be very ugly unless category names allow variables, e.g. so that a PP adjunct
can be defined as something that takes an X as its input and produces an X as
its output).
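To make the variable trick concrete, here's a toy sketch in Haskell (the encoding and the names are mine, not any standard CatG implementation): an adjunct gets the schematic category X/X, instantiated afresh for whatever it happens to modify.

-- Toy categorial-grammar categories: atoms plus A/B, a functor that
-- looks for a B and yields an A.
data Cat = N | NP | S
         | Cat :/ Cat            -- A :/ B stands for A/B
         deriving (Eq, Show)

-- Application: A/B combined with a B gives an A.
apply :: Cat -> Cat -> Maybe Cat
apply (a :/ b) c | b == c = Just a
apply _        _          = Nothing

-- Without variables an adjunct needs a separate entry for every category
-- it can modify (N/N for "red", S/S for a sentential PP, and so on) --
-- that's the ugliness.  With a variable, one schematic entry X/X does:
adjunct :: Cat -> Cat            -- the "X/X" schema
adjunct x = x :/ x

-- apply (adjunct N) N  ==  Just N   ("red dog" is again an N)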
Word Grammar allows for dependencies that don't match branches in the tree; so it
allows "red" to be a dependent of "dog" and vice versa. One dependency reflects
the meaning, and the other reflects grammatical distribution.
> If I am wrong about there being two meanings to the term "head", then, what
> do you call the "active" constituent of a phrase -- the "operator" rather
> than the "operands" -- in Dependency Grammar?
"Predicate" and "argument" would generally be used, I think.
[...]
>>> The position that the operator takes among its operands, is part of the
>>> definition of the operator-type; as is the number of its operands, and as
>>> are the types of its operands.
>>> It sounds like you're saying that in your conlangs these facts about the
>>> operator-type are always "phonologically coded" into the word for a
>>> particular operator.
[...]
>> They're not necessarily directly phonologically coded (e.g. with a
>> morpheme meaning "has 3 operands"), but
>> the phonological form serves as the
>> address
>
> I think this "addressing" process may be one of those "details about
> generalities" I would appreciate enjoying an explanation of.
I must have made it sound more interesting than it is, then! An example is
that the phoneme string /banana/ serves as a way to identify and locate
the lexical entry for BANANA.
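If it helps to see it in quasi-computational terms, here's a toy sketch (mine, with invented entries, not a claim about how the lexicon is really implemented) where the phonological form is literally just the lookup key:

import qualified Data.Map as Map

-- A lexical entry records whatever the grammar needs; here, just a gloss
-- and the number of operands.
data Entry = Entry { gloss :: String, numOperands :: Int }
  deriving Show

-- The phonological form is nothing more than the key under which the
-- entry is filed: /banana/ "addresses" the entry for BANANA.
lexicon :: Map.Map String Entry
lexicon = Map.fromList
  [ ("banana", Entry "BANANA" 1)
  , ("give",   Entry "GIVE"   3)
  ]

lookupEntry :: String -> Maybe Entry
lookupEntry form = Map.lookup form lexicon

-- lookupEntry "give"  ==>  Just (Entry {gloss = "GIVE", numOperands = 3})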
>> for an entry in the lexicon, and the lexical entry will say
>> "has 3 operands".
>>
>> None of my syntaxes have ever bothered with encoding the type of the
>> operands.
>
> Out of the data I mentioned, they encoded at most the number of operands?
Right.
>> Of the three syntaxes I described above, the first had operators
>> always before the first operand, and the operator encoded the number of
>> operands.
>
> I got that one.
>
>> The second had the operator (still encoding the number of
>> operands) freely ordered relative to the operands, but encoded on
>> the operand its relation to the operator.
>
> I have trouble thinking that would work.
I meant "its ordering relative to the operator" i.e. precedes/follows.
> It should be possible to feed an operand into several different operator-
> types.
> Also it should be possible to feed an operand into more than one position
> in some operator-types.
>
> It seems unworkable to code, on the operand, which operators it can feed
> and/or which positions of them it can feed.
>
> Otherwise the word "dog" in the phrase "red dog" would be a separate
> lexicon entry from the word "dog" in the phrase "dog and cat".
>
>> And in the current, all operators have two operands and follow their
>> operands, so only simple operatorhood needs to be encoded.
>
> That's a little weird for a different reason; aren't there some naturally
> and necessarily unary operators?
My 'operators' link semantic operators/predicates to semantic
arguments/operands.
> Mathematically I think it's provable (but not by me!) that you don't need
> n-ary operators for n>4; and of course most natlangs get by just fine
> without n-ary operators for n>3, and many of them get by without n-ary
> operators for n>2.
I'd be fascinated if you could point me to some online reference for that
"no arity greeater than 4" claim, which is new to me. I can't see how
there could possibly be an upper limit on the number of arguments a
predicate has.
In Livagian, primitive predicates have an arity of 1, 2 or 3. But you can
simulate a greater arity by making one of the arguments a variable-predicate
whose sense is bound by the main predicate. E.g. if X is your basic
predicate and has 3 args, and Y is the variable-predicate, you can have
x1, x2, x3-y, y1, y2, y3, giving an effective arity of 5.
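In higher-order terms the trick is just to let one argument slot hold a predicate and then saturate that predicate's own slots as well. A toy Haskell sketch (the encoding is mine; nothing here is Livagian syntax):

type E     = String                -- individuals
type Pred3 = E -> E -> E -> Bool   -- a 3-place predicate

-- X(x1, x2, Y) together with Y(y1, y2, y3): X's third slot holds the
-- variable-predicate Y, so the individual slots left over are
-- x1, x2, y1, y2, y3 -- an effective arity of 5.
effective5 :: (E -> E -> Pred3 -> Bool)   -- X, with a predicate in slot 3
           -> Pred3                       -- Y, the variable-predicate
           -> E -> E -> E -> E -> E -> Bool
effective5 x y x1 x2 y1 y2 y3 = x x1 x2 y && y y1 y2 y3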
Also, there is no upper limit to the arity of complex predicates formed
from predicates. E.g. if X and Y each have 3 arguments, then the complex
predicate "X and Y" has up to 6 arguments. (In Livagian, this is.)
--And.