Lexical Relatedness Morphology (was Re: [Conlangs-Conf] Conference Overview)
From: David J. Peterson <dedalvs@...>
Date: Sunday, May 7, 2006, 20:59
> I'd never heard of LRM (my excuse: I'm not a morphologist).
That's part of the problem: *no* one's heard of LRM. Maybe
fewer than ten people in the US have (though the number's growing).
Harry Bochner wrote his dissertation in 1988, and it was
published in 1993. And maybe because he didn't promote himself
very much, or for whatever reason, next to nobody read it.
Consequently, Bochner left the field. I think he's doing something
like computer programming in the service industry...
> Farrell Ackerman dug out his dissertation and has been promoting it,
> coupled with the powerpoint for your talk (which shows up prominently
> in a Google search for LRM).
Ha! That's news to me.
> It's a very elegantly constructed presentation.
> (One question, though: What does "the language is indestructible"
> mean, and why is it a problem?)
As an example of this I gave my first language, Megdevi. Whenever
I encountered a problem (usually when translating something,
e.g. Shakespeare's The Tempest, which I started to translate), I
created a new morpheme--whether it be a root or an affix. Rather
than working with what I had, I just created a new form for
whatever meaning was desired. And whenever I heard of some
new morphological process on Conlang or from a linguistics
class, I'd just coin a new affix to add it in, without thinking about
whether it fit in with the nature of my language or not. The result
was that my language didn't really have a nature. It was like a word-for-
word uber-language relex, where every sense and every notion
had a morpheme associated with it.
> It'd be interesting to hear how the model might extend to handle
> syntax and phonology, though. I have certain inklings how it could
> apply to them, but can't imagine how it would handle them in their
> entirety.
See, that's where I'm at, too. I have ideas about how it could
work, but that's about it. This is a result of the relative obscurity
of LRM. If one is interested in seeing how Optimality Theory
applies to, say, syntax, semantics, pronoun resolution, historical
change, even, there's a slew of papers for each. Many of them
are exceedingly poor, but there are lots of people in linguistics working
on applying the theory to other realms, so even if the ideas don't
work, you can see that they don't work, and why they don't work
without putting in the legwork yourself. That's what's nice about
having an academic community behind a theory: the strengths
and weaknesses are discovered quickly and made known. I wish
LRM would become popular...
Yahya responding to what I wrote:
> -Where X and Y are in a systematic morphological relationship
> such that by looking at either X or Y, one can predict the meaning
> and form of the member, ...
Grrr... The *other* member. I leave out important words all
the time nowadays... So if you see X, and know that it takes part in a
pattern, then you can predict Y, both its meaning and its form.
> Now, there's a hard definition to follow ...
Sorry; I was trying to think of suppletion technically. It's just
where the two words are totally different, e.g. "go" and "went"
(vs. "pet/petted", totally regular, and "find/found", where
you have fVnd in both).
> But what is fundamental to
> suppletion is the observation that a pattern has ...
That's a good point. Irregulars also break the main pattern (to
the extent that there can be a main pattern), but usually irregulars
fit into a pattern themselves, no matter how small. And also
there's some phonological relationship between the two elements.
Suppletion must necessarily not fit into a pattern, and the two
forms must bear no phonological relationship to each other.
Now, what would be interesting is a *regular* pattern of
suppletion. In other words, in a language with past, present
and future, the future form of a verb will always be suppletive--
so you can predict that its form won't be predictable. O.o
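To make the regular/irregular/suppletive three-way split above concrete, here's a toy Python sketch (my own illustration, nothing from Bochner; the 0.5 overlap threshold is an arbitrary assumption, and it works on spelling rather than real phonology). A past form counts as regular if it's just the present plus -ed, suppletive if the two forms share (almost) no material, and irregular otherwise, i.e. a shared frame like fVnd in "find/found":

```python
# Toy classifier for present/past verb pairs: regular vs. irregular vs.
# suppletive, based on how much material the two forms share.
# The 0.5 cutoff is an illustrative assumption, not a real claim.
from difflib import SequenceMatcher

def shared_material(a: str, b: str) -> float:
    """Rough overlap score between two forms (0.0 = nothing shared)."""
    return SequenceMatcher(None, a, b).ratio()

def classify(present: str, past: str) -> str:
    # Fully regular: past is present + -ed, allowing final-consonant doubling.
    if past in (present + "ed", present + present[-1] + "ed"):
        return "regular"
    # Suppletive: (almost) no shared material, e.g. go/went.
    if shared_material(present, past) < 0.5:
        return "suppletive"
    # Irregular but patterned: a shared frame, e.g. f_nd in find/found.
    return "irregular"

print(classify("pet", "petted"))  # regular
print(classify("go", "went"))     # suppletive
print(classify("find", "found"))  # irregular
```

On this picture, a *regular* pattern of suppletion would be a language where the future form is guaranteed to come out "suppletive" for every verb: predictably unpredictable.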
> And the immediate problem with
> allowing morphemes to have allomorphs is simply this:
> Knowing one allomorph no longer has the same predictive
> power that knowing *the* morpheme does.
Right. And what's more, it seems to me that morpheme-based
theories aren't even interested in predictability. It's as if it's
purposely being ignored as uninteresting.
> For the most part, I
> agree that your base terms appear presently meaningless;
> however, that does not mean that they were
> so when the present words were first formed.
Of course, the modern-day good little linguist response is: "Yes,
but we're only concerned with the synchronic state of the language,
because a child has no knowledge of the history of their language."
In other words, a "well-founded" excuse for ignoring what seems
to be obvious.
This is what I like about LRM. Let's take "strawberry", for example.
When I was creating my handout, I included "strawberry" in my
examples along with "boysenberry", etc., as a word composed of
"berry" plus a meaningless (or unpredictable) prefix. My girlfriend
then pointed out to me that the "straw" in strawberry comes from
how strawberries are grown. I was flabbergasted. And furthermore,
the word changed for me forever that day. Now in my head I
have a story for "*straw*berry", whereas before, I had none. Yet,
this wasn't a problem, either before or after. All it did was change
my mental lexicon. So let's say for "berry" we have a pattern like
X <-> Xberry
X can mean anything, and is an adjective or noun, and Xberry
means "some kind of berry". LRM can model this knowledge
with two (or three) subpatterns:
X <-> Xberry
Where the meaning is something like "a berry grown by using
X". Then there's:
X <-> Xberry
Where the meaning is "a berry with the characteristic of X". Then
if the word doesn't fit that pattern, it goes into the basic pattern.
It doesn't matter what the lexical category of the element is (e.g.,
"mul": noun or adjective?): we still know that it's a kind of berry,
and that the word is separable into X and berry. Then, if we, say,
learn that "mul" is simply the color of a mulberry, then we can
recategorize the word so it fits in with the pattern above.
Most importantly, this is a synchronic analysis (i.e., it models a
modern-day speaker's actual knowledge) that can take account
of historical knowledge we acquire over time. The first rule up
there I think perfectly captures the idea that we know a word
like "huckleberry" is composed of two parts, and that we can
infer that whenever this word was coined, "huckle" must have
meant *something*; at the present time, a given speaker simply
doesn't know what that is--though if they're told, they can accommodate it.
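Here's one way to sketch those berry patterns in Python (my own toy formalization, not Bochner's notation; the subpattern names and the Lexicon class are made up for illustration). The point is that "huckleberry" can sit in the basic pattern, still parsed as X + berry, and simply be recategorized if we ever learn what "huckle" means:

```python
# The X <-> Xberry pattern and its subpatterns, as glosses keyed by name.
PATTERNS = {
    "grown-with": "a berry grown by using {x}",                  # strawberry
    "characteristic": "a berry with the characteristic of {x}",  # blueberry
    "basic": "some kind of berry ('{x}' unanalyzed)",            # huckleberry
}

def parse_berry(word: str) -> str:
    """Split Xberry into its X part; any X is allowed, known or not."""
    assert word.endswith("berry"), word
    return word[: -len("berry")]

class Lexicon:
    """Maps each Xberry word to whichever subpattern it currently fits."""

    def __init__(self):
        self.entries = {}

    def add(self, word: str, subpattern: str = "basic") -> None:
        self.entries[word] = subpattern

    def recategorize(self, word: str, subpattern: str) -> None:
        # E.g. on learning that "mul" is the color of a mulberry.
        self.entries[word] = subpattern

    def gloss(self, word: str) -> str:
        return PATTERNS[self.entries[word]].format(x=parse_berry(word))

lex = Lexicon()
lex.add("strawberry", "grown-with")
lex.add("huckleberry")  # "huckle" is opaque, so: basic pattern
print(lex.gloss("strawberry"))   # a berry grown by using straw
print(lex.gloss("huckleberry"))  # some kind of berry ('huckle' unanalyzed)
```

Moving "huckleberry" from the basic pattern to a more specific one is a one-line recategorization: the mental lexicon changes, but nothing breaks, either before or after.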
> ... because the derivation clearly proceeds by
> inserting an infix /-it-/, not a suffix /-it/ as
> you assert. If it did, we'd have instead:
Ha! Don't tell a linguist working on Spanish! No, no, /-it/ is
*not* an infix in a morpheme-based account of Spanish: it's a
suffix. A name like "Nachito" is /nach-/ + /-it/ + /-o/. The
same suffix applies to all words with that form in Spanish:
burro   (burr- -o)    burrito   (burr- -it -o)
hermosa (hermos- -a)  hermosita (hermos- -it -a)
flaco   (flac- -o)    flaquito  (flac- -it -o)
I'd wager that if one were to suggest that /-it/ was an infix to
a linguist who works on Spanish they'd first laugh, and then if
one persisted, they'd bust out all their arguments about how
we know /-o/ and /-a/ are suffixes, and since an infix is something
that's inserted into the middle of a root, /-it/ has to be a suffix.
One thing I've never heard an account of, though, is why if /-it/
is a suffix it can never end a word. Perhaps it's a suffix with a
diacritic "I can't end a word"...?
> Or, I'd rather say, a general pattern:
> (1) XO <-> XitO
> where X is the stem of the name, of
> almost arbitrary phonological form, and
> O is the ending, of form V[C], e.g. -a/-o/-os.
Yes, that'd be a way of formalizing the general pattern. Both
the subpatterns (with just -a and just -o) also exist. But, yes, if
you want to state really general patterns, you can use all the
feature specifications, etc. to flesh it all out.
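For what it's worth, the XO <-> XitO pattern is easy to run mechanically. Below is a rough Python sketch (mine, not Yahya's formalization; the ending list covers only the examples above plus plurals, and the c -> qu respelling before i is the one orthographic adjustment the flaco/flaquito data require):

```python
# Apply the XO <-> XitO diminutive pattern: split off the ending O,
# insert -it-, and reattach. Real Spanish needs a longer ending list;
# these are just the endings from the examples above, plus plurals.
ENDINGS = ("os", "as", "o", "a")  # checked longest-first

def diminutive(word: str) -> str:
    for ending in ENDINGS:
        if word.endswith(ending):
            stem = word[: -len(ending)]
            # Orthographic adjustment: 'c' is spelled 'qu' before 'i'.
            if stem.endswith("c"):
                stem = stem[:-1] + "qu"
            return stem + "it" + ending
    raise ValueError(f"{word!r} does not match the XO pattern")

print(diminutive("burro"))    # burrito
print(diminutive("hermosa"))  # hermosita
print(diminutive("flaco"))    # flaquito
```

Note that, exactly as the pattern predicts, /-it/ never surfaces word-finally here: the reattached O always follows it.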
> David, I don't know whether you answered
> And's question, but you certainly gave me
> some food for thought - thanks!
Man, and if the linguists aren't going to work with this, why
not the conlangers? That'd be hilarious if we started presenting
"LRM applied to syntax" and "LRM applied to phonology" at
like the LSA, or something. It'd give my professor fits. ;)
"sunly eleSkarez ygralleryf ydZZixelje je ox2mejze."
"No eternal reward will forgive us now for wasting the dawn."