
Re: Dublex (was: Washing-machine words (was: Futurese, Chinese,

From: And Rosta <a-rosta@...>
Date: Thursday, May 16, 2002, 1:08
Jeffrey:
> Raymond Brown <ray.brown@...> comunu:
[...]
> > At 6:08 am +0100 15/5/02, And Rosta wrote:
> > [snip]
> > >(unambiguously analysable) compounds. I don't see much advantage in
> > >a *regularized* rafsioid scheme of the sort you describe. Overall, I
> > >think compounding is very overrated.
> >
> > I'm coming to that conclusion also.
>
> Them's fighting words. :-) Not really, of course, but since it flies in the
> face of my whole experiment I have to ask you both why you think that
> compounding is overrated.
Two main reasons.

(1) Compounding is not the only alternative to creating a new and totally unanalysable root. There are various alternatives, including

* arbitrary or quasi-systematic modification of an existing semantically related root or stem
* derivational affixes
* having very many roots, but organizing them into paradigms such that roots with related meanings have similar forms, possibly in a relatively systematic way

Sometimes these alternatives yield apter stems than compounding does. A compound X+Y is apt if the denotatum is X and is Y, or if (in a head-final compound) it is a Y a salient characteristic of which is saliently associated with X. But not all new concepts can be expressed by such apt compounds.

(2) Compounding yields unnecessarily long stems. Suppose all roots are three segments long and that on average each segment can be followed by any of 12 other segments. That gives 1728 roots; two-segment roots would number 144. Now, if you rely solely on compounding, then all words but the 1728 root words will be at least 6 segments long, and many will be longer. Suppose words average 8 segments. That means there is vastly more wordspace -- nearly half a billion forms, even counting words up to only 8 segments long -- than is actually needed to accommodate the total number of words needed in the language. If you care about concision -- and almost all language users do -- then you want words to be as short as possible. This would mean that after the 1728 3-segment words have been used up, the next 20736 words should have 4-segment forms. The specific numbers are used simply to illustrate the general point.

Moving on to the general Dublex experiment, I don't really see anything magically special about roots. The inventory of a language's morphological or etymological roots tends to be rather accidental -- accidents of history. Roots don't represent semantic primitives or anything truly elemental to the cognitive structures underlying language. Hence, although your Dublex goal interests me by virtue of being an engelanging exercise, its specific goal is not one I myself think worthwhile to the world at large.

--And.
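
[Editor's note: the figures in point (2) can be checked directly. The following is a minimal sketch, not part of the original post, assuming And's branching factor of 12 possible segments at each position; the names are illustrative only.]

    # Check the wordspace arithmetic from point (2): roots of 2, 3 and 4
    # segments, and the total number of forms up to 8 segments long,
    # assuming 12 possible segments at each position.

    BRANCHING = 12  # assumed branching factor per segment (And's figure)

    def forms_of_length(n):
        """Distinct word forms exactly n segments long."""
        return BRANCHING ** n

    def wordspace_up_to(max_len):
        """Distinct forms of length 1 through max_len."""
        return sum(forms_of_length(n) for n in range(1, max_len + 1))

    print(forms_of_length(2))   # 144     two-segment roots
    print(forms_of_length(3))   # 1728    three-segment roots
    print(forms_of_length(4))   # 20736   next tier of short, non-compound words
    print(wordspace_up_to(8))   # 469070940 forms up to 8 segments long,
                                # i.e. nearly half a billion

[Even a lexicon of a few hundred thousand words would use only a tiny fraction of that space, which is the point about concision being made above.]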
