
Re: Small Derivational Idea

From: David J. Peterson <dedalvs@...>
Date: Wednesday, February 25, 2009, 6:11
On Feb 24, 2009, at 7:31 PM, Alex Fink wrote:

>> I don't think that's the best way to analyze that, but if you did,
>> formally the two are now distinct, and have little if any relation
>> to one another.

Total sidenote: I try to keep my lines short and hit return at the end.
Why on earth do my lines get broken up when quoting?

> Just to be on the same page: what's "not the best way to analyze
> that" is calling them allomorphs? (I wasn't especially meaning to
> support a morphemic analysis there, just to describe what would
> result from one.)

At least with your [swOpla] and [uBeli] example, I don't think I'd call
those allomorphs, but, rather, the same prefix undergoing some sort of
sound change. This is based on rather limited data, though.

>> In order for it to be a workable linguistic framework, it must be
>> constrained in some way, and before I left grad school a few
>> colleagues were coming up with some interesting ideas on just how
>> to do this. I don't think it will come to anything, but it's a
>> start.
>
> I'd be interested to see that, if their ideas panned out to any
> extent.

One thing that I'd really like to see developed (and the person to
follow here is James Kirby, who I think is still at the University of
Chicago) is how speaker knowledge is modeled when there are a number of
perfectly acceptable alternatives (e.g. "cactuses" vs. "cacti"). James
was toying with percentages based on a number of factors, or assigning
values to various realizations to try to predict how likely it was that
a given speaker would produce one or the other in a particular
situation.
>> Unfortunately, that really is the goal of formal theoretical
>> linguistics: to formally exclude that which can't exist, while
>> formally explaining that which can.
>
> Well, yeah. And I expect there are many interesting insights to be
> had along those lines, if you can get at how the human language
> faculty works; and there are places where I think that goal is a
> perfectly good one. But I'm skeptical about trying to do it in too
> all-encompassing a fashion.

So am I.

> For one, and this is something like what I was saying before, it's
> very binary: either you exclude a given possible grammar or you
> implicitly bless it by not having done so. I'd like it more if it
> was something more like describing a probability distribution on
> possible languages, to recognise the fringe cases as fringe. (E.g.
> say that, for some reason, the foonlitude of a natural language is a
> (0,1)-normally distributed real variable, and so all of the
> languages in our sample have foonlitude less than 4. This doesn't
> mean you should look for a reason foonlitude >= 4 is impossible...)
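Alex's foonlitude example is easy to check numerically. In the sketch below, the sample size (roughly the number of attested natural languages) and the seed are arbitrary choices of mine; the point is only that a sample of this size will typically contain no value above 4 even though nothing forbids one.

```python
import random

random.seed(42)  # reproducible illustration

# Draw a "foonlitude" for each of ~7000 hypothetical languages from a
# standard normal distribution (mean 0, standard deviation 1).
sample = [random.gauss(0, 1) for _ in range(7000)]

# P(X >= 4) is about 3.2e-5, so among 7000 draws we expect roughly 0.2
# values that large. Observing none is the normal outcome, not
# evidence that foonlitude >= 4 is impossible.
extreme = [x for x in sample if x >= 4]
```

So an empty `extreme` list tells you nothing about a hard constraint; it is exactly what the fringe of a distribution looks like in a finite sample.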
I think it's completely silly, personally, the entire enterprise
(describing existing languages; excluding non-existing ones). I
disagree with many of the very fundamental goals of linguistics, which
is why I left.

> I think there'll be some essentially hard constraints on human
> languages that arise from properties of the underlying hardware and
> software, and then a good deal of softer constraints that arise from
> other things, either softer constraints on the language faculties
> ('you _can_ do X but it's a PITA to process'), or probable patterns
> of historical change, or yet other influences I don't foresee. And
> if you have to formalise everything in or out, well then what about
> factors of the latter sort?

I'm pretty much in agreement with you here. I think there should be
more investigation into the basic constraints of human thought, and
then see how *those* apply to language.

> Unless your alternative is something I hadn't thought of, I did mean
> objectively simpler, in some sense that transcends whatever language
> or culture we might happen to be using: I might restate this and say
> that it requires fewer tokens of code to write a program that
> catenates "men" to the end of a string than to do something with
> allomorphs etc. etc. (Assuming our bases are represented as
> character strings. Maybe this is the product of a cultural
> assumption, but then it would seem to be a more deep-seated one...)
You're thinking like a programmer. Not everyone thinks like that.

> Native language bias is another thing, as is native grammatical
> tradition bias. Is English itself susceptible enough to a morphemic
> analysis that it leads people into this sort of thing, do you think?
> Or is it the way English is taught? Or both, or neither?

There are certain ideas we have about English that inform the way we
think language works. Basically, we're exposed to the common patterns
of English, and just like the child who grows up thinking everyone is
like them (e.g. all those accounts of Deaf children who are befuddled
when they learn that there are people who don't sign, and think,
naturally, that those individuals are stupid in some way), if English
is all you get, it seems that language just kind of works that way.

> Oh, it shouldn't, not by any means; of course your example with the
> [batol] and the [latob] was regular. That was just an extension of
> my suggestion that it's easy --- if you want _everything_ to be
> regular and easy to learn as a design principle, well then saying
> the word for A and then saying the marker for B if you mean A marked
> for category B is pretty easy.

Certainly. I don't know if it's the easiest, though.

On Feb 24, 2009, at 9:10 PM, Garth Wallace wrote:

> What about saying that "Carlos" is a single morpheme, rather than
> three morphemes "Carl-o-s"? Gender is already lexically determined
> (replacing "os" with "a" gives you a different word entirely), and
> the diminutive is the "-it-" infix before the rime of the final
> syllable in all of those cases?
Right, that's another option. This, of course, goes against the
standard linguistic description of Spanish, which separates all the
-o/-a ending words into stem and masculine/feminine suffix. I believe
all morpheme-based analyses assume that (as do all non-morpheme-based
analyses that I've seen), so a proponent of this analysis would be
fighting an uphill battle.

> This doesn't address the "-(e)cito/a" form of the diminutive for
> stems ending in other consonants, but it does seem to solve the
> lookalike allomorph problem.

Also, regarding this and the above, why does the natural diminutive of
"mamá" seem to be "mamacita", and not "mamita"?

> Not sure what you're getting at with this example. Sure, that's an
> easier way of describing that process, but does any language do
> that? I'd be really surprised if any language had phoneme order
> reversal as a regular, productive process. And so we're back to the
> problem of not ruling out impossible things.

If you go back to Alex's message, we were talking about a conlanger who
set out to create a simple and regular language; we weren't talking
about natural language. The example I gave was of a simple and regular
language that didn't use stems and suffixes, not of a possible natural
language.

-David
*******************************************************************
"A male love inevivi i'ala'i oku i ue pokulu'ume o heki a."
"No eternal reward will forgive us now for wasting the dawn."

-Jim Morrison

http://dedalvs.conlang.org/
