Re: Multimodal language (was: Wordless language (was: NonVerbal Conlang?))
From: Kalle Bergman <seppu_kong@...>
Date: Tuesday, July 4, 2006, 12:13
> Actually, you've slightly misunderstood my aims,
> although understandably
> inasmuch as I didn't provide any implementation
> details.
Right ho! I am enlightened.
Yes, maybe the extra dimensions bestowed by
suprasegmental features can to some extent - although,
as you yourself admit, not fully - compensate for the
lack of spatial dimensions in spoken language. Of
course, this creates other problems; for instance, the
phonology will inevitably become very complex -
perhaps not an ideal property for an auxlang.
But hey, I'm not gonna be a negative nancy.
Engineering problems are there to be solved, not to be
stared at impotently ^_^
/Kalle B
--- Patrick Littell <puchitao@...> wrote:
> On 7/3/06, Kalle Bergman <seppu_kong@...>
> wrote:
> >
> >
> > If I have understood things correctly, this could
> > be a problem, since sign languages don't rely
> > exclusively on a sequential delivery of phonemes,
> > as spoken languages do.
>
>
> Actually, you've slightly misunderstood my aims,
> although understandably
> inasmuch as I didn't provide any implementation
> details.
>
> I'm not suggesting that we take, say, a spoken
> phonology and come up with a manual analogue of it.
> (Or that we take the grammar of a Western
> European-style spoken language and come up with a
> manual version of that.) As you note below, things
> like spelling out a spoken phonology and doing
> English in signs are too slow and cumbersome, and
> would put deaf users at a disadvantage. This is
> entirely true, but I wasn't suggesting anything so
> naive.
>
>
> > Whereas spoken language is one-dimensional
> > (phonemes are spoken one at a time in a long
> > "row"), sign language uses all three spatial
> > dimensions in addition to the time dimension used
> > by spoken language.
>
>
> Treating spoken language as one-dimensional isn't
> wholly adequate,
> actually. We started out working with words as
> strings of phonemes, but the
> last 50 years have broadened our understanding of
> their nonlinear and
> suprasegmental properties. (Note: this doesn't mean
> that some language X
> can't be described in a wholly linear phonology, but
> rather that spoken
> languages have more options than just put this after
> this after this.
> English doesn't make much use of them, but they're
> there.)
>
> So even though sign phonology is indeed
> multidimensional, so is oral
> phonology. (This is not to say that the dimensions
> of sound space could
> make as many precise distinctions as body space
> does, just that a mapping
> between them need not completely impoverish sign's
> wealth of dimensional
> distinctions.)
>
> So let's take a practical example, dealing with one
> of the big stumbling blocks this project might run
> into: simultaneity. Spoken language does, as you
> say, mostly work by putting one thing after
> another. But take a
> sign for "close it (the window) repeatedly". Quite
> a few things might be
> going on at once: the motion for closing, the
> classifier appropriate for
> windows, the movement to indicate habitual aspect.
>
> This superimposition is going to be one of the
> trickiest things to get right
> for a project like this. I don't suggest we leave
> things like this out
> entirely; the result would be, as you say, a very
> *unnatural* sign
> language. But on the other hand, working out a
> spoken language in which
> these simultaneously-occurring sounds are represented
> suprasegmentally will
> not lead us to an unnatural spoken language. It
> will lead us to a language
> very unlike English, yes, and possibly to a
> typologically improbable
> language, but not to something unspeakable.
>
> (As a side note, we often forget the wealth of
> features that might reasonably
> be suprasegmental in a spoken language. The first
> thing that comes to mind
> for this project is tone, of course, but vowel
> quality features belong to
> more than one segment in languages with vowel
> harmony, consonant POA
> features in languages with consonant harmony, other
> features like voice or
> laryngealization... even *nasality* can be
> suprasegmental... for
> example, take a look at languages in the Yanomami or
> Macro-Je families.)
>
> -------------------------------
>
> Anyway, let's play around with this. I'll take
> David's analogy (his section on the similarities and
> differences between spoken and signed phonology is
> well worth reading if you haven't yet) and go from
> there. Here's my
> specification of the spoken language:
>
> Verbs work by the sort of root-and-pattern
> morphology we find in Semitic
> languages, in which the root meaning of the verb is
> indicated by consonants
> and some of the vowels, and other vowels are left
> unspecified, to be filled
> in as part of aspectual inflection. The root for
> "close" is t*t*, where *
> is a vowel slot. The inflection for completive
> aspect involves filling it
> in with [a]s, whereas habitual aspect involves [ai]s
> instead, and
> progressive is [u], etc.
>
> Furthermore, the language exhibits
> classification-by-verbs, aka
> classificatory incorporation, aka type IV noun
> incorporation, although
> instead of doing this by affixation or compounding,
> this is realized
> suprasegmentally by tone patterns. Say, a high-high
> pattern for flat
> things, a high-rising pattern for cylindrical
> things, a low-low pattern for
> roundish things, etc.
>
> In this case, "close it (the window) repeatedly"
> comes out as "taitai" with
> two high tones. (Note: although this language is
> absolutely nothing like
> English, this sort of game is not "unnatural" for
> speech. All of these are
> perfectly reasonable things for a spoken language to
> do.)
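>
> (To make that concrete, here's a quick Python sketch
> of the derivation. The roots, vowel melodies, and
> tone patterns are just the toy examples above, not a
> worked-out proposal:
>
>     # Toy model of the spoken morphology sketched above.
>     # All data here is illustrative.
>     ROOTS = {"close": "t*t*"}   # '*' marks a vowel slot
>
>     ASPECT_VOWELS = {           # melody filled into the slots
>         "completive": "a",
>         "habitual": "ai",
>         "progressive": "u",
>     }
>
>     CLASSIFIER_TONES = {        # suprasegmental classifier
>         "flat": "HH",           # e.g. windows, doors
>         "cylindrical": "HR",    # R = rising
>         "round": "LL",
>     }
>
>     def inflect(root, aspect, classifier):
>         """Fill each vowel slot with the aspect melody
>         and lay the classifier's tone pattern over the
>         whole word as a suprasegment."""
>         segs = ROOTS[root].replace("*", ASPECT_VOWELS[aspect])
>         return segs, CLASSIFIER_TONES[classifier]
>
>     # "close it (the window) repeatedly":
>     print(inflect("close", "habitual", "flat"))
>     # -> ('taitai', 'HH')
>
> The point is just that the three pieces of meaning
> compose without competing for the same linear slot.)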
>
> Now, take something like David's mappings of
> consonants to discrete signs,
> vowels to movements, and handshape to tone. If the
> phoneme realized orally
> as [tj] is realized manually as lateral hand
> contact, and [ai] as a vertical
> circular movement, and the high-high tone
> corresponds to a flat handshape...
> then we get a case where a natural sign and a
> natural spoken word
> correspond. The sign in which one puts two flat
> hands in contact while
> moving in a vertical circle is just spoken as
> [tjaitjai] with HH tones.
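>
> (Continuing the sketch: the channel mapping itself is
> nothing more than a set of lookup tables, using the
> hypothetical correspondences above:
>
>     CONSONANT_TO_CONTACT = {"tj": "lateral hand contact"}
>     VOWEL_TO_MOVEMENT = {"ai": "vertical circular movement"}
>     TONE_TO_HANDSHAPE = {"HH": "flat handshape"}
>
>     def to_sign(consonant, vowel, tones):
>         """Render the oral pieces of one word as a
>         description of the corresponding sign."""
>         return ", ".join([CONSONANT_TO_CONTACT[consonant],
>                           VOWEL_TO_MOVEMENT[vowel],
>                           TONE_TO_HANDSHAPE[tones]])
>
>     print(to_sign("tj", "ai", "HH"))
>     # -> 'lateral hand contact, vertical circular
>     #     movement, flat handshape'
>
> Nothing in the tables cares which channel is the
> "primary" one; they read equally well in either
> direction.)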
>
> And negation might be realized manually as a
> head-shake, but orally as a
> [+NASAL] suprasegment over the entire word, giving
> us [njainjai]. (Sign
> language still wins against spoken when it comes to
> the number of things
> that can be happening at once, but spoken language
> does have usually-unused
> resources that at least help it catch up.)
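>
> (In the same sketch, the negation suprasegment is one
> more word-level operation; the nasalization pairs are
> made up to match the example:
>
>     NASAL_ONSETS = {"tj": "nj", "t": "n"}
>
>     def negate_oral(word):
>         """[+NASAL] over the whole word: nasalize
>         every onset."""
>         for oral, nasal in NASAL_ONSETS.items():
>             word = word.replace(oral, nasal)
>         return word
>
>     def negate_manual(sign_description):
>         """The same suprasegment in the manual
>         channel: add a head-shake."""
>         return sign_description + ", with head-shake"
>
>     print(negate_oral("tjaitjai"))   # -> 'njainjai'
>
> One operation, two channel-appropriate realizations.)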
>
> --------------------------------
>
> > For instance, signers often place recurring
> > concepts in their dialogue at a certain position in
> > the space around them, and when referring to that
> > concept, they indicate the position in question
> > with their hands (a kind of pronoun).
>
>
> This is so, and I'm not 100% sure we could find a
> way to implement this with
> sounds. It may just be a thing that gets left out
> in the end, just like
> subject agreement might be left out in order to
> allow classifiers instead.
>
> It wouldn't necessarily lead to an *unnatural*
> signed language, though, just
> as leaving out subject agreement or plural
> inflection doesn't lead to an
> impoverished spoken language. (On the other hand, I
> think that leaving out
> classification and aspect-as-movement might well
> lead to an unnatural sign
> language, so I think they should be on the short
> list of things-to-keep.)
>
>
> > I have the feeling that this property of sign
> > language would make it problematic to create a
> > language whose underlying representation can be
> > realized as either speech or signs - at least, and
> > this is, granted, a big "at least", if one wishes
> > the language to behave as a natural sign language.
> >
> > Of course, one can always create an inventory of
> > signs which corresponds to an inventory of spoken
> > phonemes, and which are supposed to be used in a
> > strictly sequential manner like the phonemes of
> > spoken language. This would then be reminiscent of
> > "signed English" - that is, the convention of
> > assigning a hand gesture to each letter in the
> > English alphabet, used to spell out English words
> > in sign language. It is illustrative, however, that
> > signed English isn't used as a native language by
> > any community of deaf people. The problem with
> > signed English and similar conventions is that
> > they're simply too slow; one can never achieve the
> > same speed when signing letters as when speaking,
> > which everyone can easily convince themselves of by
> > looking up signed English on the web and attempting
> > to sign a few easy words as quickly as they can -
> > and then comparing with the speed of saying the
> > words in question. Natural sign languages, on the
> > other hand, which use four dimensions instead of
> > just one, can easily bring their information rate
> > up to a level equalling that of spoken language.
> >
> > If one aims to construct an auxlang which can be
> > used equally well by hearing and deaf people, then
> > this is the wrong way to go; the hearing people
> > would get an obvious advantage, because the
> > language would not behave as a natural sign
> > language.
>
>
> Entirely true. As I mentioned above, though, a sign
> language rigidly
> constrained to behave like a spoken language wasn't
> what I was going for.
> The game isn't to try to force one mode of
> communication into the mold of
> another, but to try to find those similarities that
> allow for natural
> communication within both. Not easy, definitely,
> but I do believe possible.
>
> -- Pat
>