Re: Multimodal language (was: Wordless language (was: NonVerbal Conlang?))
From: Kalle Bergman <seppu_kong@...>
Date: Monday, July 3, 2006, 8:30
Hi
> If I were an auxlanger, point #1 on my manifesto
> would be a phonology in
> which each underlying representation can surface
> either orally or manually.
> That is, some underlying form /xyz/ could either
> surface as, say, [kai] or
> as [thumb-touching-nose] or, preferably, both at
> once.
If I have understood things correctly, this could be a
problem, since sign languages don't rely exclusively
on a sequential delivery of phonemes, as spoken
languages do. Whereas spoken language is
one-dimensional (phonemes are spoken one at a time in
a long "row"), sign language uses all three spatial
dimensions in addition to the time dimension used by
spoken language.
While the morphology and syntax of a spoken language
depend exclusively on placing certain morphemes in a
certain order, the grammar of a sign language depends
just as much on how the morphemes are arranged in
SPACE. For instance, signers often place recurring
concepts in their dialogue at a certain position in
the space around them, and when referring to such a
concept, they indicate the position in question with
their hands (this functions as a kind of pronoun).
I have the feeling that this property of sign language
would make it problematic to create a language whose
underlying representation can be realized as either
speech or signs - at least, and this is, granted, a
big "at least", if one wishes the language to behave
as a natural sign language.
Of course, one can always create an inventory of signs
which corresponds to an inventory of spoken phonemes,
and which is supposed to be used in a strictly
sequential manner, like the phonemes of spoken
language. This would then be reminiscent of "signed
English" - that is, the convention of assigning a hand
gesture to each letter of the English alphabet, used
to spell out English words in sign language. It is
telling, however, that signed English isn't used
as a native language by any community of deaf people.
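To make the contrast concrete, here is a minimal sketch (in Python, with entirely invented handshape names - none of these correspond to any real manual alphabet) of what such a strictly sequential phoneme-to-sign inventory amounts to:

```python
# A strictly sequential phoneme-to-sign inventory, in the
# spirit of fingerspelling: one gesture per phoneme, produced
# one after another. All handshape names are invented purely
# for illustration.
HANDSHAPES = {
    "k": "C-hand",
    "a": "flat palm",
    "i": "pinky extended",
}

def sign_sequentially(word):
    """Spell a word as a one-dimensional string of gestures,
    in the same order its phonemes would be spoken."""
    return [HANDSHAPES[p] for p in word]

print(sign_sequentially("kai"))
# -> ['C-hand', 'flat palm', 'pinky extended']
```

Note that such a scheme uses only the time dimension and none of the spatial ones, which is precisely why it stays slow.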
The problem with signed English and similar
conventions is that they're simply too slow; one can
never achieve the same speed when signing letters as
when speaking. Anyone can easily convince themselves
of this by looking up signed English on the web,
attempting to sign a few easy words as quickly as
they can, and then comparing with the speed of saying
the words in question. Natural sign languages, on the
other hand, which use four dimensions instead of just
one, can easily match the information rate of spoken
language.
If one aims to construct an auxlang which can be used
equally well by hearing and deaf people, then this is
the wrong way to go; hearing speakers would get an
obvious advantage, because the language would not
behave as a natural sign language.
/Kalle B
--- Patrick Littell <puchitao@...> wrote:
> If I were an auxlanger, point #1 on my manifesto
> would be a phonology in
> which each underlying representation can surface
> either orally or manually.
> That is, some underlying form /xyz/ could either
> surface as, say, [kai] or
> as [thumb-touching-nose] or, preferably, both at
> once. Each phonology would
> have to be simpler than in an oral-only or
> manual-only language, but for an
> IAL that's a feature rather than a bug.
>
> Some benefits:
>
> 1. Would greatly reduce the communication barrier
> between the deaf and the
> hearing.
> 2. Would provide additional confirmation of a word's
> identity for the
> hard-of-hearing.
> 3. Children could begin acquisition at an earlier
> age than with a purely
> spoken language.
> 4. Unlike, say, an ASL signer having to use English
> to write in, it would
> not be necessary to learn a new language just to
> write.
> 5. Useful in situations in which conditions prevent
> easy oral
> communication. Like at a construction site, or
> underwater, or while
> housebreaking, or while your roommate is asleep, or
> at the dentist's.
> Actually, my first thought was, hey, this would be
> useful in a noisy bar.
>
> Anyway, I'm not an auxlanger, and not given to
> writing manifestos, but I
> thought I'd throw this idea out here. Has anyone
> tried to implement
> something like this?
>
> What other modalities could the underlying form
> surface as? Other than
> writing, which is the usual second mode. On that
> note...
>
> 6. If there were a correspondence between sign and
> written representation
> that was somewhat transparent, learning to read
> would become a lot easier.
> Say we use the Roman alphabet -- it is an IAL, after
> all -- and consonants
> are co-realized as handshapes. (David's analogy
> makes more sense, of
> course, but just for argument...) Say, further,
> that the /k/ sound is
> matched to a C handshape and /l/ to a flat palm,
> etc. Now the process of
> associating letters to sounds has gotten one step
> easier.
>
> Oh, and on a final note, I rather like the term
> Synaesthetic Language for
> something like the above.
>
> -- Pat
>
> On 7/2/06, David J. Peterson <dedalvs@...>
> wrote:
> >
> > Eldin wrote:
> > <<
> > Do Sign Languages consist of phones?
> > Are Sign Languages natlangs?
> > >>
> >
> > Yes and most definitely yes. There's been lots of
> work done
> > (fairly) recently on the phonology of sign
> languages. As an
> > analog, the place of a sign (its location in space
> and/or in relation
> > to the body) is similar to a consonant in spoken
> language; the
> > movement of a sign is similar to a vowel; and the
> handshape
> > one uses is similar to a tone (according to at
> least one theory).
> > I wrote an IPA for signed languages which may be a
> useful
> > introduction:
> >
> >
> > http://dedalvs.free.fr/slipa.html
> >
> > -David
> >
>
> > *******************************************************************
> > "A male love inevivi i'ala'i oku i ue pokulu'ume o
> heki a."
> > "No eternal reward will forgive us now for wasting
> the dawn."
> >
> > -Jim Morrison
> >
> >
> > http://dedalvs.free.fr/
> >
>