A gripping language, and a question about suprasegmental analysis (WAS: re: conlanging partners)
From: Sai Emrys <sai@...>
Date: Sunday, November 23, 2008, 2:41
On Fri, Nov 21, 2008 at 11:48 AM, Sai Emrys <sai@...> wrote:
> recently we started figuring out how we might be able to make a conlang entirely mediated
> by touch (of the sort where we could talk to each other discreetly,
> masked by normal behavior like holding hands).
So, we discussed this again more recently.
To specify the domain better, the language we're trying to make should be:
* able to communicate simple and maybe meta* things (doesn't need to
be capable of Shakespeare or neuroscience)
* communicable entirely by the speakers' hands being grasped together
or the like (as is socially normal for couples in most situations -
though I'd like to expand this to other forms of casual touch also)
By "meta", I mean that the grip-language may occur in parallel to an
ongoing, and separately sensible, acoustic language - and would act as
some sort of meta-commentary to it in real time.
First, one thing came up that's a philosophical? question of analysis.
One phonetic feature of the domain is that the primary two grips
(opposite hands gripping, thumbs same direction, palms together,
fingers interlaced) are symmetrically asymmetric - A's thumb is either
outside or inside B's.
Switching between these two grips (let's call them A or B dominant
based on whose thumb is on the outside) is a relatively elaborate
cascade or disengage-reëngage process, thus seems like something that
would not be done frequently.
Alex's analogy for this was to vowel harmony & suprasegmental features
more generally, which I think is apt.
The question is, does one analyze the words [k2r2m] vs [korom] as:
a) being phonemically /k2r2m/ vs /korom/, with a non-semantic rule
that vowels are supposed to be frontness-harmonic, or
b) being phonemically both /k$r$m/ where $ signifies a mid rounded
vowel, frontness unspecified, and frontness is a separate bit property
of the whole word
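To make the contrast concrete, here's a minimal sketch (all names and the toy lexicon are mine, not part of any real analysis) of the two representations: analysis (a) stores fully specified phoneme strings and checks a harmony constraint; analysis (b) stores an underspecified skeleton plus one word-level frontness bit that is filled in at the surface.

```python
# Toy model of the two analyses of frontness harmony (hypothetical names).
# X-SAMPA: "2" = front mid rounded vowel, "o" = back mid rounded vowel.
FRONT = {"2"}
BACK = {"o"}

def harmonic(word: str) -> bool:
    """Analysis (a): True iff all vowels in the word agree in frontness."""
    vowels = [c for c in word if c in FRONT | BACK]
    return all(v in FRONT for v in vowels) or all(v in BACK for v in vowels)

def realize(skeleton: str, front: bool) -> str:
    """Analysis (b): fill each underspecified slot ($) from the word's
    single frontness bit."""
    return skeleton.replace("$", "2" if front else "o")

print(harmonic("k2r2m"))             # a well-formed word under (a)
print(harmonic("k2rom"))             # a harmony violation under (a)
print(realize("k$r$m", front=True))  # surface form under (b)
```

Under (a) the disharmonic string /k2rom/ is representable but filtered out by a rule; under (b) it isn't representable at all, which is part of what makes (b) feel like the right analysis for a whole-word feature.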
Another example from ASL is hand dominance. E.g. HELP is dominant hand
/A/ resting on base /B/; dominance is a non-phonological property in
ASL (except in explicitly visual-spatial context). One could however
analyze this as actually being two distinct signs, left A on right B
vs right A on left B, with some handwaving about some signers
preferring one form over the other, the two forms being allophonic.
However, suppose that I were to create ASL', in which using reverse
dominance to one's true dominance carries ironic pragma. How then
would one analyze it - as being a feature of each phone, of each
phoneme, of each "word" (granted that 'word' is a bit ambiguous in
ASL), or of a sentence / utterance overall? At some level it is
specified, and at the levels below that it is not.
My preference is to analyze this sort of thing as being a bit
"belonging to" the level at which it changes meaning - so if e.g.
[k2r2m] vs [korom] is cat vs dog, then that's to the word itself; if
it's ironic vs normal then it's to the utterance overall (unless it's
just that word that's emphasizedly ironic, in which case the word
again); and if instead it's indicative of deferential vs superior
politeness marking, then certainly to the entire utterance, or even
the conversation as a whole.
I'd be interested to read y'all's thoughts on this.
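As a toy illustration of that "assign the feature to the lowest level at which it changes meaning" heuristic (the lexicon and function names here are invented for the example): flip the feature and see whether the result is a different lexeme; if so, the feature belongs to the word, otherwise push it up to the utterance/pragmatic level.

```python
# Hypothetical two-word lexicon where frontness is contrastive.
LEXICON = {"k2r2m": "cat", "korom": "dog"}

def flip_frontness(word: str) -> str:
    """Swap the frontness value of every mid rounded vowel."""
    swap = {"2": "o", "o": "2"}
    return "".join(swap.get(c, c) for c in word)

def feature_level(word: str) -> str:
    """Word-level if flipping frontness yields a *different* lexeme;
    otherwise treat frontness as an utterance-level (pragmatic) feature."""
    flipped = flip_frontness(word)
    if LEXICON.get(flipped) not in (None, LEXICON.get(word)):
        return "word"
    return "utterance"

print(feature_level("k2r2m"))  # contrastive here, so word-level
```

The same test generalizes to the ASL' case: if reversed dominance never changes which sign you get, only its pragmatics, the dominance bit lands at the utterance level.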
Second, we made a preliminary pass at enumerating the phonological
inventory. This is divided into a few semi-parallelized channels:
* grip: A-dominant, B-dominant; possibly other variants also (not yet explored)
* thumb disposition: default, dominant thumb under sub thumb
(sub-dominant?), and dominant pointer over sub thumb (double dominant)
- I do double dominant by leaving dom thumb as is, and just moving dom
pointer over the tip of sub thumb, in a somewhat side-by-side position
* disposition transitions: short-short, short-stroke, or stroke-* (I
found stroke-stroke and stroke-short to be too hard to reliably do)
- short = minimal contact w/ other finger except as needed to transition
- stroke = stroke up or down other finger during that segment of the transition
* presses:
- 1..5th knuckle press (coded by recipient's knuckle, thumb = 1st)
- 1..4th gap press (1st gap = thumb web)
- 1..4th short gap press (gap press is made w/ finger extended, short
gap press w/ finger pad pulled back to be against the fleshier bit)
- ? 1..5th finger squeeze (coded by squeezer's lower-ordinal squeezing
finger, e.g. dom 1st squeeze = squeeze sub thumb w/ thumb & pointer)
- ? finger separation (only possible from double-dominant grip)
- ? some subset of the combinations thereof
* elbow-dominance (walking hand-in-hand, dominant elbow is in front)
* torsion (?neutral, dominant out, and dominant in - e.g. dominant out
has the whole dominant thumb base outside the sub thumb)
- ? possibly these can be characterized as motions instead of states
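One way to see how much contrast this inventory buys is to treat each channel as an enumerated feature on a single "grip-phone" record. This is only a sketch under my own naming assumptions (GripPhone and all field values are hypothetical labels for the channels above, and it ignores the co-occurrence restrictions noted, e.g. finger separation requiring double-dominant grip):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class GripPhone:
    grip: str              # "A" or "B" dominant (thumb outside)
    thumb: str             # "default", "sub_dominant", "double_dominant"
    transition: str        # "short-short", "short-stroke", "stroke-*"
    press: Optional[str]   # e.g. "knuckle-3", "gap-1", "short-gap-2", or None
    elbow: str             # "A" or "B" elbow in front
    torsion: str           # "neutral", "dominant_out", "dominant_in"

# One possible segment: A-dominant grip, default thumbs, a press on the
# recipient's 3rd knuckle, neutral torsion.
p = GripPhone(grip="A", thumb="default", transition="short-short",
              press="knuckle-3", elbow="A", torsion="neutral")
print(p)
```

Even before resolving the co-occurrence constraints, multiplying out the channel values gives a rough upper bound on distinct segments, which is useful for deciding how much of the inventory to actually make phonemic.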
Some possible issues with the domain:
* for me (though not for Alex), fourth and fifth finger action is not
entirely separable (so there will be noise between the two)
* we have different grip dominance preference (interlace your fingers
together - which way do you prefer? I like my right thumb dominant, he
likes left), so one of us is always a bit awkward with a grip
* Alex dislikes the double dominant position for being too squeezy,
for making thumb usefulness worse, and magnifying grip asymmetry
* thumb disposition and grip both significantly affect the motions one
can do, and the perception of them; one issue e.g. is whether to code
recipient xor presser finger as phonological
Anyone done similar?
Any languages for deaf-blind worth stealing from (e.g. that aren't
just some originally-for-sighted sign language done using recipient
hands to feel the signer's)?