Sensory modalities in conlangs
From: Sai Emrys <sai@...>
Date: Friday, March 9, 2007, 11:20
I'm watching a podcast from DEFCON 14 about NLP. It reminds me of
something I read of Suzette Elgin's (she's written some stuff about
communication styles, some for cops, some more general).
Basically the idea is that people communicate in three modes: visual,
auditory, or tactile (or maybe even olfactory).
E.g. "I see what you mean"; "does that sound good to you"; "that feels
like a good idea". Etc etc etc.
The idea is that you match whichever mode the other person is using,
and this will enable greater rapport. I'm not aware of any actual
empirical studies confirming this hypothesis, but maybe they exist.
(Side notes re NLP: Also match gestures, stance, jargon, posture,
volume, speed, etc; check for dilation, flush, mirroring; then probe
gently to see if they'll mirror your changes.
Another empirical claim (again, dunno how well founded) is that
auditory 'people' look sideways, visual up/side, and tactile down/side
when doing recall. Or that up-right = visual construct, right =
auditory construct, down-right = kinesthetic, up-left = visual recall,
left = auditory recall, down-left = internal dialog. And the reverse
for left-handed people. But this came with a claim that this is
because someone is "looking towards the part of the brain" that's
being used, which raises a bunch of "bullshit!" flags for me. I'd like
to see solid experimental results if anyone has 'em.)
My question is: (how) do you address this in your conlang(s)? What
words or phrases do you use to represent emotions? What anatomical
analogies (e.g. Shakespearean 'liver' or 'bile')? Have you tried
making a totally sense-neutral system to discuss perception and
emotion? If so, how'd it work?
[Fill in more questions here to make it open. ;-)]
- Sai