Re: Communication methods for people with extremely limited articulation
From: Sai Emrys <saizai@...>
Date: Friday, January 30, 2009, 22:02
On Fri, Jan 30, 2009 at 2:59 AM, Lars Finsen <lars.finsen@...> wrote:
>>> Would you prefer the language (or code) to be tied to English, or would
>>> you like a more independent communication system?
>>
>> It ought to be compatible with whatever L1; certainly at least English
>> and French.
>> I think the only difference would be in the mapping and frequency list
>> / phonotactics list.
>
> Sorry, but here I don't follow you. What is a 'mapping and frequency list'?
> Not sure about 'phonotactics list' either. Do you think the language should
> have different phonotactics in different L1 zones? Should it contain message
> parts that correspond (are mapped) to words in the various L1s?
Mapping: what productions (e.g. Morse code strings) map to what characters.
Frequency list: the order, from most to least common, of characters in
the language.
T9-ish frequency list: the same, but conditioned on the previous
characters in the current word (i.e. a tree structure whose top level
is the basic frequency list).
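To make the tree concrete, here's a rough Python sketch; the
character orderings below are invented for illustration, and a real
list would be derived from an L1 corpus:

BASIC_FREQ = "etaoinshrdlcumwfgypbvkjxqz"  # top level: plain freq list

# Maps the characters typed so far in the current word to the ranked
# list of likely next characters (most likely first).
FREQ_TREE = {
    "": BASIC_FREQ,  # no prefix yet: fall back to the basic list
    "t": "hoiera",   # after 't', 'h' would be the most common follower
    "th": "eaior",
}

def ranked_next_chars(prefix):
    """Candidate next characters for this prefix, most likely first."""
    # Back off to ever-shorter prefixes until one is in the tree;
    # the "" entry guarantees termination.
    while prefix not in FREQ_TREE:
        prefix = prefix[1:]
    return FREQ_TREE[prefix]

The Morse-like mapping layer then just has to hand short productions
to whatever this ranks highly.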
> I am thinking that this non-verbal language should have bigger atoms of
> information, expressing wider concepts than words, else it would be too slow
> and unwieldy. For example we could have one distinct signal for each of the
> main feelings, and means to modify them in order to go into more detail.
> Same with other concepts. For example, if you want food, simply open your
> mouth. If you want a hot dog, open your mouth and stick your tongue out.
> Etc...
I think it'd be helpful to have this to some extent, but only for a
very limited set of the most important concepts.
Other than that, just instruct them to drop everything that's not
crucial, e.g. instead of "my foo hurts bad", just "hurts foo".
> Hmm, you are thinking of a system that consists half of a signalling system
> for urgent messages and half of a system to spell words in your L1. It makes
> sense to use a language that's already available. But this spelling would
> tend to be very slow. It would help if some T9-like electronic means were
> available. Of course, any interlocutors would gladly act like natural T9
> systems themselves, completing words for the patient. The patient then only
> needs a signal to agree or disagree with the suggested completion.
Correct, that's possible.
I'd like to avoid electronic assistants simply because they're
cumbersome and unlikely to be on hand; once you allow them, you start
going into the "well, we might as well add..." realm, and that's hard
to compete in.
I see two means of articulation:
1. patient-directed - the patient signals in some established
protocol, same as any other system (like Morse code, except probably
with more articulators than just the one a/b/null channel)
2. other-directed - the other person goes through a routine; the
patient only needs to give appropriately timed boolean responses
Both should take advantage of compression and pragmatics to shorten
the amount of content that needs to be transmitted to get the message
across, but should still use (more or less) the common L1, even if
it's somewhat pidginized.
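On the compression side, the obvious move is Huffman-style: the most
frequent characters get the shortest signal strings. A throwaway
sketch, assuming a two-symbol (a/b) articulator and invented
frequencies:

import heapq

def huffman_codes(freqs):
    """Build a/b codewords: frequent symbols get short codes."""
    # Heap entries: (weight, unique tiebreaker, {symbol: code so far})
    heap = [(w, i, {sym: ""}) for i, (sym, w) in enumerate(freqs.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        w1, _, c1 = heapq.heappop(heap)
        w2, _, c2 = heapq.heappop(heap)
        merged = {s: "a" + code for s, code in c1.items()}
        merged.update({s: "b" + code for s, code in c2.items()})
        heapq.heappush(heap, (w1 + w2, tie, merged))
        tie += 1
    return heap[0][2]

codes = huffman_codes({"e": 12.7, "t": 9.1, "a": 8.2, "q": 0.1})
# -> 'e' ends up with a one-symbol code, 'q' with a three-symbol one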
Patient-directed would be more of a long-term, learned solution;
other-directed is faster to bootstrap but also slower to transmit.
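To show what the other-directed routine amounts to: the interlocutor
runs a fixed scan and the patient only ever answers yes/no. A minimal
sketch, with a hypothetical ask() callback standing in for whatever
yes/no channel the patient has (blink, twitch, gaze):

FREQ_ORDER = "etaoinshrdlcumwfgypbvkjxqz "  # trailing space = word break

def spell_other_directed(ask):
    """Interlocutor-driven spelling; ask(prompt) returns True/False."""
    message = ""
    while True:
        for ch in FREQ_ORDER:
            if ask("next: %r?" % ch):
                message += ch
                break
        else:
            # A full pass with no 'yes' ends the message.
            return message

Ordering the scan by the frequency tree from earlier (conditioned on
the letters so far), and letting the interlocutor jump to whole-word
guesses, is what would keep this tolerably fast.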
>> *nod* I'm not familiar with diabetic seizures. What communication
>> ability did they have? What degree of consciousness?
>
> The degree of consciousness was clearly rapidly diminishing. But the person
> stubbornly refused to admit that something was wrong. They said it was
> typical. If the brain stops working, I guess a means of communication isn't
> what you need the most, really.
FWIW, I've always been well aware of my own degree of consciousness.
But yeah, this system won't deal very well with that, except possibly
reverting to the (easier) other-directed style. It presumes full
sentience.
>> I'd rather that they be treated abstractly - e.g. "first binary
>> phoneme, second binary phoneme, first pointing phoneme [e.g. eye
>> gaze]" etc.
>
> Maybe it's a good idea, as patients have different degrees of control
> over their head, toes, fingers, etc.
Definitely. Some articulators may only offer 'move or no move'; an
articulation may come with a few seconds of noise afterward; the
patient may or may not have control over degree or direction; etc.
> So how many phonemes (which mostly aren't exactly "phon"-emes) should we
> reckon that we have, actually?
A variable number. That's the kicker. ;-)
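Concretely, "variable" could just mean that each patient gets
profiled into an inventory of abstract channels, and codewords are
defined over whatever exists. A hypothetical sketch (the channel
names and numbers are made up):

# Each entry is one abstract "phoneme" channel: how many distinct
# values the patient can reliably produce on it, and how long it's
# unusable afterward (the post-articulation noise mentioned above).
PATIENT_A = {
    "binary_1": {"values": 2, "recovery_s": 0.5},  # e.g. an eyelid
    "binary_2": {"values": 2, "recovery_s": 3.0},  # e.g. a finger twitch
    "gaze":     {"values": 4, "recovery_s": 0.5},  # e.g. up/down/left/right
}

def symbols_available(profile):
    """Distinct single-articulation symbols this patient can make."""
    return sum(ch["values"] for ch in profile.values())

The mapping layer then assigns codes over these abstract symbols, so
the same language works whether a patient has one binary channel or
several richer ones.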
> I think you are referring to the sequence of instructions to nurses or other
> people who deal with the person who has limited articulation. Are you?
Yes.
> Have you gotten any others interested in the project yet? I'm surprised that
> there aren't any others interested.
Only Alex (per his message).
I emailed one researcher who's dealing with a patient with LIS, but he
appears to be exclusively focused on a BCI-based sound creation system
(having the patient train to create formants and thus sounds), and
doesn't seem to understand the idea I presented at all even after
clarification.
Great stuff, mind you, but it's a classic case of blinkered focus on
one's own work. :-/
- Sai