If the path from non-verbal to abstract language communication were a communication protocol like TCP, what would the handshake look like? (This would compress roughly 24 months of human cognitive/social development.) Do any existing protocols use the same pattern for handshaking, negotiating meaning, and building understanding? Contrast with the development of pidgins?

I’m hoping I’ll eventually end up with a connection to delay-tolerant protocols, but first I’m looking to see whether anybody has modeled human cognitive development as a communication protocol, either intentionally or accidentally.
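As a loose sketch of the question: TCP's three-way handshake (SYN, SYN-ACK, ACK) maps surprisingly cleanly onto the bid-and-acknowledge loop of early joint attention. The stage labels below are my own hypothetical analogy, not taken from any developmental model.

```python
# A loose sketch mapping TCP's three-way handshake onto the joint-attention
# "handshake" of early communication. The developmental glosses are my own
# speculative analogy, not an established model.

def handshake():
    """Simulate a SYN / SYN-ACK / ACK exchange as a list of events."""
    events = []
    # SYN: the infant bids for attention (cry, gaze, gesture).
    events.append(("infant", "SYN", "bid for attention"))
    # SYN-ACK: the caregiver acknowledges and bids back (eye contact, speech).
    events.append(("caregiver", "SYN-ACK", "acknowledge and respond"))
    # ACK: the infant confirms the shared channel; joint attention established.
    events.append(("infant", "ACK", "joint attention established"))
    return events

for who, flag, meaning in handshake():
    print(f"{who:>9} -> {flag:<7} {meaning}")
```

The interesting part of the analogy is that, unlike TCP, the human version renegotiates the "protocol" itself over those 24 months, which is where the pidgin comparison might bite.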
===
Deafblindness is classified four ways:
a. Person who is deafblind from birth or before developing language. This group is identified herein by congenital deafblindness.
b. Person who has become deafblind (acquired deafblindness). This is the most frequent type of deafblindness among adults and the elderly, who typically present appropriate cognitive and communicational development [12; 24]. In a Montreal study, 69% of deafblind people admitted to rehabilitation were at least 65 years old [26]. They rarely had total deafness and blindness. In Canada, the proportion of people 65 and over who have deafblindness is growing steadily. It was around 45% in 2005, according to statistics cited by Wittich, Watanabe & Gagné (2012).
c. Person deaf or presenting a hearing impairment from birth, losing vision. These individuals have acquired language while relying significantly on their visual abilities (oral or gestural language); in addition, they already know sign language [12]. Vision loss is therefore a great shock to them, as they lose their ability to read lips and to see facial expressions and signs. Those who use sign language must make a transition to the tactile mode to continue to communicate. Usher syndrome belongs in this profile. In a study by Wittich et al. (2012), it made up the second largest deafblindness diagnostic group (21%). The deafness can be moderate or profound from birth, or progressive. The person also experiences a gradual reduction of the visual field and deterioration of night vision.
d. Person blind or presenting a visual impairment from birth, losing hearing. These people generally have developed good language abilities. Some have used Braille or the white cane for a long time. Loss of hearing information causes great difficulties in terms of functional autonomy and communication [12].
===
  • Voice amplification system.
  • Labial (lip) reading.
  • Tactile speech reading (Tadoma method; Tactiling).
  • Sign language:
    1. Non-adapted;
    2. Adapted (e.g. sign language in a narrow residual field of vision; in front of the face; tactile sign language, etc.).
  • Fingerspelling (spelling difficult words or proper nouns in space, in a narrow visual field, or tactilely).
  • Print-on-palm.
  • Writing:
    1. Paper support with black marker;
    2. Dry erase board;
    3. Computer with increased onscreen font size, or equipped with a screen reader, screen magnification software (e.g. ZoomText) or a Braille display device;
    4. Fingerspelling and writing in the hand;
    5. Braille.
  • Reference-to-object systems (e.g. presentation of an object for purposes of description, anticipation or reminder; communication board).
  • Pointing.
  • Technological aids:
    1. Braille writing and tactile systems (e.g. Braille note taker; TellaTouch, which lets the deafblind person read in Braille a message typed by the interlocutor on the device’s standard keyboard; TeleBraille, etc.);
    2. Telephone systems for the deaf (visual screen) or deafblind (refreshable Braille display);
    3. Other technological systems (e.g. Light Writer).

===

There’s a whole thing called “communication repair strategies”. I had no idea. I came across the phrase in this material, looked it up, and indeed “communication repair strategies” is a thing — and a significant one.
 
Conversation repair strategies:
check message comprehension;
repeat the message;
speak more clearly;
simplify the message (vocabulary, sentence structure);
change the syntax or structure of the message itself;
provide non-verbal cues; show the task;
give the person time to react.
“Communication Repair” – love that phrase. It comes from behavior analysis, but it describes quite simply how babies teach us to communicate with them, and they with us. Pets do it too. Computers don’t do it well just yet, but they will.

The Prain et al. (2010) review of the literature shows that many of these individuals never develop formal language and communicate instead through body movements, muscular tension, postures and gestures. They manifest stereotypical and idiosyncratic behaviour — meaning peculiar to each person — so their potential communication partners must be attentive and skilled at interpretation. Even when living in a specialized residence, their personal interactions may be rare. Prain et al. (2010) observed, in a specialized residence, naturally occurring interactions between adults with congenital deafblindness and caregiving personnel: the deafblind residents were very disengaged, and their interactions with the personnel were rare [18].

===

 

“The partner should first discuss with the deafblind person the latter’s preferred communication mode, style and speed”
 
Sounds like dial-up modems, which I’m glad for:
MODE: Simplex / Half-duplex / Full-duplex.
STYLE: Characters (7 or 8 bits), Parity (even/odd/none), Stop bits (1 or 2).
SPEED: 75 bps – 56 kbps.
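The "discuss preferred mode, style and speed" step can be written down as a serial-line parameter set, in the spirit of how modems settle on whatever both ends can sustain. This is only my analogy made concrete; the field names mirror the text, and none of this is a real modem protocol.

```python
# Sketch: "preferred communication mode, style and speed" expressed as a
# serial-line parameter set. The fallback rules imitate modem negotiation
# (settle on what both sides support); nothing here is a real protocol.

from dataclasses import dataclass

@dataclass
class LinkParams:
    mode: str        # MODE: "simplex", "half-duplex", or "full-duplex"
    data_bits: int   # STYLE: 7 or 8
    parity: str      # STYLE: "even", "odd", or "none"
    stop_bits: int   # STYLE: 1 or 2
    speed_bps: int   # SPEED: 75 .. 56_000

def negotiate(offered: LinkParams, supported: LinkParams) -> LinkParams:
    """Fall back to the slower / simpler setting wherever the ends differ,
    the way modems settle on the highest rate both sides can sustain."""
    return LinkParams(
        mode=offered.mode if offered.mode == supported.mode else "half-duplex",
        data_bits=min(offered.data_bits, supported.data_bits),
        parity=offered.parity if offered.parity == supported.parity else "none",
        stop_bits=max(offered.stop_bits, supported.stop_bits),
        speed_bps=min(offered.speed_bps, supported.speed_bps),
    )

link = negotiate(
    LinkParams("full-duplex", 8, "even", 1, 56_000),
    LinkParams("half-duplex", 7, "none", 2, 9_600),
)
print(link)
```

The human version of `negotiate` is the quoted advice itself: ask first, then adapt downward to whatever mode, style and speed the other person can actually receive.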

Half-duplex is usually preferred for clear communication between humans, but we actually operate in full-duplex most of the time: composing our response while also listening. True half-duplex would be “hanging on every word” of the speaker or writer and not processing until we’ve finished receiving.
 
Since modems’ modes were not as varied as human modes, I’ll be reading “Communicating: The Multiple Modes of Human Communication”, which covers non-verbal communication — and even animals and plants — so it looks good.
Box 2.2 Seven non-verbal modes of expression
Proxemics (structuring and using space to communicate)
Haptics (using touch to communicate)
Chronemics (using time)
Kinesics (visual aspects of bodily movement)
Physical appearance (carrying messages to others)
Vocalics (vocal as opposed to verbal aspects of speech)
Artefacts (both as message vehicle and influencing other codes)
(after Burgoon and Guerrero 1994: 125, 165–7)
———-
Box 2.1 ‘The five senses’
Sight
Hearing
Smell
Taste
Touch
————
Box 2.3 The multiple human intelligences
Linguistic
Musical
Logical-mathematical
Spatial (often, but not necessarily, visual)
Bodily-kinesthetic
Personal intelligences
(based on Gardner 1983)
—-
Box 2.4 Main channels for animal communication
Sight – visual channel
Sound – auditory channel
Smell – olfactory/chemical channel
Touch – tactile channel
Vibration – seismic channel
Electric fields – electrical channel
====
VIA:
“Communicating: The Multiple Modes of Human Communication” by Ruth Finnegan
====
