Communication between people is inherently multimodal. People employ speech, facial expressions, eye gaze, and gesture, among other facilities, to support communication and cooperative activity. Communication becomes more complex when a person lacks a modality such as hearing, often resulting in dependence on another person or an assistive device to facilitate communication. This paper examines communication about medical topics through Shared Speech Interface, a multimodal tabletop display designed to assist communication between a hearing and a deaf individual by converting speech to text and representing dialogue history on a shared interactive display surface. We compare communication mediated by the multimodal tabletop display with communication mediated by a human sign language interpreter. Results indicate that the multimodal tabletop display (1) allows the deaf patient to watch the doctor while she is speaking, (2) encourages the doctor to exploit multimodal communication such as co-occurring gesture and speech, and (3) provides shared access to persistent, collaboratively produced representations of conversation. We also describe extensions of this communication technology, discuss how multimodal analysis techniques are useful in understanding the effects of multiuser multimodal tabletop systems, and briefly allude to the potential of applying computer vision techniques to assist analysis.