DCog-HCI Lab @ UCSD - Publications

A.M. Piper and J. Hollan. Analyzing Multimodal Communication around a Shared Tabletop Display. Proceedings of the European Conference on Computer-Supported Cooperative Work (ECSCW) 2009, 283-302.

Abstract

Communication between people is inherently multimodal. People employ speech, facial expressions, eye gaze, and gesture, among other facilities, to support communication and cooperative activity. The complexity of communication increases when a person lacks a modality such as hearing, often resulting in dependence on another person or an assistive device to facilitate communication. This paper examines communication about medical topics through Shared Speech Interface, a multimodal tabletop display designed to assist communication between a hearing and a deaf individual by converting speech to text and representing dialogue history on a shared interactive display surface. We compare communication mediated by a multimodal tabletop display and by a human sign language interpreter. Results indicate that the multimodal tabletop display (1) allows the deaf patient to watch the doctor when she is speaking, (2) encourages the doctor to exploit multimodal communication such as co-occurring gesture and speech, and (3) provides shared access to persistent, collaboratively produced representations of conversation. We also describe extensions of this communication technology, discuss how multimodal analysis techniques are useful in understanding the effects of multiuser multimodal tabletop systems, and briefly allude to the potential of applying computer vision techniques to assist analysis.