Files in This Item:
File: THMS.2014.2326873.pdf (2.49 MB, Adobe PDF)
Title: Multiparty Interaction Understanding Using Smart Multimodal Digital Signage
Authors: Tung, Tony
Gomez, Randy
Kawahara, Tatsuya  (ORCID: https://orcid.org/0000-0002-2686-2296, unconfirmed)
Matsuyama, Takashi
Author's alias: 河原, 達也
松山, 隆司
Issue Date: Oct-2014
Publisher: IEEE
Journal title: IEEE Transactions on Human-Machine Systems
Volume: 44
Issue: 5
Start page: 625
End page: 637
Abstract: This paper presents a novel multimodal system designed for multi-party human-human interaction analysis. The design of human-machine interfaces for multiple users is challenging because simultaneous processing of actions and reactions has to be consistent. The proposed system consists of a large display equipped with multiple sensing devices: a microphone array, HD video cameras, and depth sensors. Multiple users positioned in front of the panel freely interact using voice or gesture while looking at the displayed content, without wearing any particular devices (such as motion capture sensors or head-mounted devices). Acoustic and visual information is captured and processed jointly using established and state-of-the-art techniques to obtain individual speech and gaze direction. Furthermore, a new framework is proposed to model A/V multimodal interaction between verbal and nonverbal communication events. Dynamics of audio signals obtained from speaker diarization and head poses extracted from video images are modeled using hybrid dynamical systems (HDS). We show that HDS temporal structure characteristics can be used for multimodal interaction level estimation, which provides useful feedback that can help improve the multi-party communication experience. Experimental results using synthetic and real-world datasets of group communication such as poster presentations show the feasibility of the proposed multimodal system.
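As a purely illustrative aside on the abstract above: the system combines speaker-diarization output (who speaks when) with head-pose estimates (who looks where) and models their joint temporal dynamics with hybrid dynamical systems. The minimal sketch below only shows the kind of per-participant interval timeline such a pipeline consumes and a naive temporal-overlap statistic; the participant names, interval data, and the overlap measure are all hypothetical and do not reproduce the paper's HDS model.

```python
# Hypothetical sketch: toy multimodal timelines (speech intervals from
# diarization, gaze-at-display intervals from head pose) and a crude
# co-occurrence statistic. Not the authors' HDS-based method.
from dataclasses import dataclass


@dataclass
class Interval:
    start: float  # seconds
    end: float    # seconds

    def overlap(self, other: "Interval") -> float:
        # Length of temporal overlap between two intervals (0 if disjoint).
        return max(0.0, min(self.end, other.end) - max(self.start, other.start))


# Toy diarization output: when each participant speaks (assumed data).
speech = {
    "A": [Interval(0.0, 4.0), Interval(10.0, 12.0)],
    "B": [Interval(4.5, 9.0)],
}

# Toy head-pose output: when each participant looks at the display (assumed data).
gaze_at_display = {
    "A": [Interval(3.0, 9.0)],
    "B": [Interval(0.0, 5.0), Interval(9.5, 12.0)],
}


def joint_attention_while_speaking(speaker: str) -> float:
    """Seconds during which `speaker` talks while another participant looks
    at the display -- a simplistic stand-in for the temporal co-occurrence
    structure that the HDS framework captures far more richly."""
    others = [p for p in gaze_at_display if p != speaker]
    total = 0.0
    for s in speech[speaker]:
        for p in others:
            for g in gaze_at_display[p]:
                total += s.overlap(g)
    return total


for person in speech:
    print(person, joint_attention_while_speaking(person))
```

Running this prints 6.0 seconds for participant A and 4.5 for participant B; in the actual system, such aligned audio-visual event streams feed the HDS-based interaction-level estimation described in the paper.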
Rights: © 2014 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
This is not the published version. Please cite only the published version.
URI: http://hdl.handle.net/2433/191011
DOI(Published Version): 10.1109/THMS.2014.2326873
Appears in Collections: Journal Articles
