Files in This Item:
File: 1687-4722-2012-3.pdf (2.5 MB, Adobe PDF)
Title: Towards expressive musical robots: A cross-modal framework for emotional gesture, voice and music
Authors: Lim, Angelica
Ogata, Tetsuya
Okuno, Hiroshi G.
Author's alias: 奥乃, 博
Keywords: affective computing
gesture
entertainment robots
Issue Date: 17-Jan-2012
Publisher: SpringerOpen
Journal title: EURASIP Journal on Audio, Speech, and Music Processing
Volume: 2012
Article number: 3
Abstract: It has long been speculated that expressions of emotion in different modalities share the same underlying 'code', whether it be a dance step, a musical phrase, or a tone of voice. This work is the first attempt to implement this theory across three modalities, inspired by the polyvalence and repeatability of robotics. We propose a unifying framework that generates emotion across voice, gesture, and music by representing emotional states as a 4-parameter tuple of speed, intensity, regularity, and extent (SIRE). Our results show that this simple 4-tuple can capture four emotions recognized at greater-than-chance rates across gesture and voice, and at least two emotions across all three modalities. An application to multi-modal, expressive music robots is discussed.
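
The abstract describes the SIRE representation only at the level of its four parameters. As a rough illustration, the sketch below shows one way such a tuple could be encoded and mapped onto modality-specific parameters; all names, value ranges, and mappings here are assumptions made for illustration, not the paper's implementation.

from dataclasses import dataclass

@dataclass
class SIRE:
    """Hypothetical encoding of the SIRE 4-tuple; all fields assumed in [0, 1]."""
    speed: float       # tempo of the gesture, utterance, or phrase
    intensity: float   # energy or loudness
    regularity: float  # how periodic vs. irregular the motion or sound is
    extent: float      # spatial or dynamic range

def to_voice_params(e: SIRE) -> dict:
    # Illustrative mapping to invented speech-synthesis parameters;
    # the paper's actual voice mapping is not given in this record.
    return {
        "speech_rate": 0.5 + e.speed,        # faster delivery for higher speed
        "volume": e.intensity,               # louder output for higher intensity
        "pitch_jitter": 1.0 - e.regularity,  # irregularity rendered as pitch variation
        "pitch_range": e.extent,             # wider excursions for higher extent
    }

# Invented example values, not the paper's measured emotion parameters.
happiness = SIRE(speed=0.8, intensity=0.7, regularity=0.6, extent=0.8)
print(to_voice_params(happiness))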
Rights: © 2012 Lim et al; licensee Springer.
This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
URI: http://hdl.handle.net/2433/187380
DOI (Published Version): 10.1186/1687-4722-2012-3
Appears in Collections: Journal Articles
