Downloads: 244
Files in this item:
File | Description | Size | Format | |
---|---|---|---|---|
1687-4722-2012-3.pdf | | 2.5 MB | Adobe PDF | View/Open |
Title: | Towards expressive musical robots: A cross-modal framework for emotional gesture, voice and music |
Authors: | Lim, Angelica; Ogata, Tetsuya; Okuno, Hiroshi G. |
Alternative author name: | 奥乃, 博 |
Keywords: | affective computing; gesture; entertainment robots |
Issue date: | 17-Jan-2012 |
Publisher: | SpringerOpen |
Journal title: | EURASIP Journal on Audio, Speech, and Music Processing |
Volume: | 2012 |
Article number: | 3 |
Abstract: | It has long been speculated that expressions of emotion from different modalities share the same underlying 'code', whether it be a dance step, musical phrase, or tone of voice. This is the first attempt to implement this theory across three modalities, inspired by the polyvalence and repeatability of robotics. We propose a unifying framework to generate emotions across voice, gesture, and music by representing emotional states as a 4-parameter tuple of speed, intensity, regularity, and extent (SIRE). Our results show that a simple 4-tuple can capture four emotions recognizable at greater-than-chance rates across gesture and voice, and at least two emotions across all three modalities. An application to multi-modal, expressive music robots is discussed. |
Rights: | © 2012 Lim et al; licensee Springer. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. |
URI: | http://hdl.handle.net/2433/187380 |
DOI (publisher version): | 10.1186/1687-4722-2012-3 |
Appears in collections: | Journal Articles |
All items in this repository are protected by copyright.