Files in this item:
File | Description | Size | Format |
---|---|---|---|
j.csl.2022.101469.pdf | | 985.73 kB | Adobe PDF |
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Yamamoto, Kenta | en |
dc.contributor.author | Inoue, Koji | en |
dc.contributor.author | Kawahara, Tatsuya | en |
dc.contributor.alternative | 山本, 賢太 | ja |
dc.contributor.alternative | 井上, 昂治 | ja |
dc.contributor.alternative | 河原, 達也 | ja |
dc.date.accessioned | 2023-04-18T02:40:49Z | - |
dc.date.available | 2023-04-18T02:40:49Z | - |
dc.date.issued | 2023-04 | - |
dc.identifier.uri | http://hdl.handle.net/2433/281689 | - |
dc.description.abstract | The character of a spoken dialogue system is important not only for giving a positive impression of the system but also for building rapport with users. We have proposed a character expression model for spoken dialogue systems. The model expresses three character traits (extroversion, emotional instability, and politeness) of spoken dialogue systems by controlling spoken dialogue behaviors: utterance amount, backchannels, fillers, and switching pause length. One major problem in training this model is that collecting many paired data of character traits and behaviors is costly and time-consuming. To address this problem, semi-supervised learning is proposed based on a variational auto-encoder that exploits both the limited amount of labeled paired data and unlabeled corpus data. It was confirmed that the proposed model can express given characters more accurately than a baseline model trained with supervised learning only. We also implemented the character expression model in a spoken dialogue system for an autonomous android robot, and then conducted a subjective experiment with 75 university students to confirm the effectiveness of the character expression for specific dialogue scenarios. The results showed that expressing a character in accordance with the dialogue task using the proposed model improves the user's impression of appropriateness in formal dialogue such as a job interview. | en |
dc.language.iso | eng | - |
dc.publisher | Elsevier BV | en |
dc.rights | © 2022 The Authors. Published by Elsevier Ltd. | en |
dc.rights | This is an open access article under the CC BY-NC-ND license. | en |
dc.rights.uri | http://creativecommons.org/licenses/by-nc-nd/4.0/ | - |
dc.subject | Spoken dialogue system | en |
dc.subject | Character | en |
dc.subject | Semi-supervised learning | en |
dc.subject | Variational auto-encoder (VAE) | en |
dc.title | Character expression for spoken dialogue systems with semi-supervised learning using Variational Auto-Encoder | en |
dc.type | journal article | - |
dc.type.niitype | Journal Article | - |
dc.identifier.jtitle | Computer Speech & Language | en |
dc.identifier.volume | 79 | - |
dc.relation.doi | 10.1016/j.csl.2022.101469 | - |
dc.textversion | publisher | - |
dc.identifier.artnum | 101469 | - |
dcterms.accessRights | open access | - |
datacite.awardNumber | 19H05691 | - |
datacite.awardNumber | 20J22284 | - |
datacite.awardNumber.uri | https://kaken.nii.ac.jp/grant/KAKENHI-PLANNED-19H05691/ | - |
datacite.awardNumber.uri | https://kaken.nii.ac.jp/grant/KAKENHI-PROJECT-20J22284/ | - |
dc.identifier.pissn | 0885-2308 | - |
dc.identifier.eissn | 1095-8363 | - |
jpcoar.funderName | 日本学術振興会 | ja |
jpcoar.funderName | 日本学術振興会 | ja |
jpcoar.awardTitle | 人間との対話継続及び関係構築のための対話知能システム | ja |
jpcoar.awardTitle | 対話タスク・ユーザに適したキャラクタを表現する音声対話システム | ja |
Appears in collections: | Journal Articles, etc. |

This item is licensed under a Creative Commons License.