Files in this item:
File | Description | Size | Format
j.csl.2022.101469.pdf | | 985.73 kB | Adobe PDF
Full metadata record
DC Field | Value | Language
dc.contributor.author | Yamamoto, Kenta | en
dc.contributor.author | Inoue, Koji | en
dc.contributor.author | Kawahara, Tatsuya | en
dc.contributor.alternative | 山本, 賢太 | ja
dc.contributor.alternative | 井上, 昂治 | ja
dc.contributor.alternative | 河原, 達也 | ja
dc.date.accessioned | 2023-04-18T02:40:49Z | -
dc.date.available | 2023-04-18T02:40:49Z | -
dc.date.issued | 2023-04 | -
dc.identifier.uri | http://hdl.handle.net/2433/281689 | -
dc.description.abstract | The character of a spoken dialogue system is important not only for giving users a positive impression of the system but also for building rapport with them. We have proposed a character expression model for spoken dialogue systems. The model expresses three character traits (extroversion, emotional instability, and politeness) of a spoken dialogue system by controlling spoken dialogue behaviors: utterance amount, backchannels, fillers, and switching-pause length. One major problem in training this model is that collecting many pairs of character traits and behaviors is costly and time-consuming. To address this problem, we propose semi-supervised learning based on a variational auto-encoder that exploits both a limited amount of labeled pair data and unlabeled corpus data. We confirmed that the proposed model expresses given characters more accurately than a baseline model trained with supervised learning only. We also implemented the character expression model in a spoken dialogue system for an autonomous android robot and conducted a subjective experiment with 75 university students to confirm the effectiveness of character expression for specific dialogue scenarios. The results showed that expressing a character in accordance with the dialogue task via the proposed model improves the user's impression of appropriateness in formal dialogue such as a job interview. | en
dc.language.iso | eng | -
dc.publisher | Elsevier BV | en
dc.rights | © 2022 The Authors. Published by Elsevier Ltd. | en
dc.rights | This is an open access article under the CC BY-NC-ND license. | en
dc.rights.uri | http://creativecommons.org/licenses/by-nc-nd/4.0/ | -
dc.subject | Spoken dialogue system | en
dc.subject | Character | en
dc.subject | Semi-supervised learning | en
dc.subject | Variational auto-encoder (VAE) | en
dc.title | Character expression for spoken dialogue systems with semi-supervised learning using Variational Auto-Encoder | en
dc.type | journal article | -
dc.type.niitype | Journal Article | -
dc.identifier.jtitle | Computer Speech & Language | en
dc.identifier.volume | 79 | -
dc.relation.doi | 10.1016/j.csl.2022.101469 | -
dc.textversion | publisher | -
dc.identifier.artnum | 101469 | -
dcterms.accessRights | open access | -
datacite.awardNumber | 19H05691 | -
datacite.awardNumber | 20J22284 | -
datacite.awardNumber.uri | https://kaken.nii.ac.jp/grant/KAKENHI-PLANNED-19H05691/ | -
datacite.awardNumber.uri | https://kaken.nii.ac.jp/grant/KAKENHI-PROJECT-20J22284/ | -
dc.identifier.pissn | 0885-2308 | -
dc.identifier.eissn | 1095-8363 | -
jpcoar.funderName | 日本学術振興会 (Japan Society for the Promotion of Science) | ja
jpcoar.funderName | 日本学術振興会 (Japan Society for the Promotion of Science) | ja
jpcoar.awardTitle | 人間との対話継続及び関係構築のための対話知能システム (Intelligent dialogue systems for sustaining dialogue and building relationships with humans) | ja
jpcoar.awardTitle | 対話タスク・ユーザに適したキャラクタを表現する音声対話システム (A spoken dialogue system that expresses a character suited to the dialogue task and user) | ja
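The abstract describes semi-supervised learning with a variational auto-encoder over labeled (character, behavior) pairs and unlabeled corpus data. The following is a minimal NumPy sketch of that general setup in the style of a conditional VAE with a marginalized discrete label (not the paper's implementation); the dimensions, the linear "networks", and the discretization of character traits into a small label set are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 4 behavior features (utterance amount,
# backchannel, filler, switching-pause length), a small discrete
# character-label set, and a 2-dim latent code.
X_DIM, Y_DIM, Z_DIM = 4, 3, 2

# Tiny linear "networks" (random weights stand in for trained ones).
W_enc = rng.normal(size=(X_DIM + Y_DIM, 2 * Z_DIM)) * 0.1  # -> (mu, log_var)
W_dec = rng.normal(size=(Z_DIM + Y_DIM, X_DIM)) * 0.1      # p(x | z, y)
W_cls = rng.normal(size=(X_DIM, Y_DIM)) * 0.1              # q(y | x) classifier

def one_hot(y):
    v = np.zeros(Y_DIM)
    v[y] = 1.0
    return v

def neg_elbo(x, y_vec):
    """Negative evidence lower bound for one (behavior, character) pair."""
    h = np.concatenate([x, y_vec]) @ W_enc
    mu, log_var = h[:Z_DIM], h[Z_DIM:]
    z = mu + np.exp(0.5 * log_var) * rng.normal(size=Z_DIM)  # reparameterization
    x_hat = np.concatenate([z, y_vec]) @ W_dec
    recon = 0.5 * np.sum((x - x_hat) ** 2)                   # Gaussian recon. loss
    kl = -0.5 * np.sum(1 + log_var - mu ** 2 - np.exp(log_var))
    return recon + kl

def labeled_loss(x, y):
    # Labeled pair: the character label is observed.
    return neg_elbo(x, one_hot(y))

def unlabeled_loss(x):
    # Unlabeled corpus sample: marginalize the unknown label under q(y|x).
    logits = x @ W_cls
    q = np.exp(logits - logits.max())
    q /= q.sum()
    expected = sum(q[y] * neg_elbo(x, one_hot(y)) for y in range(Y_DIM))
    entropy = -np.sum(q * np.log(q + 1e-12))
    return expected - entropy

x = rng.normal(size=X_DIM)
print(labeled_loss(x, y=1))   # loss on a labeled (character, behavior) pair
print(unlabeled_loss(x))      # loss on an unlabeled behavior sample
```

A trainer would minimize the sum of both losses over the two data sources, which is how the limited labeled pairs and the larger unlabeled corpus can be exploited jointly.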
Appears in collections: Journal Articles

This item is licensed under a Creative Commons License.