Files in this item:
File | Description | Size | Format |
---|---|---|---|
j.csl.2019.03.001.pdf | | 832.79 kB | Adobe PDF |
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Zhao, Tianyu | en |
dc.contributor.author | Kawahara, Tatsuya | en |
dc.contributor.alternative | 河原, 達也 | ja |
dc.date.accessioned | 2019-04-16T00:26:21Z | - |
dc.date.available | 2019-04-16T00:26:21Z | - |
dc.date.issued | 2019-09 | - |
dc.identifier.issn | 0885-2308 | - |
dc.identifier.issn | 1095-8363 | - |
dc.identifier.uri | http://hdl.handle.net/2433/240842 | - |
dc.description.abstract | A dialog act represents the communicative function of an utterance in a conversation, and thus provides informative cues for understanding, managing, and generating dialog. While most spoken dialog systems process user input and system output at the turn level, a single turn can consist of multiple dialog acts in human conversations. Therefore, segmenting turn-level tokens into meaningful dialog act units is just as important as recognizing the dialog act. Towards joint segmentation and recognition of dialog acts, we propose an encoder–decoder model featuring joint coding and incorporate contextual information by means of an attention mechanism. The proposed encoder–decoder outperforms other models in segmentation, and applying attention significantly reduces recognition error rates. By combining the encoder–decoder model with contextual attention, we achieve state-of-the-art performance in the joint evaluation of dialog act segmentation and recognition. | en |
dc.format.mimetype | application/pdf | - |
dc.language.iso | eng | - |
dc.publisher | Elsevier BV | en |
dc.rights | © 2019. This manuscript version is made available under the CC-BY-NC-ND 4.0 license http://creativecommons.org/licenses/by-nc-nd/4.0/. | en |
dc.rights | The full-text file will be made open to the public on 1 September 2021 in accordance with publisher's 'Terms and Conditions for Self-Archiving'. | en |
dc.rights | This is not the publisher's version. Please check and cite the publisher's version. | ja |
dc.rights | This is not the published version. Please cite only the published version. | en |
dc.subject | Spoken dialog system | en |
dc.subject | Spoken language understanding | en |
dc.subject | Dialog act segmentation | en |
dc.subject | Dialog act recognition | en |
dc.title | Joint dialog act segmentation and recognition in human conversations using attention to dialog context | en |
dc.type | journal article | - |
dc.type.niitype | Journal Article | - |
dc.identifier.jtitle | Computer Speech and Language | - |
dc.identifier.volume | 57 | - |
dc.identifier.spage | 108 | - |
dc.identifier.epage | 127 | - |
dc.relation.doi | 10.1016/j.csl.2019.03.001 | - |
dc.textversion | author | - |
dc.address | Graduate School of Informatics, Kyoto University | en |
dc.address | Graduate School of Informatics, Kyoto University | en |
dcterms.accessRights | open access | - |
datacite.date.available | 2021-09-01 | - |
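The abstract's "joint coding" can be illustrated with a minimal sketch (my own illustration under assumed conventions, not the authors' code): each token in a turn receives one label, where segment-internal tokens get a continuation symbol and each segment-final token gets that segment's dialog act tag, so a single decoder output sequence encodes segmentation boundaries and dialog act labels at once.

```python
# Sketch of a joint coding scheme for dialog act segmentation + recognition.
# The continuation symbol name "I" and the (tokens, tag) segment format are
# assumptions for illustration only.

CONT = "I"  # label for tokens that do not end a dialog act segment


def encode_joint(segments):
    """segments: list of (token_list, dialog_act_tag) -> (tokens, labels)."""
    tokens, labels = [], []
    for seg_tokens, da_tag in segments:
        tokens.extend(seg_tokens)
        # All tokens but the last are continuations; the last carries the tag.
        labels.extend([CONT] * (len(seg_tokens) - 1) + [da_tag])
    return tokens, labels


def decode_joint(tokens, labels):
    """Invert the coding: close a segment at every non-continuation label."""
    segments, current = [], []
    for tok, lab in zip(tokens, labels):
        current.append(tok)
        if lab != CONT:
            segments.append((current, lab))
            current = []
    return segments


if __name__ == "__main__":
    turn = [(["yeah"], "backchannel"),
            (["what", "time", "is", "it"], "question")]
    toks, labs = encode_joint(turn)
    print(labs)  # ['backchannel', 'I', 'I', 'I', 'question']
    assert decode_joint(toks, labs) == turn
```

Because segmentation and recognition are fused into one label sequence, a single encoder–decoder can predict both jointly rather than pipelining a segmenter into a classifier.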
Appears in collections: | Journal Articles |
All items in this repository are protected by copyright.