Files in this item:

| File | Description | Size | Format |
|---|---|---|---|
| TASLP.2019.2955858.pdf | | 1.14 MB | Adobe PDF |
Title: Cross-Lingual Transfer Learning of Non-Native Acoustic Modeling for Pronunciation Error Detection and Diagnosis
Authors: Duan, Richeng; Kawahara, Tatsuya; Dantsuji, Masatake; Nanjo, Hiroaki
Alternative author names: 河原, 達也; 壇辻, 正剛; 南條, 浩輝
Keywords: Speech and Hearing; Media Technology; Linguistics and Language; Signal Processing; Acoustics and Ultrasonics; Instrumentation; Electrical and Electronic Engineering
Issue date: 2020
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Journal title: IEEE/ACM Transactions on Audio, Speech, and Language Processing
Volume: 28
Start page: 391
End page: 401
Abstract: In computer-assisted pronunciation training (CAPT), the scarcity of large-scale non-native corpora and the lack of human expert annotations are two fundamental challenges to non-native acoustic modeling. Most existing approaches to acoustic modeling in CAPT rely on non-native corpora, yet there are many living languages in the world, and it is impractical to collect and annotate a non-native speech corpus for every language pair. In this work, we address non-native acoustic modeling (at both the phonetic and articulatory levels) based on transfer learning. In order to train acoustic models of non-native speech effectively without using such data, we propose to exploit two large native speech corpora, of the learner's native language (L1) and of the target language (L2), to model cross-lingual phenomena. This kind of transfer learning can provide a better feature representation of non-native speech. Experimental evaluations are carried out for Japanese speakers learning English. We first demonstrate that the proposed acoustic-phone model achieves a lower word error rate in non-native speech recognition. It also improves pronunciation error detection based on the goodness of pronunciation (GOP) score. For diagnosing pronunciation errors, the proposed acoustic-articulatory modeling method is effective in providing detailed feedback at the articulation level.
Rights: © 2019 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. This is not the published version. Please cite only the published version.
URI: http://hdl.handle.net/2433/246413
DOI (published version): 10.1109/TASLP.2019.2955858
Appears in collections: Journal Articles, etc.
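
The abstract above mentions pronunciation error detection based on the goodness of pronunciation (GOP) score. As a minimal sketch, the snippet below computes the standard posterior-based GOP (in the spirit of Witt & Young), which is not necessarily the exact variant used in this paper; the array shapes, dummy posteriors, and decision threshold are assumptions for illustration only.

```python
# Minimal sketch of a posterior-based goodness-of-pronunciation (GOP) score.
# This illustrates the general technique only; it is not the authors' code,
# and the shapes/threshold below are assumed for demonstration.
import numpy as np

def gop_score(posteriors: np.ndarray, target_phone: int) -> float:
    """posteriors: (T, P) frame-by-phone posteriors for the frames aligned
    to one realized phone; target_phone: index of the canonical phone.
    Returns the average log-posterior ratio (<= 0; closer to 0 is better)."""
    log_post = np.log(posteriors + 1e-10)        # avoid log(0)
    target = log_post[:, target_phone]           # log P(target | o_t)
    best = log_post.max(axis=1)                  # log max_q P(q | o_t)
    return float(np.mean(target - best))

# Usage example: flag a phone as mispronounced when its GOP falls below a
# (hypothetical) threshold tuned on development data.
T, P = 12, 40                                    # frames, phone classes (assumed)
rng = np.random.default_rng(0)
post = rng.dirichlet(np.ones(P), size=T)         # dummy frame posteriors
score = gop_score(post, target_phone=7)
print(f"GOP = {score:.2f}, flagged as error: {score < -2.5}")
```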

All items in this repository are protected by copyright.