Downloads: 153

Files in this item:
File: j.neucom.2021.12.076.pdf (1.84 MB, Adobe PDF)
Title: Region-Attentive Multimodal Neural Machine Translation
Authors: Zhao, Yuting
Komachi, Mamoru
Kajiwara, Tomoyuki
Chu, Chenhui (ORCID: https://orcid.org/0000-0001-9848-6384)
Keywords: Multimodal neural machine translation
Recurrent neural network
Self-attention network
Object detection
Semantic image regions
Issue Date: Mar-2022
Publisher: Elsevier BV
Journal: Neurocomputing
Volume: 476
Start Page: 1
End Page: 13
Abstract: We propose a multimodal neural machine translation (MNMT) method with semantic image regions called region-attentive multimodal neural machine translation (RA-NMT). Existing studies on MNMT have mainly focused on employing global visual features or equally sized grid local visual features extracted by convolutional neural networks (CNNs) to improve translation performance. However, they neglect the semantic information captured inside the visual features. This study utilizes semantic image regions extracted by object detection for MNMT and integrates visual and textual features using two modality-dependent attention mechanisms. The proposed method was implemented and verified on two neural architectures of neural machine translation (NMT): the recurrent neural network (RNN) and the self-attention network (SAN). Experimental results on different language pairs of the Multi30k dataset show that our proposed method improves over baselines and outperforms most of the state-of-the-art MNMT methods. Further analysis demonstrates that the proposed method achieves better translation performance because it makes better use of visual features.
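
To make the mechanism sketched in the abstract concrete, the following is a minimal PyTorch sketch of additive attention over semantic image-region features, combined with a gated fusion of textual and visual context vectors. All module names, dimensions (e.g., 36 regions with 2048-d Faster R-CNN-style features), and the gating scheme are illustrative assumptions for exposition, not the paper's exact RA-NMT architecture.

    # Minimal sketch: attention over object-detection region features plus a
    # gated text/vision fusion. Dimensions and fusion scheme are assumptions,
    # not the published RA-NMT design.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class RegionAttention(nn.Module):
        """Additive attention over object-detection region features."""
        def __init__(self, region_dim: int, hidden_dim: int, attn_dim: int = 256):
            super().__init__()
            self.w_region = nn.Linear(region_dim, attn_dim)
            self.w_hidden = nn.Linear(hidden_dim, attn_dim)
            self.v = nn.Linear(attn_dim, 1)

        def forward(self, regions, hidden):
            # regions: (batch, num_regions, region_dim), e.g. RoI features
            # hidden:  (batch, hidden_dim), current decoder state
            scores = self.v(torch.tanh(
                self.w_region(regions) + self.w_hidden(hidden).unsqueeze(1)))
            alpha = F.softmax(scores, dim=1)            # weight per region
            return (alpha * regions).sum(dim=1), alpha  # visual context vector

    class GatedFusion(nn.Module):
        """Merge textual and visual contexts with a learned gate
        (one plausible way to combine two modality-dependent attentions)."""
        def __init__(self, text_dim: int, vis_dim: int):
            super().__init__()
            self.gate = nn.Linear(text_dim + vis_dim, text_dim)
            self.proj = nn.Linear(vis_dim, text_dim)

        def forward(self, c_text, c_vis):
            g = torch.sigmoid(self.gate(torch.cat([c_text, c_vis], dim=-1)))
            return g * c_text + (1 - g) * self.proj(c_vis)

    # Toy usage: 36 detected regions with 2048-d features, 512-d decoder state.
    regions = torch.randn(2, 36, 2048)
    hidden = torch.randn(2, 512)
    c_text = torch.randn(2, 512)  # stands in for the textual attention context
    c_vis, alpha = RegionAttention(2048, 512)(regions, hidden)
    fused = GatedFusion(512, 2048)(c_text, c_vis)
    print(fused.shape)  # torch.Size([2, 512])

In such a design, the gate lets the decoder lean on the textual context for function words and on region features for visually grounded content words, which is the intuition behind attending to semantic regions rather than uniform CNN grid cells.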
Rights: © 2022 The Authors. Published by Elsevier B.V.
This is an open access article under the Creative Commons Attribution 4.0 International license.
URI: http://hdl.handle.net/2433/267428
DOI (publisher version): 10.1016/j.neucom.2021.12.076
Appears in Collections: Journal Articles

This item is licensed under a Creative Commons License.