
Files in This Item:
File: TASLP.2021.3120643.pdf (4.7 MB, Adobe PDF)
Title: Flexibly Focusing on Supporting Facts, Using Bridge Links, and Jointly Training Specialized Modules for Multi-hop Question Answering
Authors: Alkhaldi, Tareq
Chu, Chenhui (ORCID: https://orcid.org/0000-0001-9848-6384, unconfirmed)
Kurohashi, Sadao
Author's alias: 褚, 晨翚
黒橋, 禎夫
Keywords: Bridge links
joint training
multi-hop question answering
supporting facts
transformer
Issue Date: Oct-2021
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Journal title: IEEE/ACM Transactions on Audio, Speech, and Language Processing
Volume: 29
Start page: 3216
End page: 3225
Abstract: With the help of the detailed annotated question answering dataset HotpotQA, recent question answering models are trained to justify their predicted answers with supporting facts from context documents. Some related works train the same model to find supporting facts and answers jointly, without specialized models for each task. Others train separate models for each task but do not use supporting facts effectively to find the answer: they either use only the predicted sentences and ignore the remaining context, or do not use them at all. Furthermore, while complex graph-based models consider the bridge/connection between documents in the multi-hop setting, simple BERT-based models usually drop it. We propose FlexibleFocusedReader (FFReader), a model that 1) flexibly focuses on predicted supporting facts (SFs) without ignoring the important remaining context, 2) focuses on the bridge between documents despite not using graph architectures, and 3) jointly learns to predict SFs and answer with two specialized models. Our model achieves consistent improvement over the baseline. In particular, we find that flexibly focusing on SFs is important, rather than ignoring the remaining context or not using SFs at all when finding the answer. We also find that tagging the entity that links the documents at hand is very beneficial. Finally, we show that joint training is crucial for FFReader.
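The abstract's two input-side ideas (flagging predicted SFs while keeping the rest of the context, and tagging the bridge entity that links the documents) can be illustrated with a minimal sketch. This is not the authors' code: the `[SF]` and `<e>…</e>` markers, the function name, and the data are hypothetical, chosen only to show the shape of such an input encoding.

```python
def build_input(question, documents, bridge_entity, predicted_sfs):
    """Illustrative sketch: mark the bridge entity with hypothetical
    <e>...</e> tags and prefix predicted supporting-fact sentences
    with a [SF] flag, while keeping all remaining context."""
    tagged_docs = []
    for doc_id, sentences in documents.items():
        tagged = []
        for sent_id, sent in enumerate(sentences):
            # Tag every mention of the entity linking the documents.
            sent = sent.replace(bridge_entity, f"<e>{bridge_entity}</e>")
            # Flag predicted SFs, but do not drop the other sentences.
            prefix = "[SF] " if (doc_id, sent_id) in predicted_sfs else ""
            tagged.append(prefix + sent)
        tagged_docs.append(" ".join(tagged))
    return question + " [SEP] " + " [SEP] ".join(tagged_docs)

docs = {
    "d1": ["Alice was born in Paris.", "She studied physics."],
    "d2": ["Paris is the capital of France."],
}
text = build_input("Where was Alice born?", docs, "Paris",
                   {("d1", 0), ("d2", 0)})
```

The point of the sketch is the contrast drawn in the abstract: non-SF sentences (here, "She studied physics.") stay in the input rather than being discarded, so the answer model can still attend to them.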
Rights: © 2021 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
This is not the published version. Please cite only the published version.
URI: http://hdl.handle.net/2433/265879
DOI(Published Version): 10.1109/TASLP.2021.3120643
Appears in Collections:Journal Articles

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.