Number of downloads: 48

Files in this item:
File | Description | Size | Format
ACCESS.2023.3308916.pdf |  | 1.72 MB | Adobe PDF
Full metadata record
DC Field | Value | Language
dc.contributor.author | Kameko, Hirotaka | en
dc.contributor.author | Murawaki, Yugo | en
dc.contributor.author | Matsuyoshi, Suguru | en
dc.contributor.author | Mori, Shinsuke | en
dc.contributor.alternative | 亀甲, 博貴 | ja
dc.contributor.alternative | 村脇, 有吾 | ja
dc.contributor.alternative | 森, 信介 | ja
dc.date.accessioned | 2023-10-13T05:53:57Z | -
dc.date.available | 2023-10-13T05:53:57Z | -
dc.date.issued | 2023 | -
dc.identifier.uri | http://hdl.handle.net/2433/285524 | -
dc.description.abstract | Recognizing event factuality is a crucial factor for understanding and generating texts with abundant references to possible and counterfactual events. Because event factuality is signaled by modality expressions, identifying modality expressions is also an important task. The question, then, is how to solve these interconnected tasks. On the one hand, while neural networks facilitate multi-task learning by means of parameter sharing among related tasks, the recently introduced pre-training/fine-tuning paradigm might be powerful enough for the model to learn one task without indirect signals from another. On the other hand, ever-increasing model sizes make it practically difficult to run multiple task-specific fine-tuned models at inference time, so parameter sharing can be seen as an effective way to reduce the model's size. Through experiments, we found that (1) BERT-CRF outperformed non-neural models and BiLSTM-CRF, and (2) BERT-CRF neither benefited from nor was negatively impacted by multi-task learning, indicating the practical viability of BERT-CRF combined with multi-task learning. | en
dc.language.iso | eng | -
dc.publisher | Institute of Electrical and Electronics Engineers (IEEE) | en
dc.rights | This work is licensed under a Creative Commons Attribution 4.0 License. | en
dc.rights.uri | https://creativecommons.org/licenses/by/4.0/ | -
dc.subject | Task analysis | en
dc.subject | Multitasking | en
dc.subject | Tagging | en
dc.subject | Annotations | en
dc.subject | Training data | en
dc.subject | Online services | en
dc.subject | Neural networks | en
dc.subject | Event detection | en
dc.subject | Labeling | en
dc.subject | Sequential analysis | en
dc.subject | Event factuality | en
dc.subject | modality | en
dc.subject | sequence labeling | en
dc.subject | neural networks | en
dc.subject | multi-task learning | en
dc.title | Japanese Event Factuality Analysis in the Era of BERT | en
dc.type | journal article | -
dc.type.niitype | Journal Article | -
dc.identifier.jtitle | IEEE Access | en
dc.identifier.volume | 11 | -
dc.identifier.spage | 93286 | -
dc.identifier.epage | 93292 | -
dc.relation.doi | 10.1109/ACCESS.2023.3308916 | -
dc.textversion | publisher | -
dcterms.accessRights | open access | -
datacite.awardNumber | 18K11427 | -
datacite.awardNumber | 19K20341 | -
datacite.awardNumber.uri | https://kaken.nii.ac.jp/grant/KAKENHI-PROJECT-18K11427/ | -
datacite.awardNumber.uri | https://kaken.nii.ac.jp/grant/KAKENHI-PROJECT-19K20341/ | -
dc.identifier.eissn | 2169-3536 | -
jpcoar.funderName | 日本学術振興会 | ja
jpcoar.funderName | 日本学術振興会 | ja
jpcoar.awardTitle | 実世界と可能世界が参照可能であるテキストの日本語モダリティ解析 | ja
jpcoar.awardTitle | 音声対話による将棋の感想戦支援システムの構築 | ja
Appears in Collections: Journal Articles, etc.

This item is licensed under a Creative Commons License.