Access count of this item: 70

Files in this item:
File                        Description    Size    Format
2024SW004121.pdf                           9 MB    Adobe PDF    View/Open
Full metadata record
DC Field                     Value                                 Language
dc.contributor.author        Liu, Peng                             en
dc.contributor.author        Yokoyama, Tatsuhiro                   en
dc.contributor.author        Sori, Takuya                          en
dc.contributor.author        Yamamoto, Mamoru                      en
dc.contributor.alternative   劉, 鵬                                ja
dc.contributor.alternative   横山, 竜宏                            ja
dc.contributor.alternative   惣宇利, 卓弥                          ja
dc.contributor.alternative   山本, 衛                              ja
dc.date.accessioned          2024-12-19T02:27:15Z                  -
dc.date.available            2024-12-19T02:27:15Z                  -
dc.date.issued               2024-12                               -
dc.identifier.uri            http://hdl.handle.net/2433/290911     -
dc.description.abstract      The spatiotemporal distribution of Total Electron Content (TEC) in the ionosphere determines the refractive index of electromagnetic waves, leading to radio signal scintillation and deterioration. Thanks to the development of machine learning for video prediction, spatiotemporal predictive models have been applied to forecasting future TEC maps from the graphic features of past frames. However, purely graphic prediction cannot properly respond to variations in external factors such as solar or geomagnetic activity. Meanwhile, there is still neither a standard dataset nor a comprehensive evaluation framework for spatiotemporal predictive learning of TEC map sequences, leaving comparisons unfair and insights inconclusive. In this research, a new feature-level multimodal fusion method for machine reasoning, named the channel mixer layer, is proposed; it can be embedded into existing advanced spatiotemporal sequence prediction models. Meanwhile, all performance benchmarks are run in the same environment on a newly proposed largest-scale dataset. Experimental results suggest that multimodal fusion prediction with existing model backbones using the proposed method improves prediction accuracy by up to 15% at almost the same computational complexity compared with graphic prediction without auxiliary factor input, achieving a real-time inference speed of 34 frames/second and a minimum mean absolute error of 0.94/2.63 TEC units during low/high solar activity periods, respectively. Models with the embedded channel mixer layer respond to variations of auxiliary external factors more correctly than previous multimodal fusion methods such as concatenation and arithmetic, which is regarded as evidence of state-of-the-art machine reasoning ability.    en
dc.language.iso              eng                                   -
dc.publisher                 American Geophysical Union (AGU)      en
dc.rights                    © 2024. The Author(s).                en
dc.rights                    This is an open access article under the terms of the Creative Commons Attribution-NonCommercial-NoDerivs License, which permits use and distribution in any medium, provided the original work is properly cited, the use is non-commercial and no modifications or adaptations are made.    en
dc.rights.uri                https://creativecommons.org/licenses/by-nc-nd/4.0/    -
dc.subject                   multimodal fusion                     en
dc.subject                   machine reasoning                     en
dc.subject                   spatiotemporal predictive learning    en
dc.subject                   ionosphere                            en
dc.subject                   Total Electron Content                en
dc.subject                   deep learning                         en
dc.title                     Channel Mixer Layer: Multimodal Fusion Toward Machine Reasoning for Spatiotemporal Predictive Learning of Ionospheric Total Electron Content    en
dc.type                      journal article                       -
dc.type.niitype              Journal Article                       -
dc.identifier.jtitle         Space Weather                         en
dc.identifier.volume         22                                    -
dc.identifier.issue          12                                    -
dc.relation.doi              10.1029/2024SW004121                  -
dc.textversion               publisher                             -
dc.identifier.artnum         e2024SW004121                         -
dcterms.accessRights         open access                           -
dc.identifier.eissn          1542-7390                             -
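
The abstract above describes a feature-level "channel mixer" layer that injects auxiliary external drivers (e.g., solar and geomagnetic indices) into the feature channels of a spatiotemporal prediction backbone. The following is a minimal, hypothetical PyTorch-style sketch of how such a fusion layer could look; the class name, tensor shapes, and the scale-and-shift modulation followed by a pointwise channel remix are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch only: feature-level fusion of auxiliary drivers into the
# channel dimension of a (B, C, H, W) feature map from a spatiotemporal backbone.
import torch
import torch.nn as nn


class ChannelMixerLayer(nn.Module):
    """Mixes auxiliary external factors into the channels of a TEC feature map."""

    def __init__(self, num_channels: int, num_aux: int, hidden: int = 64):
        super().__init__()
        # Project the auxiliary drivers to one scale and one shift per channel.
        self.aux_mlp = nn.Sequential(
            nn.Linear(num_aux, hidden),
            nn.GELU(),
            nn.Linear(hidden, 2 * num_channels),
        )
        # Pointwise convolution that remixes channels after modulation.
        self.channel_mix = nn.Conv2d(num_channels, num_channels, kernel_size=1)
        self.norm = nn.GroupNorm(1, num_channels)

    def forward(self, feat: torch.Tensor, aux: torch.Tensor) -> torch.Tensor:
        # feat: (B, C, H, W) features from the prediction backbone
        # aux:  (B, D) external drivers for the same time step
        scale, shift = self.aux_mlp(aux).chunk(2, dim=-1)      # each (B, C)
        scale = scale.unsqueeze(-1).unsqueeze(-1)              # (B, C, 1, 1)
        shift = shift.unsqueeze(-1).unsqueeze(-1)
        modulated = self.norm(feat) * (1.0 + scale) + shift    # channel-wise modulation
        return feat + self.channel_mix(modulated)              # residual channel mixing


# Example: fuse two driver indices into a 64-channel TEC feature map.
if __name__ == "__main__":
    layer = ChannelMixerLayer(num_channels=64, num_aux=2)
    feat = torch.randn(8, 64, 72, 72)   # batch of TEC feature maps
    aux = torch.randn(8, 2)             # e.g. normalized F10.7 and Kp indices
    out = layer(feat, aux)
    print(out.shape)                    # torch.Size([8, 64, 72, 72])
```

In a setup like this, such a layer could be inserted between backbone blocks at each prediction step, so that the decoded TEC frames can react to changes in the driver inputs rather than extrapolating purely from past images.
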
Appears in Collections: Journal Articles

This item is licensed under a Creative Commons License.