Access count for this item: 60
Files in this item:
File | Description | Size | Format
---|---|---|---
ACCESS.2023.3264855.pdf | | 1.17 MB | Adobe PDF
Full metadata record
DC Field | Value | Language
---|---|---
dc.contributor.author | Tashiro, Yuma | en |
dc.contributor.author | Awano, Hiromitsu | en |
dc.contributor.alternative | 粟野, 皓光 | ja |
dc.date.accessioned | 2023-10-03T04:08:26Z | - |
dc.date.available | 2023-10-03T04:08:26Z | - |
dc.date.issued | 2023-04-05 | - |
dc.identifier.uri | http://hdl.handle.net/2433/285273 | - |
dc.description.abstract | Modern deep learning algorithms comprise highly complex artificial neural networks, making it extremely difficult for humans to track their inference processes. As the social implementation of deep learning progresses, the human and economic losses caused by inference errors are becoming increasingly problematic, making it necessary to develop methods to explain the basis for the decisions of deep learning algorithms. Although an attention mechanism-based method to visualize the regions that contribute to steering angle prediction in an automated driving task has been proposed, its explanatory capability is low. In this paper, we focus on the fact that the importance of each bit in the activation value of a network is biased (i.e., the sign and exponent bits are weighted more heavily than the mantissa bits), which has been overlooked in previous studies. Specifically, this paper quantizes network activations, encouraging important information to be aggregated to the sign bit. Further, we introduce an attention mechanism restricted to the sign bit to improve the explanatory power. Our numerical experiment using the Udacity dataset revealed that the proposed method achieves a 1.14× higher area under curve (AUC) in terms of the deletion metric. | en |
dc.language.iso | eng | - |
dc.publisher | Institute of Electrical and Electronics Engineers (IEEE) | en |
dc.rights | This work is licensed under a Creative Commons Attribution 4.0 License | en |
dc.rights.uri | https://creativecommons.org/licenses/by/4.0/ | - |
dc.subject | Self-driving | en |
dc.subject | explainable AI | en |
dc.subject | attention | en |
dc.subject | quantized neural network | en |
dc.title | Pay Attention via Quantization: Enhancing Explainability of Neural Networks via Quantized Activation | en |
dc.type | journal article | - |
dc.type.niitype | Journal Article | - |
dc.identifier.jtitle | IEEE Access | en |
dc.identifier.volume | 11 | - |
dc.identifier.spage | 34431 | - |
dc.identifier.epage | 34439 | - |
dc.relation.doi | 10.1109/ACCESS.2023.3264855 | - |
dc.textversion | publisher | - |
dcterms.accessRights | open access | - |
datacite.awardNumber | 21H03409 | - |
datacite.awardNumber.uri | https://kaken.nii.ac.jp/grant/KAKENHI-PROJECT-21H03409/ | - |
dc.identifier.eissn | 2169-3536 | - |
jpcoar.funderName | 日本学術振興会 | ja |
jpcoar.awardTitle | 未来予測技術で切り拓く疑似ゼロレイテンシ・テレイグジスタンス | ja |
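The abstract above describes quantizing network activations so that the important information is aggregated into the sign bit, then restricting the attention mechanism to that bit. A minimal sketch of the idea, assuming uniform int8 quantization and a toy softmax attention map (this is not the paper's implementation; all function names are hypothetical):

```python
import numpy as np

def quantize_int8(x, scale):
    """Uniform quantization of float activations to int8 (hypothetical helper)."""
    return np.clip(np.round(x / scale), -128, 127).astype(np.int8)

def sign_bit(q):
    """Extract the sign bit of each quantized activation: 1 if negative, else 0."""
    return (q < 0).astype(np.float32)

def sign_bit_attention(q):
    """Toy attention map restricted to the sign bit: softmax over the
    sign-bit indicator, so only sign information can contribute."""
    s = sign_bit(q)
    e = np.exp(s - s.max())          # stable softmax over the whole map
    return e / e.sum()

rng = np.random.default_rng(0)
act = rng.normal(size=(4, 4)).astype(np.float32)  # stand-in activations
q = quantize_int8(act, scale=0.05)
attn = sign_bit_attention(q)
print(attn.sum())                    # attention weights sum to 1
```

Because the attention weights depend only on the sign bit, the resulting map directly visualizes where the sign-bit information is concentrated, which is the explanatory signal the paper targets.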
Appears in Collections: Journal Articles, etc.

This item is licensed under a Creative Commons License.