Files in this item:
File: ACCESS.2021.3121751.pdf (2.1 MB, Adobe PDF)
Title: Deep Adversarial Reinforcement Learning With Noise Compensation by Autoencoder
Authors: Ohashi, Kohei
Nakanishi, Kosuke
Sasaki, Wataru
Yasui, Yuji
Ishii, Shin
Alternative author names: 大橋, 康平
中西, 康輔
佐々木, 航
石井, 信
Keywords: Deep reinforcement learning
adversarial learning
robustness
regularization
automatic vehicle control
Issue date: 2021
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Journal: IEEE Access
Volume: 9
Start page: 143901
End page: 143912
Abstract: We present a new adversarial learning method for deep reinforcement learning (DRL). In this method, a robust internal representation is induced in a deep Q-network (DQN) by applying adversarial noise that disturbs the DQN policy, which is then compensated for by an autoencoder network. In particular, we propose a new type of adversarial noise: it encourages the policy to choose the worst action leading to the worst outcome at each state. When the proposed method, called deep Q-W-network regularized with an autoencoder (DQWAE), was applied to seven Atari 2600 games, the results were convincing: DQWAE exhibited greater robustness against random/adversarial noise added to the input and accelerated learning more than the baseline DQN. When applied to a realistic automatic driving simulation, the proposed DRL method was also effective at making the acquired policy robust against random/adversarial noise.
Rights: This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 License.
URI: http://hdl.handle.net/2433/277586
DOI (publisher version): 10.1109/ACCESS.2021.3121751
Appears in collections: Papers published in academic journals, etc.
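
The abstract above describes the approach only at a high level. The following is a minimal, hypothetical PyTorch sketch of that general idea, not the authors' DQWAE implementation: the network sizes, the FGSM-style perturbation step, epsilon, and the loss weighting are all illustrative assumptions; see the paper at the DOI above for the actual method.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class QNetwork(nn.Module):
    """Small MLP Q-network over a flat state vector (size assumed for illustration)."""
    def __init__(self, state_dim: int, n_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 128), nn.ReLU(),
            nn.Linear(128, n_actions),
        )

    def forward(self, s):
        return self.net(s)

class Autoencoder(nn.Module):
    """Autoencoder that tries to reconstruct (compensate) the clean state from a noisy one."""
    def __init__(self, state_dim: int):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU())
        self.decoder = nn.Linear(64, state_dim)

    def forward(self, s):
        return self.decoder(self.encoder(s))

def worst_action_noise(q_net, states, epsilon=0.01):
    """FGSM-style targeted perturbation that nudges the greedy policy toward the
    lowest-valued ("worst") action at each state -- an assumed stand-in for the
    adversarial noise described in the abstract."""
    states = states.clone().detach().requires_grad_(True)
    q = q_net(states)
    worst = q.argmin(dim=1)               # action with the lowest Q-value per state
    loss = F.cross_entropy(q, worst)      # lower loss => worst action more likely
    grad, = torch.autograd.grad(loss, states)
    return (-epsilon * grad.sign()).detach()

# Illustrative usage: perturb the state, let the autoencoder compensate, and
# penalize reconstruction error alongside the usual TD loss (lambda_recon assumed).
state_dim, n_actions = 8, 4
q_net, ae = QNetwork(state_dim, n_actions), Autoencoder(state_dim)
states = torch.randn(32, state_dim)

noisy = states + worst_action_noise(q_net, states)
reconstructed = ae(noisy)
recon_loss = F.mse_loss(reconstructed, states)   # autoencoder compensation term
q_values = q_net(reconstructed)                  # Q-learning continues on the compensated state
# total_loss = td_loss(q_values, targets) + lambda_recon * recon_loss
```

The sign-gradient step here is a standard targeted FGSM perturbation, used only as a plausible stand-in for whatever noise-generation scheme the paper actually employs.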
