
Files in this item:
File: TCYB.2014.2313655.pdf (495.86 kB, Adobe PDF)
Title: Acceleration of reinforcement learning by policy evaluation using nonstationary iterative method.
Authors: Senda, Kei
Hattori, Suguru
Hishinuma, Toru
Kohda, Takehisa
Alternative author name: 泉田, 啓
Issue date: Dec-2014
Publisher: IEEE
Journal title: IEEE transactions on cybernetics
Volume: 44
Issue: 12
Start page: 2696
End page: 2705
Abstract: Typical methods for solving reinforcement learning problems iterate two steps, policy evaluation and policy improvement. This paper proposes algorithms for the policy evaluation to improve learning efficiency. The proposed algorithms are based on the Krylov Subspace Method (KSM), which is a nonstationary iterative method. The algorithms based on KSM are tens to hundreds of times more efficient than existing algorithms based on stationary iterative methods, and far more efficient than has generally been expected. This paper clarifies what makes algorithms based on KSM more efficient, with numerical examples and theoretical discussions.
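The contrast the abstract draws can be illustrated on the linear system that policy evaluation solves, (I - γPᵖ)v = r. The sketch below (not the paper's algorithm; the MDP, sizes, and tolerances are illustrative assumptions) compares a stationary fixed-point iteration with a Krylov subspace solver (GMRES from SciPy) on the same system:

```python
# Policy evaluation for a fixed policy solves (I - gamma * P) v = r,
# where P is the policy's transition matrix and r the one-step rewards.
# Illustrative comparison: stationary iteration vs. a Krylov (GMRES) solve.
import numpy as np
from scipy.sparse.linalg import gmres

rng = np.random.default_rng(0)
n, gamma = 50, 0.95

# Random row-stochastic transition matrix under the evaluated policy.
P = rng.random((n, n))
P /= P.sum(axis=1, keepdims=True)
r = rng.random(n)

A = np.eye(n) - gamma * P  # Bellman evaluation operator

# Stationary iterative method: v <- r + gamma * P v (linear convergence
# at rate gamma, so many sweeps are needed when gamma is close to 1).
v_stat = np.zeros(n)
for _ in range(2000):
    v_stat = r + gamma * P @ v_stat

# Nonstationary (Krylov subspace) method: GMRES on the same system,
# which typically needs far fewer matrix-vector products.
v_krylov, info = gmres(A, r)

v_exact = np.linalg.solve(A, r)
print(np.allclose(v_stat, v_exact, atol=1e-3),
      np.allclose(v_krylov, v_exact, atol=1e-3))
```

Both solvers reach the same value function; the paper's contribution concerns why and by how much the Krylov route is cheaper.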
Rights: © 2014 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
This is not the published version. Please cite only the published version.
URI: http://hdl.handle.net/2433/192769
DOI (publisher's version): 10.1109/TCYB.2014.2313655
PubMed ID: 24733037
Appears in collections: Journal Articles






All items in this repository are protected by copyright.