Files in this item:
File | Description | Size | Format
---|---|---|---
s10015-024-00954-7.pdf | | 761.22 kB | Adobe PDF
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Hirata, Satoshi | en |
dc.contributor.author | Sakai, Yutaka | en |
dc.contributor.alternative | 平田, 聡 | ja |
dc.date.accessioned | 2025-06-03T01:38:12Z | - |
dc.date.available | 2025-06-03T01:38:12Z | - |
dc.date.issued | 2024-08 | - |
dc.identifier.uri | http://hdl.handle.net/2433/294473 | - |
dc.description.abstract | Reinforcement learning is a mathematical framework for learning better choices through trial and error. Recent studies have revealed that reinforcement learning is applicable to animal behavior and cognition. However, applying reinforcement learning to animal behavior sometimes encounters difficulties because the information sources that animals use to make choices, identified as the "state" in the reinforcement learning framework, are often unknown. We sought to identify possible state settings, including non-standard formulations, suitable for explaining data from past chimpanzee studies. Although the chimpanzees' performance in a serial learning task was inconsistent with standard reinforcement learning formulations, we found that the combination of state-independent choice making and state-dependent evaluation produced consistent results. Exploration of state settings in reinforcement learning may shed new light on animal learning processes. | en |
dc.language.iso | eng | - |
dc.publisher | Springer Nature | en |
dc.rights | This version of the article has been accepted for publication, after peer review (when applicable) and is subject to Springer Nature's AM terms of use, but is not the Version of Record and does not reflect post-acceptance improvements, or any corrections. The Version of Record is available online at: https://doi.org/10.1007/s10015-024-00954-7 | en |
dc.rights | The full-text file will be made open to the public on 05 June 2025 in accordance with publisher's 'Terms and Conditions for Self-Archiving'. | en |
dc.rights | This is not the published version. Please cite only the published version. | en |
dc.subject | Reinforcement learning | en |
dc.subject | Chimpanzee | en |
dc.subject | Serial learning | en |
dc.subject | Actor-Critic | en |
dc.title | Inferring source of learning by chimpanzees in cognitive tasks using reinforcement learning theory | en |
dc.type | journal article | - |
dc.type.niitype | Journal Article | - |
dc.identifier.jtitle | Artificial Life and Robotics | en |
dc.identifier.volume | 29 | - |
dc.identifier.spage | 398 | - |
dc.identifier.epage | 403 | - |
dc.relation.doi | 10.1007/s10015-024-00954-7 | - |
dc.textversion | author | - |
dcterms.accessRights | embargoed access | - |
datacite.date.available | 2025-06-05 | - |
datacite.awardNumber | 23H00494 | - |
datacite.awardNumber.uri | https://kaken.nii.ac.jp/grant/KAKENHI-PROJECT-23H00494/ | - |
dc.identifier.pissn | 1433-5298 | - |
dc.identifier.eissn | 1614-7456 | - |
jpcoar.funderName | 日本学術振興会 (Japan Society for the Promotion of Science) | ja |
jpcoar.awardTitle | 時間に関連した認知機能の進化的基盤を探る (Exploring the evolutionary basis of time-related cognitive functions) | ja |
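The abstract's key finding, pairing state-independent choice making with state-dependent evaluation, can be sketched as a toy actor-critic loop. Everything below (the two-state task, the reward rule, the parameter values, and all names) is a hypothetical illustration, not the study's actual model or task:

```python
import math
import random

# Illustrative sketch only: an actor-critic learner whose actor is
# state-INDEPENDENT (one shared preference per action) while its critic is
# state-DEPENDENT (one value estimate per state) -- the combination the
# abstract reports as consistent with the chimpanzee data.
# The two-state toy task and all parameters here are hypothetical.

random.seed(0)

STATES = [0, 1]
N_ACTIONS = 2
ALPHA = 0.1   # learning rate
GAMMA = 0.9   # discount factor

pref = [0.0] * N_ACTIONS          # actor: preferences shared across all states
value = {s: 0.0 for s in STATES}  # critic: a separate value for each state

def softmax_choice(prefs):
    """Sample an action index with probability proportional to exp(preference)."""
    exps = [math.exp(p) for p in prefs]
    r = random.random() * sum(exps)
    acc = 0.0
    for action, e in enumerate(exps):
        acc += e
        if r <= acc:
            return action
    return len(prefs) - 1

def env_step(state, action):
    """Hypothetical rule: the rewarded action happens to match the state index."""
    reward = 1.0 if action == state else 0.0
    return reward, random.choice(STATES)

state = random.choice(STATES)
for _ in range(5000):
    action = softmax_choice(pref)
    reward, next_state = env_step(state, action)
    # The TD error is computed with the state-dependent critic...
    delta = reward + GAMMA * value[next_state] - value[state]
    value[state] += ALPHA * delta
    # ...but it updates an actor that ignores the state entirely.
    pref[action] += ALPHA * delta
    state = next_state
```

Because the actor has no access to the state, it cannot learn a state-specific policy even though the critic tracks state values correctly; the contrast between the two update targets is the point of the sketch.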
Appears in Collections: | Journal Articles, etc. |

All items in this repository are protected by copyright.