Files in this item:
plantphenomics.0073.pdf (20.21 MB, Adobe PDF)
Title: Deep learning enables instant and versatile estimation of rice yield using ground-based RGB images
Authors: Tanaka, Yu
Watanabe, Tomoya
Katsura, Keisuke
Tsujimoto, Yasuhiro
Takai, Toshiyuki
Tanaka, Takashi Sonam Tashi
Kawamura, Kensuke
Saito, Hiroki
Homma, Koki
Mairoua, Salifou Goube
Ahouanton, Kokou
Ibrahim, Ali
Senthilkumar, Kalimuthu
Semwal, Vimal Kumar
Matute, Eduardo Graterol
Corredor, Edgar
El-Namaky, Raafat
Manigbas, Norbie L.
Quilang, Eduardo Jimmy P.
Iwahashi, Yu (ORCID: https://orcid.org/0000-0002-1291-6329)
Nakajima, Kota
Takeuchi, Eisuke
Saito, Kazuki
Alternative author names: 田中, 佑
渡邊, 智也
桂, 圭佑
辻本, 泰弘
高井, 俊之
田中, 貴
川村, 健介
齊藤, 大樹
本間, 香貴
岩橋, 優
中嶌, 洸太
竹内, 瑛祐
齋藤, 和樹
Issue date: 28-Jul-2023
Publisher: American Association for the Advancement of Science (AAAS)
Journal title: Plant Phenomics
Volume: 5
Article number: 0073
Abstract: Rice (Oryza sativa L.) is one of the most important cereals, providing 20% of the world's food energy. However, its productivity is poorly assessed, especially in the global South. Here, we present the first study to apply a deep-learning-based approach for instantaneously estimating rice yield from red-green-blue (RGB) images. During the ripening stage and at harvest, over 22,000 digital images were captured vertically downward over the rice canopy from a distance of 0.8 to 0.9 m at 4,820 harvesting plots with yields of 0.1 to 16.1 t·ha⁻¹ across six countries in Africa and Japan. A convolutional neural network applied to the images taken at harvest predicted 68% of the variation in yield with a relative root mean square error of 0.22. The developed model successfully detected genotypic differences and the impact of agronomic interventions on yield in an independent dataset. The model was also robust to images acquired at shooting angles of up to 30° from the right angle, to diverse light environments, and to the shooting date during the late ripening stage. Even when image resolution was reduced (from 0.2 to 3.2 cm·pixel⁻¹ of ground sampling distance), the model could still predict 57% of the variation in yield, implying that this approach can be scaled up through the use of unmanned aerial vehicles. Our work offers a low-cost, hands-on, and rapid approach to high-throughput phenotyping and can lead to impact assessment of productivity-enhancing interventions, detection of fields where such interventions are needed to sustainably increase crop production, and yield forecasting several weeks before harvest.
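To make the approach described in the abstract concrete, the following is a minimal sketch (not the authors' released code) of a CNN-based yield regressor for plot-level RGB canopy images, together with the relative root mean square error metric reported above. The ResNet-18 backbone, 224x224 input size, and the helper names predict_yield and relative_rmse are illustrative assumptions.

# Minimal sketch, assuming a PyTorch setup where each RGB canopy image is
# paired with a plot-level yield value (t/ha). Backbone, input size, and
# helper names are assumptions, not the authors' implementation.
import torch
import torch.nn as nn
from torchvision import models, transforms
from PIL import Image

# CNN backbone with a single-output regression head for yield (t/ha).
model = models.resnet18(weights=None)          # backbone choice: assumption
model.fc = nn.Linear(model.fc.in_features, 1)  # regress one value per image

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),  # input size: assumption
    transforms.ToTensor(),
])

def predict_yield(image_path: str) -> float:
    # Predict yield (t/ha) for one ground-based RGB canopy image.
    img = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    model.eval()
    with torch.no_grad():
        return model(img).item()

def relative_rmse(pred: torch.Tensor, obs: torch.Tensor) -> float:
    # Relative RMSE: RMSE divided by the mean observed yield, the
    # dimensionless metric reported in the abstract (0.22 at harvest).
    rmse = torch.sqrt(torch.mean((pred - obs) ** 2))
    return (rmse / obs.mean()).item()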
記述: "AIの目"によるイネ収穫量の簡単・迅速推定. 京都大学プレスリリース. 2023-07-21.
Rights: Copyright © 2023 Yu Tanaka et al.
Exclusive licensee Nanjing Agricultural University. No claim to original U.S. Government Works. Distributed under a Creative Commons Attribution License 4.0 (CC BY 4.0).
URI: http://hdl.handle.net/2433/284495
DOI (publisher version): 10.34133/plantphenomics.0073
Related link: https://www.kyoto-u.ac.jp/ja/research-news/2023-07-21
Appears in collections: Journal Articles
