Views of this item: 342

Files in this item:
File | Description | Size | Format
j.compag.2020.105499.pdf | | 1.18 MB | Adobe PDF
Full metadata record
DC Field | Value | Language
dc.contributor.author | Li, Yang | en
dc.contributor.author | Iida, Michihisa | en
dc.contributor.author | Suyama, Tomoya | en
dc.contributor.author | Suguri, Masahiko | en
dc.contributor.author | Masuda, Ryohei | en
dc.contributor.alternative | 李, 楊 | ja
dc.contributor.alternative | 飯田, 訓久 | ja
dc.contributor.alternative | 壽山, 智也 | ja
dc.contributor.alternative | 村主, 勝彦 | ja
dc.contributor.alternative | 増田, 良平 | ja
dc.date.accessioned | 2020-07-03T06:06:48Z | -
dc.date.available | 2020-07-03T06:06:48Z | -
dc.date.issued | 2020-07 | -
dc.identifier.issn | 0168-1699 | -
dc.identifier.uri | http://hdl.handle.net/2433/252404 | -
dc.description.abstract | Convolutional neural networks (CNNs) are the current state of the art in image semantic segmentation (SS). However, their large computational cost makes them unsuitable for embedded devices, such as those on rice combine harvesters. To detect and identify the surrounding environment of a rice combine harvester in real time, Network Slimming was applied to an image cascade network (ICNet): it takes a wide neural network as the input model and yields a compact model (hereafter referred to as the "pruned model") with comparable accuracy. Network Slimming enforces channel-level sparsity in the convolutional layers of the ICNet by imposing L1 regularization on the channel scaling factors of the corresponding batch normalization layers, and then removes less informative feature channels from the convolutional layers to obtain a more compact model. Each pruned model was evaluated by mean intersection over union (IoU) on the test set. At a compaction ratio of 80%, the model volume is reduced by 97.4% and the model runs 1.33 times faster, with accuracy comparable to the original model. The results showed that at compaction ratios below 80%, a more efficient (lower computational cost) model was obtained with only slightly reduced accuracy compared to the original model. Field tests were conducted with the pruned model (80% compaction ratio) to verify its obstacle-detection performance. The average success rate of collision avoidance was 96.6% at an average processing speed of 32.2 FPS (31.1 ms per frame) for 640 × 480 pixel images on a Jetson Xavier. These results show that the pruned model can be used for obstacle detection and collision avoidance in robotic harvesters. | en
dc.format.mimetype | application/pdf | -
dc.language.iso | eng | -
dc.publisher | Elsevier B.V. | en
dc.rights | © 2020. This manuscript version is made available under the CC-BY-NC-ND 4.0 license http://creativecommons.org/licenses/by-nc-nd/4.0/. | en
dc.rights | The full-text file will be made open to the public on 1 July 2022 in accordance with the publisher's 'Terms and Conditions for Self-Archiving'. | en
dc.rights | This is not the published version. Please cite only the published version. | en
dc.subject | Robotic combine harvester | en
dc.subject | Deep learning | en
dc.subject | Human detection | en
dc.subject | Image cascade network | en
dc.subject | Network slimming | en
dc.title | Implementation of deep-learning algorithm for obstacle detection and collision avoidance for robotic harvester | en
dc.type | journal article | -
dc.type.niitype | Journal Article | -
dc.identifier.jtitle | Computers and Electronics in Agriculture | en
dc.identifier.volume | 174 | -
dc.relation.doi | 10.1016/j.compag.2020.105499 | -
dc.textversion | author | -
dc.identifier.artnum | 105499 | -
dc.address | Graduate School of Agriculture, Kyoto University | en
dc.address | Graduate School of Agriculture, Kyoto University | en
dc.address | Graduate School of Agriculture, Kyoto University | en
dc.address | Graduate School of Agriculture, Kyoto University | en
dc.address | Graduate School of Agriculture, Kyoto University | en
dcterms.accessRights | open access | -
datacite.date.available | 2022-07-01 | -
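The pruning criterion described in the abstract (L1 regularization on batch-normalization scaling factors, then removal of the channels with the smallest factors) can be sketched as below. This is an illustrative, stdlib-only sketch under stated assumptions, not the authors' implementation; the function names and gamma values are hypothetical.

```python
# Sketch of the channel-selection step of Network Slimming.
# Assumption: the batch-norm scale factors (gamma) have already been
# trained with the L1 sparsity penalty, so small |gamma| marks a
# less informative channel.

def l1_penalty(gammas, lam=1e-4):
    """Sparsity term added to the training loss: lam * sum(|gamma|)."""
    return lam * sum(abs(g) for g in gammas)

def select_channels(gammas, compaction_ratio):
    """Return indices of channels to KEEP after pruning.

    gammas: per-channel batch-norm scaling factors of one layer.
    compaction_ratio: fraction of channels to remove (e.g. 0.8).
    """
    n_prune = int(len(gammas) * compaction_ratio)
    # Rank channels by |gamma|; the smallest are pruned first.
    order = sorted(range(len(gammas)), key=lambda i: abs(gammas[i]))
    pruned = set(order[:n_prune])
    return [i for i in range(len(gammas)) if i not in pruned]

# Example: 10 channels at an 80% compaction ratio keeps the two
# channels with the largest |gamma| (indices 1 and 5 here).
gammas = [0.01, 0.9, 0.002, 0.4, 0.03, 1.2, 0.0005, 0.08, 0.6, 0.05]
keep = select_channels(gammas, 0.8)
print(keep)  # -> [1, 5]
```

In the full method, the convolutional layers are then rebuilt with only the kept channels and the compact network is fine-tuned to recover accuracy.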
Appears in Collections: Journal Articles, etc.






All items in this repository are protected by copyright.