Downloads: 119

Files in This Item:
File: transfun.E101.A.1092.pdf  Size: 9.48 MB  Format: Adobe PDF
Title: Efficient Mini-batch Training on Memristor Neural Network Integrating Gradient Calculation and Weight Update
Authors: YAMAMORI, Satoshi
HIROMOTO, Masayuki
SATO, Takashi
Author's alias: 廣本, 正之
佐藤, 高史
Keywords: memristor
neural network
mini-batch training
stochastic gradient descent
Issue Date: 1-Jul-2018
Publisher: The Institute of Electronics, Information and Communication Engineers (IEICE)
Journal title: IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences
Volume: E101-A
Issue: 7
Start page: 1092
End page: 1100
Abstract: We propose an efficient training method for memristor neural networks. The proposed method is suitable for mini-batch-based training, a technique common to many neural networks. By integrating two processes, gradient calculation in the backpropagation algorithm and weight update in the write operation to the memristors, the proposed method accelerates training and eliminates the external computing resources, such as multipliers and memories, required by the existing method. Through numerical experiments, we demonstrate that the proposed method converges twice as fast as the existing method while retaining the same level of classification accuracy.
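The core idea stated in the abstract, folding the per-example outer-product gradient directly into the write operation on the crossbar instead of accumulating it in external memory, can be illustrated with a minimal sketch. The class and function names below are hypothetical and the crossbar is simulated as a plain NumPy array; this is not the paper's implementation, only an illustration of merging gradient calculation with the weight write for a single-layer network.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class MemristorCrossbar:
    """Toy conductance array standing in for a memristor crossbar.
    Names and structure are illustrative, not from the paper."""
    def __init__(self, n_in, n_out):
        # Conductances play the role of weights.
        self.G = rng.normal(0.0, 0.1, (n_in, n_out))

    def forward(self, x):
        # Analog vector-matrix multiply performed by the crossbar.
        return x @ self.G

    def write_update(self, x, delta, lr):
        # Outer-product write: the per-example gradient x^T * delta is
        # applied as a conductance change directly, so no external
        # multiplier or gradient memory is needed.
        self.G += lr * np.outer(x, delta)

def train(crossbar, X, y, lr=0.5, epochs=500, batch_size=2):
    for _ in range(epochs):
        for start in range(0, len(X), batch_size):
            xb, yb = X[start:start + batch_size], y[start:start + batch_size]
            for x, t in zip(xb, yb):  # per-example write within the batch
                out = sigmoid(crossbar.forward(x))
                delta = (t - out) * out * (1 - out)  # output-layer error term
                crossbar.write_update(x, delta, lr)

# Tiny linearly separable task (logical OR with a bias input).
X = np.array([[0, 0, 1], [0, 1, 1], [1, 0, 1], [1, 1, 1]], float)
y = np.array([[0], [1], [1], [1]], float)
cb = MemristorCrossbar(3, 1)
train(cb, X, y)
```

In the real device, `write_update` would correspond to applying programming pulses whose effect is proportional to the input and error signals, which is what lets gradient calculation and weight update share one physical step.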
Rights: © 2018 The Institute of Electronics, Information and Communication Engineers. Posted here in accordance with the license conditions.
DOI(Published Version): 10.1587/transfun.E101.A.1092
Appears in Collections:Journal Articles

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.