Files in this item:

File | Description | Size | Format
---|---|---|---
OJCAS.2024.3482469.pdf | | 3.02 MB | Adobe PDF
Title: | Double MAC on a Cell: A 22-nm 8T-SRAM-Based Analog In-Memory Accelerator for Binary/Ternary Neural Networks Featuring Split Wordline
Authors: | Tagata, Hiroto; Sato, Takashi; Awano, Hiromitsu
Alternative author names: | 田形, 寛斗; 佐藤, 高史; 粟野, 皓光
Keywords: | Quantized neural network (QNN); analog computing-in-memory (CIM); static random access memory (SRAM); voltage-mode accumulation; multiply-and-accumulation (MAC)
Issue date: | 2024
Publisher: | Institute of Electrical and Electronics Engineers (IEEE)
Journal title: | IEEE Open Journal of Circuits and Systems
Volume: | 5
Start page: | 328
End page: | 340
Abstract: | This paper proposes a novel 8T-SRAM-based computing-in-memory (CIM) accelerator for binary/ternary neural networks. The proposed split dual-port 8T-SRAM cell has two input ports and simultaneously performs two binary multiply-and-accumulate (MAC) operations on the left and right bitlines. This approach doubles throughput without significantly increasing area or power consumption, since the only area overhead is two additional WL wires compared to the conventional 8T-SRAM. In addition, the proposed circuit supports both binary and ternary activation inputs, allowing a flexible trade-off between high energy efficiency and high inference accuracy depending on the application. The proposed SRAM macro consists of a 128×128 SRAM array that outputs the MAC results of 96 binary/ternary inputs and 96×128 binary weights as 1-5-bit digital values. Circuit performance was evaluated by post-layout simulation of the overall CIM macro laid out in a 22-nm process. The circuit operates at 1 GHz and achieves a maximum area efficiency of 3320 TOPS/mm², 3.4× higher than existing work, with a reasonable energy efficiency of 1471 TOPS/W. The simulated inference accuracies are 96.45%/97.67% on the MNIST dataset with a binary/ternary MLP model, and 86.32%/88.56% on the CIFAR-10 dataset with a binary/ternary VGG-like CNN model.
Rights: | © 2024 The Authors. This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 License.
URI: | http://hdl.handle.net/2433/291061 |
DOI (publisher version): | 10.1109/OJCAS.2024.3482469
Appears in collections: | Journal articles, etc.

This item is licensed under a Creative Commons License.
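The abstract describes MAC operations over ternary activations and binary weights whose accumulated results are read out as 1-5-bit digital codes. The arithmetic can be sketched as below; this is a minimal behavioral model of the computation only, not of the analog voltage-mode circuit, and all function names and the output-scaling scheme are illustrative assumptions, not taken from the paper.

```python
import random


def mac(inputs, weights):
    """Dot product of ternary inputs {-1, 0, +1} and binary weights {-1, +1}."""
    assert len(inputs) == len(weights)
    return sum(x * w for x, w in zip(inputs, weights))


def quantize(acc, n_inputs, bits=5):
    """Map an accumulated value in [-n_inputs, +n_inputs] to an unsigned
    n-bit digital code, loosely mimicking a 1-5-bit digital readout.
    The linear scaling chosen here is an assumption for illustration."""
    levels = (1 << bits) - 1
    code = round((acc + n_inputs) * levels / (2 * n_inputs))
    return max(0, min(levels, code))


# Example: 96 inputs per column, matching the 96x128 weight sub-array size.
random.seed(0)
xs = [random.choice((-1, 0, 1)) for _ in range(96)]  # ternary activations
ws = [random.choice((-1, 1)) for _ in range(96)]     # binary weights
acc = mac(xs, ws)
print(acc, quantize(acc, 96))
```

With binary (rather than ternary) activations, the same `mac` function applies with inputs restricted to {-1, +1}; the paper's split-wordline cell evaluates two such dot products per cycle on the left and right bitlines.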