Yasser Abd. Djawad
Department of Electronics Engineering, Universitas Negeri Makassar, Makassar, 90223, Indonesia

Design of Quantized Deep Neural Network Hardware Inference Accelerator Using Systolic Architecture Dary Mochamad Rifqie; Yasser Abd. Djawad; Faizal Arya Samman; Ansari Saleh Ahmar; M. Miftach Fakhri
Journal of Applied Science, Engineering, Technology, and Education Vol. 6 No. 1 (2024)
Publisher : PT Mattawang Mediatama Solution

DOI: 10.35877/454RI.asci2689

Abstract

This paper presents a hardware inference accelerator architecture for quantized deep neural networks (DNNs). The proposed accelerator implements all computation of a quantized DNN, including linear transformations such as matrix multiplication, nonlinear activation functions such as ReLU, and the quantization and dequantization operations. The accelerator consists of a matrix multiplication core, implemented as a systolic array, and a QDR core that performs the quantization, dequantization, and ReLU operations. The proposed architecture is implemented in Verilog Hardware Description Language (HDL) and simulated using ModelSim. To validate it, we simulated the quantized DNN in Python and compared the results with those of the proposed hardware accelerator. The comparison shows only a very slight difference, confirming the correctness of the quantized DNN hardware accelerator.
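The dataflow described above (quantize, systolic matrix multiplication, then a QDR step of dequantization, ReLU, and requantization) can be sketched in Python. This is an illustrative model only, not the paper's actual reference implementation: the affine int8 scheme, the scales, and the cycle-level loop of the systolic array are all assumptions made for the sketch.

```python
import numpy as np

def quantize(x, scale, zero_point=0):
    """Affine-quantize a float array to int8 (illustrative scheme)."""
    q = np.round(x / scale) + zero_point
    return np.clip(q, -128, 127).astype(np.int8)

def dequantize(q, scale, zero_point=0):
    """Map quantized integers back to floats."""
    return scale * (q.astype(np.int32) - zero_point)

def systolic_matmul(a, b):
    """Cycle-by-cycle model of an output-stationary systolic array:
    PE (i, j) holds a 32-bit accumulator and adds one int8*int8 product
    per 'cycle' as operands stream through the array."""
    n, k = a.shape
    k2, m = b.shape
    assert k == k2
    acc = np.zeros((n, m), dtype=np.int32)
    for t in range(k):  # one streamed operand pair per cycle
        for i in range(n):
            for j in range(m):
                acc[i, j] += np.int32(a[i, t]) * np.int32(b[t, j])
    return acc

def qdr_relu(acc, in_scale, w_scale, out_scale):
    """QDR step: dequantize the int32 accumulators, apply ReLU,
    then requantize to int8 for the next layer."""
    y = dequantize(acc, in_scale * w_scale)  # combined scale of the products
    y = np.maximum(y, 0.0)                   # ReLU
    return quantize(y, out_scale)

# Toy check of the quantized pipeline against float arithmetic
rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, (2, 4)).astype(np.float32)
w = rng.uniform(-1.0, 1.0, (4, 3)).astype(np.float32)
x_q, w_q = quantize(x, 0.02), quantize(w, 0.02)
acc = systolic_matmul(x_q, w_q)
y_q = qdr_relu(acc, 0.02, 0.02, 0.05)
y_ref = np.maximum(x @ w, 0.0)
print(np.abs(dequantize(y_q, 0.05) - y_ref).max())  # small quantization error
```

The hardware analogue of this comparison is what the paper reports: the systolic-array output differs from the Python float model only by the error inherent to quantization.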