Residue Number System Based Convolution Neural Network Algorithm

Date: 2022-06-20
Publisher: International Journal of Nature And Science Advance Research
Abstract
A number of converging factors have aided the development of Deep Learning. Floating-point operations are highly optimized in modern microarchitectures. An entire area of research has emerged around quantized models, which reduce the required memory by orders of magnitude, with a particular focus on quantized convolution neural networks. However, there is still a need to rethink how these quantized models can be accelerated efficiently. The research starts by recognizing that inference in convolution neural networks is fundamentally a Digital Signal Processing (DSP) task. Memory, computation, and power are expensive resources, in a different but related way to how they are on modern mobile platforms, whose budget is set by battery capacity. It is therefore of the utmost importance to provide an alternative solution to this problem. This research introduces a Residue Number System architecture to take advantage of the limited (but not binary) range of values that the operands can assume during the convolution operation in a quantized convolution neural network.