Digital Library

Title:      AN EFFICIENT, LOW RESOURCE, ARCHITECTURE FOR BACKPROPAGATION NEURAL NETWORKS
Author(s):      Pedro O. Domingos, Horácio C. Neto
ISBN:      972-99353-8-6
Editors:      João M. P. Cardoso
Year:      2005
Edition:      Single
Keywords:      ANN, Backpropagation, FPGA, Reconfigurable Computing.
Type:      Workshop Paper
First Page:      123
Last Page:      130
Language:      English
Paper Abstract:      This paper describes the design and implementation, in reconfigurable hardware, of an efficient, low-resource architecture for an artificial neural network (ANN) with supervised learning capabilities. Neural networks are artificial systems that try to emulate the brain's cognitive behavior. They are mainly parallel computational systems, organized in clusters of neural processors, which can learn tasks of some complexity such as optimization problems, data mining, and text and speech recognition. A broad range of special-purpose hardware architectures has been proposed in the past to speed up the execution of neural network algorithms. However, most of these do not provide on-line learning capabilities, because of the implementation complexity of the backpropagation pass. This work proposes a new architecture, targeted at low-resource FPGAs, which successfully unrolls the learning algorithm to fully expose the bit-level parallelism available in the backpropagation pass. Further, the use of bit-serial operators improves the efficiency of the implementation while minimizing the hardware resources required. A series of experiments has been performed on practical examples; results for an optical character recognition (OCR) task indicate that the proposed architecture achieves significant improvements over previously proposed methods.
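The abstract's efficiency claim rests on bit-serial operators, which consume one operand bit per clock cycle rather than instantiating a full parallel multiplier. The C sketch below is a minimal software model of that idea only (an LSB-first shift-and-add serial multiply feeding a neuron's pre-activation sum); the word width, operand values, and function names are illustrative assumptions and do not reproduce the paper's actual FPGA datapath.

/*
 * Software model (illustrative assumption, not the paper's RTL) of a
 * bit-serial multiply-accumulate: the weight is streamed LSB-first,
 * one bit per "cycle", so no parallel multiplier is needed.
 */
#include <stdint.h>
#include <stdio.h>

#define WIDTH 16  /* fixed-point word width, chosen arbitrarily here */

/* Multiply a WIDTH-bit unsigned weight by an input using shift-and-add,
 * mirroring how a bit-serial datapath would process one weight bit per
 * clock cycle. */
static uint32_t bitserial_mul(uint16_t weight, uint16_t x)
{
    uint32_t acc = 0;
    for (int cycle = 0; cycle < WIDTH; ++cycle) {
        if ((weight >> cycle) & 1u)      /* current serial bit of the weight */
            acc += (uint32_t)x << cycle; /* partial product for this cycle   */
    }
    return acc;
}

int main(void)
{
    /* Example neuron pre-activation: sum of bit-serial products. */
    uint16_t w[3] = { 3, 7, 12 };
    uint16_t x[3] = { 5, 2, 1 };
    uint32_t sum = 0;
    for (int i = 0; i < 3; ++i)
        sum += bitserial_mul(w[i], x[i]);
    printf("pre-activation = %u\n", (unsigned)sum); /* 3*5 + 7*2 + 12*1 = 41 */
    return 0;
}

In hardware, the shift-and-add loop would be laid out over successive clock cycles, which is what lets many such operators fit on a low-resource FPGA at once.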
   
