Learning efficiency improvement of back-propagation algorithm by error saturation prevention method

Citation
H.M. Lee et al., Learning efficiency improvement of back-propagation algorithm by error saturation prevention method, Neurocomputing, 41, 2001, pp. 125-143
Citations number
33
Language
English
Article type
Article
Subject Categories
AI Robotics and Automatic Control
Journal title
NEUROCOMPUTING
ISSN journal
0925-2312
Volume
41
Year of publication
2001
Pages
125 - 143
Database
ISI
SICI code
0925-2312(200110)41:<125:LEIOBA>2.0.ZU;2-Q
Abstract
The back-propagation (BP) algorithm is currently the most widely used learning algorithm in artificial neural networks. With proper selection of the feed-forward neural network architecture, it is capable of approximating most problems with high accuracy and generalization ability. However, slow convergence is a serious problem when this well-known BP learning algorithm is used in many applications. As a result, many researchers have worked to improve the learning efficiency of the BP algorithm through various enhancements. In this research, we consider that the error saturation (ES) condition, which is caused by the use of the gradient descent method, greatly slows down the learning speed of the BP algorithm. Thus, in this paper, we analyze the causes of the ES condition in the output layer. An error saturation prevention (ESP) function is then proposed to keep the nodes in the output layer out of the ES condition. We also apply this method to the nodes in the hidden layers to adjust the learning terms. With the proposed methods, we not only improve learning efficiency by preventing the ES condition but also maintain the semantic meaning of the energy function. Finally, some simulations are given to show the workings of our proposed method. (C) 2001 Elsevier Science B.V. All rights reserved.
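The abstract does not give the ESP function itself, but the ES condition it targets is the standard vanishing-gradient effect of saturating activations: in BP with a logistic activation f, every weight update for a node is scaled by f'(x) = f(x)(1 - f(x)), which approaches zero when the node's output saturates near 0 or 1. A minimal background sketch (function names are illustrative, not from the paper):

```python
import math

def sigmoid(x: float) -> float:
    """Logistic activation commonly used in classic BP networks."""
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_derivative(x: float) -> float:
    """f'(x) = f(x) * (1 - f(x)); this factor multiplies every
    back-propagated error term for the node."""
    s = sigmoid(x)
    return s * (1.0 - s)

# At zero net input the derivative is at its maximum (0.25),
# so the gradient term is large and learning proceeds quickly.
print(f"f'(0)  = {sigmoid_derivative(0.0):.4f}")

# Deep in the saturation region the derivative is nearly zero:
# even a large output error yields an almost-zero weight update.
# This is the error saturation (ES) condition discussed above.
print(f"f'(10) = {sigmoid_derivative(10.0):.6f}")
```

The paper's contribution, per the abstract, is an ESP function that keeps output-layer (and hidden-layer) nodes out of this flat region of the derivative, so the gradient-descent update retains a usable magnitude.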