Comparative Study of High Speed Back-Propagation Learning Algorithms

Authors: Saduf, Mohd. Arif Wani

Journal: International Journal of Modern Education and Computer Science (IJMECS)

Issue: Vol. 6, No. 12, 2014.

Free access

Back propagation is one of the best known training algorithms for multilayer perceptrons. However, its rate of convergence tends to be relatively slow, which in turn makes it computationally expensive. Over the years, many modifications have been proposed to improve the efficiency and convergence speed of the back propagation algorithm. The main emphasis of this paper is on investigating the performance of improved versions of the back propagation algorithm in training neural networks. All of them are assessed on different training sets and a comparative analysis is made. Results of computer simulations on standard benchmark problems such as XOR, 3-bit parity, modified XOR, and IRIS are presented. The training performance of these algorithms is evaluated in terms of percentage of accuracy and convergence speed.
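
To make the kind of modification surveyed here concrete, the sketch below trains a 2-2-1 perceptron on the XOR benchmark named in the abstract, using back propagation with a momentum term (one of the compared speed-up techniques). This is a minimal illustration, not the authors' implementation; the learning rate, momentum coefficient, initialization range, and stopping threshold are assumptions chosen for the example.

```python
# Minimal back propagation with momentum on the XOR benchmark.
# Hyperparameters below are illustrative assumptions, not values from the paper.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# XOR training set: 4 patterns, 2 inputs, 1 target each.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)

# 2-2-1 network; biases folded in as an extra weight row per layer.
W1 = rng.uniform(-0.5, 0.5, (3, 2))   # input -> hidden
W2 = rng.uniform(-0.5, 0.5, (3, 1))   # hidden -> output

eta, alpha = 0.5, 0.9                 # learning rate and momentum (assumed)
dW1_prev = np.zeros_like(W1)
dW2_prev = np.zeros_like(W2)

for epoch in range(10000):
    # Forward pass (append a constant 1 to each layer's inputs for the bias).
    Xb = np.hstack([X, np.ones((4, 1))])
    H = sigmoid(Xb @ W1)
    Hb = np.hstack([H, np.ones((4, 1))])
    Y = sigmoid(Hb @ W2)

    # Backward pass: delta rule for the standard sum-of-squares error.
    delta_out = (T - Y) * Y * (1 - Y)
    delta_hid = (delta_out @ W2[:2].T) * H * (1 - H)

    # Momentum update: current gradient step plus a fraction of the last step.
    dW2 = eta * Hb.T @ delta_out + alpha * dW2_prev
    dW1 = eta * Xb.T @ delta_hid + alpha * dW1_prev
    W2 += dW2
    W1 += dW1
    dW1_prev, dW2_prev = dW1, dW2

    if np.mean((T - Y) ** 2) < 1e-3:  # convergence criterion (assumed)
        print(f"converged after {epoch} epochs")
        break
```

With alpha set to 0, this reduces to plain back propagation; the momentum term simply adds a fraction of the previous weight change, which damps oscillation and typically cuts the epoch count on XOR-style problems.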


Keywords: ANN, gain, momentum, error saturation, local minima

Short address: https://sciup.org/15014712

IDR: 15014712

References: Comparative Study of High Speed Back-Propagation Learning Algorithms

  • D.E. Rumelhart, G.E. Hinton, and R.J. Williams, "Learning internal representations by error propagation," in Parallel Distributed Processing: Explorations in the Microstructure of Cognition (D. Rumelhart and J. McClelland, eds.), pp. 318-362, 1986.
  • D.E. Rumelhart, G.E. Hinton, and R.J. Williams, "Learning representations by back-propagating errors," Nature, vol. 323, pp. 533-536, 1986.
  • D.J. Swanston, J.M. Bishop, and R.J. Mitchell, "Simple adaptive momentum: new algorithm for training multilayer perceptrons," Electronics Letters, vol. 30, no. 18, pp. 1498-1500, 1994.
  • C. Yu and B. Liu, "A backpropagation algorithm with adaptive learning rate and momentum coefficient," Proc. Int. Joint Conf. on Neural Networks (IJCNN'02), vol. 2, pp. 1218-1223, 2002.
  • H.M. Shao and G.F. Zheng, "A new BP algorithm with adaptive momentum for FNNs training," Proc. WRI Global Congress on Intelligent Systems (GCIS'09), vol. 4, pp. 16-20, 2009.
  • S.H. Oh, "Improving the error back-propagation algorithm with a modified error function," IEEE Trans. Neural Networks, vol. 8, no. 3, pp. 799-803, 1997.
  • S.C. Ng, S.H. Leung, and A. Luk, "Fast and global convergent weight evolution algorithm based on the modified back-propagation," Proc. IEEE International Conference on Neural Networks, pp. 3004-3008, 1995.
  • A.V. Ooyen and B. Nienhuis, "Improving the learning convergence of the back propagation algorithm," Neural Networks, vol. 5, pp. 465-471, 1992.
  • H.M. Lee, C.M. Chen, and T.C. Huang, "Learning efficiency improvement of back-propagation algorithm by error saturation prevention method," Neurocomputing, vol. 41, pp. 125-143, 2001.
  • J.Y. Yam and T.W. Chow, "A weight initialization method for improving training speed in feedforward neural network," Neurocomputing, vol. 30, pp. 219-232, 2000.
  • T. Masters, Practical Neural Network Recipes in C++, Academic Press, Boston, 1993.
  • D. Nguyen and B. Widrow, "Improving the learning speed of 2-layer neural networks by choosing initial values of the adaptive weights," Proc. Int. Joint Conf. on Neural Networks, San Diego, vol. 3, pp. 21-26, 1990.
  • X.G. Wang, Z. Tang, H. Tamura, M. Ishii, and W.D. Sun, "An improved backpropagation algorithm to avoid the local minima problem," Neurocomputing, vol. 56, pp. 455-460, 2004.
  • Y. Bai, H. Zhang, and Y. Hao, "The performance of the backpropagation algorithm with varying slope of the activation function," Chaos, Solitons and Fractals, vol. 40, pp. 69-77, 2009.
  • N.M. Nawi, R.S. Ransing, and M.R. Ransing, "A new method to improve the gradient based search direction to enhance the computational efficiency of back propagation based Neural Network algorithms," Proc. IEEE Second Asia International Conference on Modelling & Simulation, pp. 546-551, DOI 10.1109/AMS.2008.70, 2008.
  • M. Gori and A. Tesi, "On the problem of local minima in backpropagation," IEEE Trans. Pattern Anal. Mach. Intell., vol. 14, no. 1, pp. 76-86, 1992.
  • H. Ishibuchi, R. Fujioka, and H. Tanaka, "Neural networks that learn from fuzzy if-then rules," IEEE Trans. Fuzzy Syst., vol. 1, no. 2, pp. 85-97, 1993.
  • Saduf and M. Arif Wani, "Comparative study of adaptive learning rate with momentum and resilient back propagation algorithms for neural net classifier optimization," International Journal of Distributed and Cloud Computing, vol. 2, pp. 1-6, 2014.
Research article