A new quantum tunneling particle swarm optimization algorithm for training feedforward neural networks
Authors: Geraldine Bessie Amali D., Dinakaran M.
Journal: International Journal of Intelligent Systems and Applications (IJISA)
Article in issue: no. 11, vol. 10, 2018.
Free access
In this paper, a new Quantum Tunneling Particle Swarm Optimization (QTPSO) algorithm is proposed and applied to the training of feedforward Artificial Neural Networks (ANNs). In the classical Particle Swarm Optimization (PSO) algorithm, the value of the cost function at the location of the personal best solution found by each particle cannot increase, which can significantly reduce the explorative ability of the entire swarm. This paper proposes a new PSO algorithm in which the personal best solution of each particle is allowed to tunnel through hills in the cost function, analogous to the tunneling effect in quantum physics. In quantum tunneling, a particle with insufficient energy to cross a potential barrier can still cross it with a small probability that decreases exponentially with the barrier length. The introduction of the quantum tunneling effect allows particles in the PSO algorithm to escape from local minima, thereby increasing the explorative ability of the algorithm and preventing premature convergence. The proposed algorithm significantly outperforms three state-of-the-art PSO variants on a majority of benchmark neural network training problems.
Keywords: Particle Swarm Optimization algorithm, Quantum Tunneling, Artificial Neural Networks, Global Optimization, Nelder Mead, Feedforward Neural Networks
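To make the tunneling idea in the abstract concrete, the following is a minimal Python sketch of a global-best PSO in which a personal best may be replaced by a worse candidate with a small probability that decays exponentially with the size of the cost barrier. The acceptance rule, the parameter alpha, and the function names are illustrative assumptions, not the exact update used in the paper.

import numpy as np

def qtpso_sketch(cost, dim, n=30, iters=200, w=0.7, c1=1.5, c2=1.5, alpha=5.0, seed=0):
    # Hypothetical illustration only: a global-best PSO whose personal-best update
    # additionally accepts worse points with probability exp(-alpha * barrier).
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1.0, 1.0, (n, dim))          # particle positions (e.g. ANN weight vectors)
    v = np.zeros_like(x)                          # particle velocities
    pbest = x.copy()                              # personal best positions
    pbest_f = np.array([cost(p) for p in x])      # personal best costs
    i = int(np.argmin(pbest_f))
    gbest, gbest_f = pbest[i].copy(), pbest_f[i]  # best solution ever found (never tunnels)

    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        f = np.array([cost(p) for p in x])

        improved = f < pbest_f                    # classical rule: personal best only improves
        # Assumed tunneling rule: a worse candidate may still replace the personal best
        # with probability exp(-alpha * barrier), where the barrier is taken here as the
        # increase in cost, so the probability decays exponentially with barrier size.
        barrier = np.maximum(f - pbest_f, 0.0)
        tunnel = (~improved) & (rng.random(n) < np.exp(-alpha * barrier))
        accept = improved | tunnel
        pbest[accept] = x[accept]
        pbest_f[accept] = f[accept]

        j = int(np.argmin(f))                     # keep a separate record of the incumbent optimum
        if f[j] < gbest_f:
            gbest, gbest_f = x[j].copy(), f[j]
    return gbest, gbest_f

For neural network training, cost would map a flattened weight vector to the training error of a feedforward ANN (for example, mean squared error over the training set), so dim equals the total number of weights and biases and each particle position encodes one complete set of network parameters.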
Short URL: https://sciup.org/15016545
IDR: 15016545 | DOI: 10.5815/ijisa.2018.11.07