Analysis of Parametric & Non Parametric Classifiers for Classification Technique using WEKA

Authors: Yugal Kumar, G. Sahoo

Journal: International Journal of Information Technology and Computer Science (IJITCS)

In issue: No. 7, Vol. 4, 2012.

Free access

In the field of machine learning and data mining, much work has been done to construct new classification techniques/classifiers, and further research is ongoing to construct new classifiers with the help of nature-inspired techniques such as Genetic Algorithms, Ant Colony Optimization, Bee Colony Optimization, Neural Networks, and Particle Swarm Optimization. Many researchers have provided comparative studies/analyses of classification techniques. This paper, however, deals with another form of analysis of classification techniques, namely an analysis of parametric and non-parametric classifiers. The paper identifies the parametric and non-parametric classifiers used in the classification process and provides a tree representation of these classifiers. For the analysis, four classifiers are used, two of which are parametric and the rest non-parametric in nature.
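To make the parametric/non-parametric distinction in the abstract concrete, the following is a minimal sketch (not from the paper, and not using WEKA itself) of a Gaussian Naive Bayes classifier, one of the four classifiers the paper analyzes. It is parametric because it reduces the training data to a fixed set of parameters per class (a prior plus a per-feature mean and variance), whereas a non-parametric method would retain the training points themselves. The toy data are invented for illustration.

```python
import math

def fit(X, y):
    """Estimate per-class priors and per-feature Gaussian parameters.

    The fitted model is just these numbers -- the hallmark of a
    parametric classifier.
    """
    model = {}
    for label in set(y):
        rows = [x for x, t in zip(X, y) if t == label]
        means = [sum(col) / len(rows) for col in zip(*rows)]
        variances = [
            sum((v - m) ** 2 for v in col) / len(rows) + 1e-9  # variance smoothing
            for col, m in zip(zip(*rows), means)
        ]
        model[label] = (len(rows) / len(X), means, variances)
    return model

def predict(model, x):
    """Return the class with the highest log-posterior for point x."""
    def log_post(prior, means, variances):
        ll = math.log(prior)
        for v, m, s2 in zip(x, means, variances):
            # log of the Gaussian density for this feature value
            ll += -0.5 * math.log(2 * math.pi * s2) - (v - m) ** 2 / (2 * s2)
        return ll
    return max(model, key=lambda c: log_post(*model[c]))

# Toy two-class, two-feature data set (illustrative only).
X = [[1.0, 2.1], [1.2, 1.9], [3.8, 4.0], [4.1, 3.9]]
y = ["a", "a", "b", "b"]
m = fit(X, y)
print(predict(m, [1.1, 2.0]))  # -> a
print(predict(m, [4.0, 4.0]))  # -> b
```

In WEKA the corresponding classifier is `weka.classifiers.bayes.NaiveBayes`; the sketch above only mirrors its Gaussian-likelihood behaviour for numeric attributes.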


Keywords: Bayesian Net, Logistic Regression, Multilayer Perceptron, Naive Bayes

Short address: https://sciup.org/15011715

IDR: 15011715

References: Analysis of Parametric & Non Parametric Classifiers for Classification Technique using WEKA

  • Sarle, Warren S. (1994), “Neural Networks and Statistical Models”, Proceedings of the Nineteenth Annual SAS Users Group International Conference, April, pp. 1-13.
  • S. H. Musavi and M. Golabi (2008), “Application of Artificial Neural Networks in the River Water Quality Modeling: Karoon River, Iran”, Journal of Applied Sciences, Asian Network for Scientific Information, pp. 2324-2328.
  • M. J. Diamantopoulou, V. Z. Antonopoulos and D. M. Papamichai (Jan 2005), “The Use of a Neural Network Technique for the Prediction of Water Quality Parameters of Axios River in Northern Greece”, Journal of Operational Research, Springer-Verlag, pp. 115-125.
  • Buntine, W. (1991), “Theory refinement on Bayesian networks”, in B. D. D’Ambrosio, P. Smets, & P. P. Bonissone (Eds.), Proceedings of the Seventh Annual Conference on Uncertainty in Artificial Intelligence (pp. 52-60), San Francisco, CA.
  • Daniel Grossman and Pedro Domingos (2004), “Learning Bayesian Network Classifiers by Maximizing Conditional Likelihood”, Proceedings of the 21st International Conference on Machine Learning, Banff, Canada.
  • D. Marquardt (1963), “An Algorithm for Least Squares Estimation of Non-Linear Parameters”, J. Soc. Ind. Appl. Math., vol. 11, pp. 431-441.
  • L. Fausett (1994), “Fundamentals of Neural Networks: Architectures, Algorithms and Applications”, Pearson Prentice Hall, USA.
  • Ian H. Witten and Eibe Frank, Data Mining: Practical Machine Learning Tools and Techniques, Second Edition, Elsevier.
  • Janikow, C. Z. (1998), “Fuzzy decision trees: issues and methods”, IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, 28(1): 1-14.
  • Dutton, D. & Conroy, G. (1996), “A review of machine learning”, Knowledge Engineering Review 12: 341-367.
  • De Mantaras & Armengol, E. (1998), “Machine learning from examples: Inductive and lazy methods”, Data & Knowledge Engineering 25: 99-123.
  • J. R. Quinlan (1986), “Induction of decision trees”, Machine Learning 1, pp. 81-106.
  • J. R. Quinlan (1993), C4.5: Programs for Machine Learning, Morgan Kaufmann, San Francisco.
  • M. S. Hung, M. Hu, M. Shanker (2001), “Estimating breast cancer risks using neural networks”, International Journal of Operational Research 52, 1-10.
  • R. Setiono, L. C. K. Hui (1995), “Use of a quasi-Newton method in a feed-forward neural network construction algorithm”, IEEE Trans. Neural Networks 6(1), pp. 273-277.
  • H. Ishibuchi, K. Nozaki, N. Yamamoto, H. Tanaka (1995), “Selecting fuzzy if-then rules for classification problems using genetic algorithms”, IEEE Trans. Fuzzy Systems 3(3), 260-270.
  • D. B. Fogel, E. C. Wason, E. M. Boughton, V. W. Porto, P. J. Angeline (1998), “Linear and Neural Models for classifying breast masses”, IEEE Trans. on Medical Imaging 17(3), 485-488.
  • G. Fung, O. L. Mangasarian (Oct. 1999), “Semi-supervised support vector machines for unlabeled data classification”, Technical Report, Dept. of Computer Science, University of Wisconsin.
  • C. H. Lee, D. G. Shin (1999), “A multi-strategy approach to classification learning in databases”, Data & Knowledge Engineering 31, 67-93.
  • http://www.mcw.edu/FileLibrary/Groups/Biostatistics/Publicfiles/DataFromSection/DataFromSectionTXT/Data_from_section_1.14.txt
  • Fawcett, T. (2006), “An introduction to ROC analysis”, Pattern Recognition Letters, Vol. 27: 861-874.
  • Melville, P., Yang, S. M., Saar-Tsechansky, M. & Mooney, R. (2005), “Active learning for probability estimation using Jensen-Shannon divergence”, Proceedings of the European Conference on Machine Learning (ECML), pp. 268-279, Springer.
  • Landis, J. R. & Koch, G. G. (1977), “The measurement of observer agreement for categorical data”, Biometrics 33: 159-174.
Scientific article