Matrix-based Kernel Method for Large-scale Data Set

Author: Weiya Shi

Journal: International Journal of Image, Graphics and Signal Processing (IJIGSP)

Issue: No. 2, Vol. 2, 2010.

Free access

In the computation process of many kernel methods, one of the important steps is the formation of the kernel matrix. Because the size of the kernel matrix scales with the size of the data set, it is infeasible to store and compute the kernel matrix for large-scale data sets. To overcome this computational and storage problem, a new framework, the matrix-based kernel method, is proposed. By first dividing the large-scale data set into small subsets, we can treat the autocorrelation matrix of each subset as a special computational unit. A novel polynomial-matrix kernel function is then adopted to compute the similarity between these data matrices in place of individual vectors. The proposed method greatly reduces the size of the kernel matrix, which makes its computation feasible. Its effectiveness is demonstrated by experimental results on artificial and real data sets.
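The framework described in the abstract can be illustrated with a short sketch. The code below is a minimal illustration, not the paper's exact algorithm: it assumes equal-sized subsets, defines each subset's autocorrelation matrix as X_i^T X_i / m, and uses one plausible polynomial-matrix kernel, k(A, B) = (trace(A^T B) + c)^p; the paper's precise kernel definition may differ, and all function names and parameters here are hypothetical.

```python
import numpy as np

def autocorrelation_matrices(X, subset_size):
    """Split the n x d data matrix X into subsets and return each
    subset's d x d autocorrelation matrix (the per-subset unit).
    The subset size m and the 1/m scaling are assumptions."""
    n, d = X.shape
    units = []
    for start in range(0, n, subset_size):
        Xi = X[start:start + subset_size]        # one small subset
        units.append(Xi.T @ Xi / Xi.shape[0])    # d x d autocorrelation matrix
    return units

def polynomial_matrix_kernel(A, B, c=1.0, p=2):
    """Hypothetical polynomial-matrix kernel between two matrices:
    k(A, B) = (trace(A^T B) + c)^p. The paper's exact form may differ."""
    return (np.trace(A.T @ B) + c) ** p

def reduced_kernel_matrix(X, subset_size, c=1.0, p=2):
    """Build the s x s kernel matrix over subsets instead of the
    n x n matrix over individual samples, where s = ceil(n / subset_size)."""
    units = autocorrelation_matrices(X, subset_size)
    s = len(units)
    K = np.empty((s, s))
    for i in range(s):
        for j in range(i, s):  # kernel is symmetric, fill both halves
            K[i, j] = K[j, i] = polynomial_matrix_kernel(units[i], units[j], c, p)
    return K

# Example: 10,000 samples in R^50 split into subsets of 100 yields
# a 100 x 100 kernel matrix instead of a 10,000 x 10,000 one.
X = np.random.randn(10_000, 50)
K = reduced_kernel_matrix(X, subset_size=100)
print(K.shape)  # (100, 100)
```

In this sketch the kernel matrix shrinks from n x n to s x s, where s is the number of subsets, which is the storage and computation reduction the abstract claims.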


Keywords: Matrix, kernel, autocorrelation, computation

Short URL: https://sciup.org/15012033

IDR: 15012033

Research article