International Journal of Advanced Computer Research (IJACR) ISSN (P): 2249-7277 ISSN (O): 2277-7970 Vol - 6, Issue - 27, November 2016
A supervised discriminant subspaces-based ensemble learning for binary classification

Hamidullah Binol, Huseyin Cukur and Abdullah Bal

Abstract

Discriminant subspace analysis-based algorithms are widely used for feature extraction and dimensionality reduction in pattern recognition applications. One of the best-known discriminant subspace techniques for two-class recognition is the Fukunaga-Koontz transform (FKT). With the aid of kernel machines, FKT has been extended to a non-linear version, which both captures higher-order statistics of the data and improves its non-linear discrimination ability. The performance of kernel FKT (KFKT), however, depends on the choice of a suitable kernel and its intrinsic parameters. This paper examines the difficulties of ensemble learning with a finite set of base kernels on FKT subspaces and presents a new approach to the problems of multiple kernel learning (MKL) on FKT. In MKL, a better kernel function is designed by combining several pre-chosen kernels, either linearly or non-linearly, within the algorithm. Here, KFKTs with a diverse set of kernels, each with different parameters, are used as sub-learners, and both weighted and unweighted fusions are employed to combine the sub-learners' predictions. The experimental results show that the ensemble of KFKTs clearly outperforms single KFKTs in classification performance.
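The fusion scheme described above can be sketched in code. The following is an illustrative Python sketch, not the authors' implementation: a kernel nearest-class-mean classifier stands in for a KFKT subspace learner, the RBF kernel parameters (gamma values) are arbitrary assumptions, and the sub-learner weights are taken as training accuracies, as one simple realization of weighted fusion.

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf_kernel(X, Z, gamma):
    """Gaussian (RBF) kernel matrix between the rows of X and Z."""
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def fit_nearest_mean(K, y):
    """Per-class statistics for a kernel nearest-class-mean learner.

    Stand-in for a KFKT sub-learner: squared feature-space distance to a
    class mean is k(x,x) - (2/n) sum_i k(x, x_i) + (1/n^2) sum_ij k(x_i, x_j).
    """
    model = {}
    for c in (-1, 1):
        idx = np.flatnonzero(y == c)
        const = K[np.ix_(idx, idx)].sum() / len(idx) ** 2
        model[c] = (idx, const)
    return model

def predict_nearest_mean(K_test, model):
    """K_test: (n_test, n_train) kernel values against the training set."""
    scores = {}
    for c, (idx, const) in model.items():
        # squared distance to the class mean, minus the class-independent k(x,x)
        scores[c] = -2.0 * K_test[:, idx].mean(axis=1) + const
    return np.where(scores[1] < scores[-1], 1, -1)

# toy binary data: two Gaussian blobs labelled -1 / +1
X = np.vstack([rng.normal(-1, 0.6, (40, 2)), rng.normal(1, 0.6, (40, 2))])
y = np.array([-1] * 40 + [1] * 40)

gammas = [0.1, 1.0, 10.0]            # base kernels with different parameters
preds, weights = [], []
for g in gammas:
    K = rbf_kernel(X, X, g)
    m = fit_nearest_mean(K, y)
    p = predict_nearest_mean(K, m)
    preds.append(p)
    weights.append((p == y).mean())  # sub-learner weight = training accuracy

# weighted fusion of the sub-learners' votes (unweighted fusion: set all
# weights to 1)
vote = sum(w * p for w, p in zip(weights, preds))
ensemble_pred = np.where(vote >= 0, 1, -1)
print("ensemble training accuracy:", (ensemble_pred == y).mean())
```

Swapping the per-kernel weights for a constant gives the unweighted (majority-vote) fusion; in the paper's setting each base learner would instead be a KFKT built on its own kernel.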

Keywords

Classification, Ensemble learning, Fukunaga-Koontz transform, Multiple kernel learning.

References

[1] Fukunaga K, Koontz WL. Application of the Karhunen-Loeve expansion to feature selection and ordering. IEEE Transactions on Computers. 1970; 19(4):311-8.

[2] Yang MH, Kriegman DJ, Ahuja N. Detecting faces in images: a survey. IEEE Transactions on Pattern Analysis and Machine Intelligence. 2002; 24(1):34-58.

[3] Bal A, Alam MS. Quadratic correlation filter based target tracking in FLIR image sequences. In Optics & Photonics 2005. International Society for Optics and Photonics.

[4] Mahalanobis A, Muise RR, Stanfill SR, Van Nevel AL. Design and application of quadratic correlation filters for target detection. IEEE Transactions on Aerospace and Electronic Systems. 2004; 40(3):837-50.

[5] Ochilov S, Alam MS, Bal A. Fukunaga-Koontz transform based dimensionality reduction for hyperspectral imagery. In Defense and Security Symposium 2006. International Society for Optics and Photonics.

[6] Liu RM, Liu EQ, Yang J, Zhang TH, Wang FL. Infrared small target detection with kernel Fukunaga-Koontz transform. Measurement Science and Technology. 2007; 18(9):3025.

[7] Li YH, Savvides M. Kernel Fukunaga-Koontz transform subspaces for enhanced face recognition. In 2007 IEEE Conference on Computer Vision and Pattern Recognition (pp. 1-8). IEEE.

[8] Dinç S, Bal A. Hyperspectral image classification using kernel Fukunaga-Koontz transform. Mathematical Problems in Engineering. 2013:1-7.

[9] Binol H, Bilgin G, Dinc S, Bal A. Kernel Fukunaga-Koontz transform subspaces for classification of hyperspectral images with small sample sizes. IEEE Geoscience and Remote Sensing Letters. 2015; 12(6):1287-91.

[10] Binol H, Ochilov S, Alam MS, Bal A. Target oriented dimensionality reduction of hyperspectral data by kernel Fukunaga-Koontz transform. Optics and Lasers in Engineering. 2016.

[11] Binol H, Bal A, Cukur H. Differential evolution algorithm-based kernel parameter selection for Fukunaga-Koontz transform subspaces construction. In SPIE Remote Sensing 2015. International Society for Optics and Photonics.

[12] Bach FR, Lanckriet GR, Jordan MI. Multiple kernel learning, conic duality, and the SMO algorithm. In Proceedings of the Twenty-First International Conference on Machine Learning 2004 (p. 6). ACM.

[13] Sun T, Jiao L, Liu F, Wang S, Feng J. Selective multiple kernel learning for classification with ensemble strategy. Pattern Recognition. 2013; 46(11):3081-90.

[14] Lichman M. UCI machine learning repository. School of Information and Computer Science, University of California, Irvine, CA, USA. http://archive.ics.uci.edu/ml. Accessed 10 June 2016.

[15] Aizerman A, Braverman EM, Rozoner LI. Theoretical foundations of the potential function method in pattern recognition learning. Automation and Remote Control. 1964; 25:821-37.

[16] Shawe-Taylor J, Cristianini N. Kernel methods for pattern analysis. Cambridge University Press; 2004.

[17] Huo X, Elad M, Flesia AG, Muise RR, Stanfill SR, Friedman J, et al. Optimal reduced-rank quadratic classifiers using the Fukunaga-Koontz transform with applications to automated target recognition. In AeroSense 2003 (pp. 59-72). International Society for Optics and Photonics.

[18] Yang J, Frangi AF, Yang JY, Zhang D, Jin Z. KPCA plus LDA: a complete kernel fisher discriminant framework for feature extraction and recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence. 2005; 27(2):230-44.

[19] Kuncheva LI, Whitaker CJ. Measures of diversity in classifier ensembles and their relationship with the ensemble accuracy. Machine Learning. 2003; 51(2):181-207.

[20] Bryll R, Gutierrez-Osuna R, Quek F. Attribute bagging: improving accuracy of classifier ensembles by using random feature subsets. Pattern Recognition. 2003; 36(6):1291-302.

[21] Lee WJ, Verzakov S, Duin RP. Kernel combination versus classifier combination. In International Workshop on Multiple Classifier Systems 2007 (pp. 22-31). Springer Berlin Heidelberg.

[22] Kim HC, Pang S, Je HM, Kim D, Bang SY. Support vector machine ensemble with bagging. In Pattern Recognition with Support Vector Machines 2002 (pp. 397-408). Springer Berlin Heidelberg.

[23] Richards JA. Remote sensing digital image analysis. Springer; 1999.