BMC Bioinformatics 2011, 12:450. doi:10.1186/1471-2105-12-450
Methodology Article

Random KNN feature selection - a fast and stable alternative to Random Forests

Shengqiao Li (shli@stat.wvu.edu), E James Harner (jharner@stat.wvu.edu) and Donald A Adjeroh (don@csee.wvu.edu)

Department of Statistics, West Virginia University, Morgantown, WV 26506, USA; Health Effects Laboratory Division, National Institute for Occupational Safety and Health, Morgantown, WV 26505, USA; Lane Department of Computer Science and Electrical Engineering, West Virginia University, Morgantown, WV 26506, USA

Copyright ©2011 Li et al; licensee BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Background

Successfully modeling high-dimensional data involving thousands of variables is challenging. This is especially true for gene expression profiling experiments, given the large number of genes involved and the small number of samples available. Random Forests (RF) is a popular and widely used approach to feature selection for such "small n, large p problems." However, Random Forests suffers from instability, especially in the presence of noisy and/or unbalanced inputs.

Results

We present RKNN-FS, an innovative feature selection procedure for "small n, large p problems." RKNN-FS is based on Random KNN (RKNN), a novel generalization of traditional nearest-neighbor modeling. RKNN consists of an ensemble of base k-nearest neighbor models, each constructed from a random subset of the input variables. To rank the importance of the variables, we define a criterion on the RKNN framework, using the notion of support. A two-stage backward model selection method is then developed based on this criterion. Empirical results on microarray data sets with thousands of variables and relatively few samples show that RKNN-FS is an effective feature selection approach for high-dimensional data. RKNN is similar to Random Forests in terms of classification accuracy without feature selection. However, RKNN provides much better classification accuracy than RF when each method incorporates a feature-selection step. Our results show that RKNN is significantly more stable and more robust than Random Forests for feature selection when the input data are noisy and/or unbalanced. Further, RKNN-FS is much faster than the Random Forests feature selection method (RF-FS), especially for large scale problems, involving thousands of variables and multiple classes.

Conclusions

Given the superiority of Random KNN in classification performance when compared with Random Forests, RKNN-FS's simplicity and ease of implementation, and its superiority in speed and stability, we propose RKNN-FS as a faster and more stable alternative to Random Forests in classification problems involving feature selection for high-dimensional datasets.

Background

Selection of a subset of important features (variables) is crucial for modeling high-dimensional data in bioinformatics. For example, microarray gene expression data may include p ≥ 10,000 genes, but the sample size, n, is much smaller, often less than 100. A model cannot be built directly, since the model complexity exceeds the sample size: linear discriminant analysis, for instance, can only fit a linear model with up to n parameters, and such a model provides a perfect fit to the training data but has no predictive power. This "small n, large p problem" has attracted a lot of research attention, aimed at removing nonessential or noisy features from the data, and thus determining a relatively small number of features which can mostly explain the observed data and the related biological processes.

Though much work has been done, feature selection still remains an active research area. The significant interest is attributed to its many benefits. As enumerated in [1], these include (i) reducing the complexity of computation for prediction; (ii) removing information redundancy (cost savings); (iii) avoiding the issue of overfitting; and (iv) easing interpretation. In general, the generalization error becomes lower as fewer features are included, and the higher the number of samples per feature, the better. This is sometimes referred to as the Occam's razor principle [2]. Here we give a brief summary of feature selection. For a recent review, see [3]. Basically, feature selection techniques can be grouped into three classes.

Class I: Internal variable selection. This class mainly consists of Decision Trees (DT) [4], in which a variable is selected and split at each node by maximizing the purity of its descendant nodes. The variable selection process is done in the tree building process. The decision tree has the advantage of being easy to interpret, but it suffers from the instability of its hierarchical structures. Errors from ancestors pass to multiple descendant nodes and thus have an inflated effect. Even worse, a minor change in the root may change the tree structure significantly. An improved method based on decision trees is Random Forests [5], which grows a collection of trees by bootstrapping the samples and using a random selection of the variables. This approach decreases the prediction variance of a single tree. However, Random Forests may not remove certain variables, as they may appear in multiple trees. But Random Forests also provides a variable ranking mechanism that can be used to select important variables.

Class II: Variable filtering. This class encompasses a variety of filters that are principally used for the classification problem. A specific type of model may not be invoked in the filtering process. A filter is a statistic defined on a random variable over multiple populations. With the choice of a threshold, some variables can be removed. Such filters include t-statistics, F-statistics, Kullback-Leibler divergence, Fisher's discriminant ratio, mutual information [6], information-theoretic networks [7], maximum entropy [8], maximum information compression index [9], relief [10,11], correlation-based filters [12,13], relevance and redundancy analysis [14], etc.

Class III: Wrapped methods. These techniques wrap a model into a search algorithm [15,16]. This class includes forward/backward and stepwise selection using a defined criterion, for instance, partial F-statistics, Akaike's Information Criterion (AIC) [17], Bayesian Information Criterion (BIC) [18], etc. In [19], sequential projection pursuit (SPP) was combined with partial least squares (PLS) analysis for variable selection. Wrapped feature selection based on Random Forests has also been studied [20,21]. There are two measures of importance for the variables with Random Forests, namely, mean decrease accuracy (MDA) and mean decrease Gini (MDG). Both measures are, however, biased [22]. One study shows that MDG is more robust than MDA [23]; however, another study shows the contrary [24]. Our experiments show that both measures give very similar results, so in this paper we present results only for MDA. The software package varSelRF in R, developed in [21], is used in this paper for comparisons. We call this method RF-FS, or simply RF when there is no confusion. Given the hierarchical structure of the trees in the forest, stability is still a problem.

The advantage of the filter approaches is that they are simple to compute and very fast. They are good for pre-screening, rather than building the final model. Conversely, wrapped methods are suitable for building the final model, but are generally slower.

Recently, Random KNN (RKNN), which is specially designed for classification with high-dimensional datasets, was introduced in [25]. RKNN is a generalization of the k-nearest neighbor (KNN) algorithm [26-28] and therefore enjoys the many advantages of KNN. In particular, KNN is a nonparametric classification method: it does not assume any parametric form for the distribution of the measured random variables. Due to the flexibility of the nonparametric model, it is usually a good classifier for many situations in which the joint distribution is unknown, or hard to model parametrically. This is especially the case for high-dimensional datasets. Another important advantage of KNN is that missing values can be easily imputed [29,30]. Troyanskaya et al. [30] also showed that KNN is generally more robust and more sensitive compared with other popular classifiers. In [25] it was shown that RKNN leads to a significant performance improvement in terms of both computational complexity and classification accuracy. In this paper, we present a novel feature selection method, RKNN-FS, using the new classification and regression method, RKNN. Our empirical comparison with the Random Forests approach shows that RKNN-FS is a promising approach to feature selection for high-dimensional data.

Methods

Random KNN

The idea of Random KNN is motivated by the technique of Random Forests, and is similar in spirit to the method of random subspace selection used for Decision Forests [31]. Both Random Forests and Decision Forests [31] use decision trees as the base classifiers. In contrast, Random KNN uses KNN as the base classifier, with no hierarchical structure involved. Compared with decision trees, KNN is simple to implement and is stable [32]. Thus, Random KNN can be stabilized with a small number of base KNNs, and hence only a small number of important variables will be needed. This implies that the final model with Random KNN will be simpler than that with Random Forests or Decision Forests. Specifically, a collection of r different KNN classifiers is generated, each taking a random subset of the input variables. Since KNN is stable, bootstrapping is not necessary. Each KNN classifier classifies a test point by the majority, or weighted majority, class of its k nearest neighbors. The final classification in each case is then determined by majority voting over the r KNN classifications. This can be viewed as a sort of "majority of majorities" voting.

More formally, let F = {f1, f2, ..., fp} be the p input features, and let X be the n original input data vectors of length p, i.e., an n × p matrix. For a given integer m < p, denote by F(m) = {fj1, fj2, ..., fjm | fjl ∈ F, 1 ≤ l ≤ m} a random subset drawn from F with equal probability.

Similarly, let X(m) be the data vectors in the subspace defined by F(m), i.e., an n × m matrix. Then a KNN(m) classifier is constructed by applying the basic KNN algorithm to the random collection of features in X(m). A collection of r such base classifiers is then combined to build the final Random KNN classifier.
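To make the construction concrete, the following is a minimal R sketch of such an ensemble, assuming the class package's knn() as the base classifier. The function name rknn_predict and its defaults are illustrative assumptions of this sketch, not the authors' released implementation.

library(class)   # provides the base knn() classifier

rknn_predict <- function(train, test, cl, r = 500,
                         m = floor(sqrt(ncol(train))), k = 1) {
  p <- ncol(train)
  # each of the r base classifiers votes on every test case
  votes <- replicate(r, {
    feats <- sample.int(p, m)                       # random subset of m features
    as.character(knn(train[, feats, drop = FALSE],
                     test[, feats, drop = FALSE], cl, k = k))
  })
  votes <- matrix(votes, nrow = nrow(test))         # one row per test case, one column per base KNN
  # final class: majority vote over the r base KNN classifications
  apply(votes, 1, function(v) names(which.max(table(v))))
}

# Illustrative call (not from the paper):
# tr <- sample(nrow(iris), 100)
# rknn_predict(as.matrix(iris[tr, 1:4]), as.matrix(iris[-tr, 1:4]),
#              iris$Species[tr], r = 200, m = 2, k = 3)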

Feature support - a ranking criterion

In order to select a subset of variables that have classification capability, the key is to define some criterion to rank the variables. We define a measure called support. Each feature f will appear in some KNN classifiers, say, the set C(f) of size M, where M is the multiplicity of f. In turn, each classifier c ∈ C(f) is an evaluator of its m features, say, the set F(c), and we can take its accuracy as a performance measure for those features. The mean accuracy of these KNN classifiers (the support) is a measure of the feature's relevance to the outcome. Thus we have a ranking of the features. We call this scheme bidirectional voting: each feature randomly participates in a series of KNNs to cast a vote for classification, and in turn, each classification result casts a vote for each participating feature. The algorithm is listed in Table 1. A schematic diagram of the bidirectional voting procedure is shown in Figure 1.

Table 1. Computing feature supports using Random KNN bidirectional voting

/* Generate r KNN classifiers, each using m features, and compute the accuracy acc for each KNN */
/* Return the support for each feature */
p ← number of features in the data set;
m ← number of features for each KNN;
r ← number of KNN classifiers;
Fi ← feature list for the ith KNN classifier;
C ← build r KNNs, using m features for each;
Classify the query subset with each base KNN;
Compare predicted values with observed values;
Calculate the accuracy, acc, for each base KNN;
F ← ∪(i=1..r) Fi; /* F is the set of features that appear in the r KNN classifiers */
for each f ∈ F do
C(f) ← list of KNN classifiers that used f;
support(f) ← (1/|C(f)|) Σ(knn ∈ C(f)) acc(knn);
end for

Figure 1. Bidirectional voting using Random KNN.

To compute feature supports, the data are partitioned into base and query subsets. Two partition methods may be used: (1) dynamic partition: for each KNN, the cases are randomly partitioned, with one half forming the base subset and the other half the query subset; (2) fixed partition: the data set is partitioned once, and the same base subset and query subset are used for all KNNs. For diversity of the KNNs, the dynamic partition is preferred.
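The support computation itself can be sketched in a few lines of R under the dynamic-partition scheme just described. Again the class package supplies the base KNN; the function name rknn_support and its defaults are illustrative assumptions rather than the authors' code.

library(class)

rknn_support <- function(x, y, r = 500, m = floor(sqrt(ncol(x))), k = 1) {
  p <- ncol(x); n <- nrow(x)
  acc_sum <- numeric(p)                     # accumulated accuracy votes per feature
  used    <- numeric(p)                     # multiplicity M of each feature
  for (i in seq_len(r)) {
    feats <- sample.int(p, m)               # random feature subset for this base KNN
    base  <- sample.int(n, floor(n / 2))    # dynamic partition: random base half
    pred  <- knn(x[base, feats, drop = FALSE],
                 x[-base, feats, drop = FALSE], y[base], k = k)
    acc   <- mean(pred == y[-base])         # accuracy of this base KNN on the query half
    acc_sum[feats] <- acc_sum[feats] + acc  # the classifier votes back for its features
    used[feats]    <- used[feats] + 1
  }
  support <- ifelse(used > 0, acc_sum / used, NA)   # mean accuracy over the KNNs that used f
  names(support) <- colnames(x)
  support
}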

Support is an importance measure: the higher the support, the more relevant the feature. Figure 2 shows the 30 most relevant genes determined using the support criterion for Golub's 38 leukemia training samples, for both fixed and dynamic partitions. The dataset is available in the R package golubEsets.

Figure 2. Supports for the 30 most relevant genes using the Golub leukemia data (left panel: dynamic partition; right panel: fixed partition of the data for testing and training).

RKNN feature selection algorithm

With feature supports, we can directly select high-rank features after running the support algorithm on the entire data set; we call this direct selection. But this simple approach may be too aggressive and risky for high-dimensional data. We therefore take a more conservative and safer approach, namely multiple rounds of screening; that is, we recursively apply the direct selection procedure. To balance speed and classification performance, we split the recursion into two stages. The first stage is fast, and the number of variables is reduced by a given ratio (1/2 by default) at each iteration; this stage is a geometric elimination process, since the number of features kept forms a geometric progression. In the second stage, a fixed number of features (one by default) is dropped each time; this is a linear reduction process. Finally, a relatively small set of variables is selected for the final models. To aid this recursive procedure, another assessment criterion for a set of features is required; we use the average accuracy of the r random KNNs. After the first stage, we can plot the average accuracies against the number of features. The iteration just before the maximum accuracy is reached is called the pre-max iteration, and the feature set from the pre-max iteration is the input for the second-stage selection. The algorithm is shown in Table 2.

Table 2. Two-stage variable backward elimination procedure for Random KNN

Stage 1: Geometric Elimination
q ← proportion of features to be dropped at each iteration;
p ← number of features in the data;
ni ← ⌈ln(4/p) / ln(1 - q)⌉; /* number of iterations; minimum dimension 4 */
initialize rknn_list[ni]; /* stores the feature supports for each Random KNN */
initialize acc[ni]; /* stores the accuracy for each Random KNN */
for i from 1 to ni do
 if i == 1 then
  rknn ← compute supports via Random KNN from all variables of data;
else
  p ← p(1 - q);
  rknn ← compute supports via Random KNN from the p top important variables of rknn;
end if
rknn_list[i] ← rknn;
acc[i] ← accuracy of rknn;
end for
max ← argmax(1 ≤ k ≤ ni) acc[k];
pre_max ← max - 1;
rknn ← rknn_list[pre_max]; /* This Random KNN goes to stage 2 */
Stage 2: Linear Reduction
d ← number of features to be dropped at each iteration;
p ← number of variables of rknn;
ni ← ⌈(p - 4)/d⌉; /* number of iterations */
for i from 1 to ni do
if i ≠ 1 then
  p ← p - d;
end if
rknn ← compute supports via Random KNN from the p top important variables of rknn;
acc[i] ← accuracy of rknn;
rknn_list[i] ← rknn;
end for
best ← argmax(1 ≤ k ≤ ni) acc[k];
best_rknn ← rknn_list[best]; /* This gives the final Random KNN model */
return best_rknn;
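A compact R sketch of the stage-one geometric elimination loop, written on top of the rknn_support() sketch given earlier, is shown below. The proportion q and the stopping dimension of 4 follow the pseudocode in Table 2; the helper name rknn_stage1 is ours, and the mean support is used here as a simple proxy for the mean accuracy of the r base KNNs.

rknn_stage1 <- function(x, y, q = 0.5, r = 500, k = 1) {
  stopifnot(!is.null(colnames(x)))          # features are tracked by name
  keep <- colnames(x)
  acc_trace <- numeric(0)
  sets <- list()
  while (length(keep) >= 4) {
    sup <- rknn_support(x[, keep, drop = FALSE], y, r = r,
                        m = max(1, floor(sqrt(length(keep)))), k = k)
    acc_trace <- c(acc_trace, mean(sup, na.rm = TRUE))   # assessment criterion per iteration
    sets[[length(sets) + 1]] <- keep
    ord  <- order(sup, decreasing = TRUE, na.last = TRUE)
    keep <- keep[ord][seq_len(ceiling(length(keep) * (1 - q)))]  # drop proportion q of the lowest-support features
  }
  pre_max <- max(which.max(acc_trace) - 1, 1)            # iteration just before the maximum
  list(features = sets[[pre_max]], accuracy = acc_trace) # input for the stage-two search
}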

This procedure was applied to Golub's leukemia datasets. Figure 3 shows the variation of mean accuracy with decreasing number of features in the first stage of feature selection. Figure 4 shows the variation of mean accuracy with decreasing number of features in the second stage. From Figure 4, a maximum mean accuracy is reached when 4 genes are left in the model. These final four genes selected for leukemia classification are: X95735_at, U27460_at, M27891_at and L09209_s_at. Using these four genes and the ordinary KNN classifier (k = 3) to classify the 34 independent test samples, 18 of 20 ALL cases are correctly classified and 13 of 14 AML cases are correctly classified. Total accuracy is 91%. This model is very simple compared with others that use far more genes.

Figure 3. Mean accuracy change with the number of features for the Golub leukemia data in the first stage.

Figure 4. Mean accuracy change with the number of features for the Golub leukemia data in the second stage (the feature set with the peak value is selected).

Time complexity

Time complexity for computing feature support

For each KNN, we have the typical time complexity as follows:

• Data Partition: O(n);

• Nearest neighbor searching: O(k²mn log n);

• Classification: O(kn);

• Computing accuracy: O(n).

Adding the above four items together, the time needed for one KNN is O(k²mn log n). For Random KNN, we have r KNNs; thus the total time for the above steps is O(rk²mn log n). Since rm features are used in the Random KNN, the time for computing supports from these accuracies is O(rm). Thus the overall time is O(rk²mn log n) + O(rm) = O(r(m + k²mn log n)) = O(rk²mn log n). Sorting these supports takes O(p log p). Since for most applications log p < n log n and p < rk²m, the time complexity for computing and ranking feature supports remains O(rk²mn log n).

Time complexity for feature selection

In stage one, the number of features decreases geometrically with proportion q. For simplicity, let us take m to be the square root of p and keep r fixed. The sum of the component 2m over the iterations is then 2√p + 2√(pq) + 2√(pq²) + 2√(pq³) + 2√(pq⁴) + .... The first term is dominant, since q is a fraction. Thus the time complexity will be in O(rk²√p · n log n).
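The geometric-series bound used in this argument can be written out explicitly (our notation); for 0 < q < 1 the series converges and the first term gives the order:

Σ(i ≥ 0) √(p qⁱ) = √p · Σ(i ≥ 0) (√q)ⁱ = √p / (1 - √q) = O(√p).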

In stage two, a fixed number of features is removed each time. In the extreme case where only one feature is removed per iteration, the total time will be in O(rk²p1^(3/2) · n log n), where p1 is the number of features at the start of stage two, and usually p1 < p^(1/2). So on average, we have time in O(rk²p1^(3/2) · n log n) = O(rk²√p · n log n).

Therefore, the total time for the entire algorithm will be in O(rk²√p · n log n), the same as that for using Random KNN for classification at m = √p. Thus, in theory, feature selection does not degrade the complexity of Random KNN. With m = log p, we obtain a time complexity in O(rkpn log n). This is significant, as it means that with an appropriate choice of m, we can essentially turn the exponential time complexity of feature selection into linear time with respect to p, the number of variables.

Parameter setting

Random KNN has three parameters: the number of nearest neighbors, k; the number of random KNNs, r; and the number of features for each base KNN, m. For "small n, large p" datasets, k should be small, such as 1 or 3 (see Figure 5), since the similarities among data points are related to the nearness among them. For m, we recommend m = √p in order to maximize the difference between feature subsets [25]. Performance generally improves with increasing r; however, beyond a point, larger values of r may not lead to much further improvement (see Figure 6 for experimental results). Beyond r = 1000, there is not much added advantage with respect to classification accuracy.
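As a small illustration of these recommendations, a helper along the following lines could be used to set the defaults; the function name and the default r = 1000 are our own choices for this sketch, not part of the published method.

rknn_params <- function(p, k = 1, r = 1000) {
  list(k = k,                   # small neighborhood, e.g. 1 or 3
       m = floor(sqrt(p)),      # recommended feature subset size, m = sqrt(p)
       r = r)                   # number of base KNN classifiers
}
# rknn_params(10000)   # gives k = 1, m = 100, r = 1000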

Figure 5. The effect of k.

Figure 6. The effect of r, the number of KNN base classifiers.

Results and discussion

Microarray datasets

To evaluate the performance of the proposed RKNN-FS, we performed experiments on 21 microarray gene expression datasets (Tables 3 and 4). Ten of them were previously used to test the performance of Random Forests in gene selection [21]; these are available at http://ligarto.org/rdiaz/Papers/rfVS/randomForestVarSel.html. The other eleven were downloaded from http://www.gems-system.org. Some datasets are from the same studies but used different preprocessing routines, and thus the dimensionalities are different. These datasets are for gene profiling of various human cancers. The number of genes ranges from 2,000 to 15,009, and the number of classes ranges from 2 to 26.

Table 3. Microarray gene expression datasets, Group I

Dataset | Sample size, n | No. of genes, p | No. of classes, c | p/n | p*c/n
Ramaswamy | 308 | 15009 | 26 | 49 | 1267
Staunton | 60 | 5726 | 9 | 95 | 859
Nutt | 50 | 10367 | 4 | 207 | 829
Su | 174 | 12533 | 11 | 72 | 792
NCI60 | 61 | 5244 | 8 | 86 | 688
Brain | 42 | 5597 | 5 | 133 | 666
Armstrong | 72 | 11225 | 3 | 156 | 468
Pomeroy | 90 | 5920 | 5 | 66 | 329
Bhattacharjee | 203 | 12600 | 5 | 62 | 310
Adenocarcinoma | 76 | 9868 | 2 | 130 | 260
Golub | 72 | 5327 | 3 | 74 | 222
Singh | 102 | 10509 | 2 | 103 | 206

Table 4. Microarray gene expression datasets, Group II

Dataset | Sample size, n | No. of genes, p | No. of classes, c | p/n | p*c/n
Lymphoma | 62 | 4026 | 3 | 65 | 195
Leukemia | 38 | 3051 | 2 | 80 | 161
Breast.3.Classes | 95 | 4869 | 3 | 51 | 154
SRBCT | 63 | 2308 | 4 | 37 | 147
Shipp | 77 | 5469 | 2 | 71 | 142
Breast.2.Classes | 77 | 4869 | 2 | 63 | 126
Prostate | 102 | 6033 | 2 | 59 | 118
Khan | 83 | 2308 | 4 | 28 | 111
Colon | 62 | 2000 | 2 | 32 | 65

Classwise sample sizes range from 2 to 139 (i.e., some datasets are unbalanced). The ratio of the number of genes, p, to the sample size, n, reflects the difficulty of a dataset and is listed in the tables. The number of classes, c, has a similar effect on the classification problem. Thus, collectively, the quantity (p/n) * c is included in the tables as another measure of the complexity of the classification problem for each dataset. Based on this, we divided the datasets into two groups: Group I, those with relatively high values of (p/n) * c (corresponding to relatively more complex classification problems), and Group II, those with relatively low values (corresponding to datasets that present relatively simpler classification problems). We have organized our results around this grouping scheme.

Evaluation methods

In this study, we compare Random KNN with Random Forests, since both are ensemble methods; the difference is the base classifier. We perform leave-one-out cross-validation (LOOCV) to obtain classification accuracies. LOOCV provides unbiased estimates of the generalization error for stable classifiers such as KNN [33]. With LOOCV, we can also evaluate the effect of a single sample, i.e., the stability of a classifier. When feature selection is involved, the LOOCV is "external": feature selection is done n times, separately for each set of n - 1 cases. The number of base classifiers for Random KNN and Random Forests is set to 2,000. The number of variables for each base classifier is set to the square root of the total number of variables of the input dataset. Both k = 1 (R1NN) and k = 3 (R3NN) are evaluated for Random KNN.
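The external LOOCV protocol can be sketched as follows in R. Here select_features() and classify() are placeholders for any selection routine (for example, the two-stage procedure sketched earlier) and any classifier; they are assumptions of this illustration rather than the code used in the study.

external_loocv <- function(x, y, select_features, classify) {
  n <- nrow(x)
  pred <- character(n)
  for (i in seq_len(n)) {
    # feature selection is repeated on the n - 1 training cases only
    feats   <- select_features(x[-i, , drop = FALSE], y[-i])
    pred[i] <- as.character(classify(x[-i, feats, drop = FALSE],
                                     x[i, feats, drop = FALSE], y[-i]))
  }
  mean(pred == as.character(y))   # external LOOCV accuracy
}

# Illustrative call:
# external_loocv(x, y,
#   select_features = function(x, y) rknn_stage1(x, y)$features,
#   classify = function(tr, te, cl) class::knn(tr, te, cl, k = 3))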

Performance comparison without feature selection

Random Forests and Random KNN were applied to the two groups of datasets using all genes available. The results (data not shown) indicate that Random Forests was nominally better than Random KNN on 11 datasets, while Random KNN was nominally better than Random Forests on 9 datasets; they tied on one dataset. Using the p-values from the McNemar test [34], Random Forests was not significantly better than R1NN on any of the datasets, while R1NN was significantly better than Random Forests on the NCI data, and Random Forests was significantly better than R3NN on two datasets. Using the average accuracies, no significant difference was observed in Group I (0.80 for RF, 0.81 for R1NN, 0.78 for R3NN) or in Group II (0.86 for RF, 0.84 for R1NN, 0.86 for R3NN). Therefore, from the tests on the 21 datasets, we may conclude that without feature selection, Random KNN is generally equivalent to Random Forests in classification performance.
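The paired comparison reported here uses McNemar's test [34] on the LOOCV predictions of the two classifiers. A minimal sketch in R, assuming two logical vectors marking which cases each method classified correctly, is:

compare_mcnemar <- function(rf_correct, rknn_correct) {
  # 2 x 2 table of correct/incorrect outcomes on the same LOOCV cases;
  # McNemar's test is driven by the discordant (off-diagonal) counts
  tab <- table(factor(rf_correct,   levels = c(FALSE, TRUE)),
               factor(rknn_correct, levels = c(FALSE, TRUE)))
  mcnemar.test(tab)   # stats::mcnemar.test
}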

Performance comparison with feature selection

The proposed feature selection approach using Random KNN was applied to the 21 datasets and compared with Random Forests. The proportion of features removed at each iteration was set to 0.2 for both RKNN-FS and RF-FS (since the second stage is essentially fine-tuning, only stage one was used for the comparison to save time); the other parameter settings are the same as in the previous section. The results are shown in Tables 5 and 6. The indicated results are the mean, standard deviation, and coefficient of variation recorded from the individual executions of the leave-one-out cross-validation (LOOCV) procedure. In one case in the more complex datasets of Group I (Adenocarcinoma), RF was better than R3NN in both classification accuracy and stability, but R1NN provided performance similar to RF in both stability and classification accuracy. In another case in Group I (Brain), RF was slightly better than RKNN-FS in classification accuracy, but much worse in stability of classification accuracy. In just one case in the simpler datasets of Group II (Prostate), RF-FS was better than both R1NN and R3NN in both classification accuracy and stability. They had a virtual tie on one dataset (Leukemia). On all the other datasets (17 out of 21), RKNN-FS was better in both classification rate and in stability of the classification rates. RKNN-FS showed much more significant performance improvements over RF on the more complex datasets of Group I. From the tables, one can observe the general trend: the performance improvement of RKNN-FS over RF increases with increasing dataset complexity (though not necessarily monotonically).

Table 5. Comparative performance with gene selection, Group I

Dataset | p*c/n | Mean accuracy (RF / R1NN / R3NN) | Standard deviation (RF / R1NN / R3NN) | Coefficient of variation (RF / R1NN / R3NN)
Ramaswamy | 1267 | 0.577 / 0.726 / 0.704 | 0.019 / 0.013 / 0.013 | 3.231 / 1.775 / 1.796
Staunton | 859 | 0.561 / 0.692 / 0.663 | 0.042 / 0.026 / 0.031 | 7.485 / 3.802 / 4.669
Nutt | 829 | 0.671 / 0.903 / 0.834 | 0.051 / 0.030 / 0.031 | 7.619 / 3.268 / 3.674
Su | 792 | 0.862 / 0.901 / 0.888 | 0.016 / 0.015 / 0.014 | 1.884 / 1.624 / 1.567
NCI | 688 | 0.813 / 0.854 / 0.836 | 0.033 / 0.027 / 0.023 | 4.083 / 3.135 / 2.796
Brain | 666 | 0.969 / 0.958 / 0.940 | 0.025 / 0.013 / 0.018 | 2.574 / 1.323 / 1.875
Armstrong | 468 | 0.936 / 0.993 / 0.980 | 0.020 / 0.009 / 0.013 | 2.166 / 0.938 / 1.345
Pomeroy | 329 | 0.858 / 0.933 / 0.863 | 0.025 / 0.016 / 0.017 | 2.892 / 1.762 / 1.991
Bhattacharjee | 310 | 0.934 / 0.956 / 0.954 | 0.015 / 0.006 / 0.006 | 1.572 / 0.620 / 0.618
Adenocarcinoma | 260 | 0.942 / 0.939 / 0.859 | 0.018 / 0.017 / 0.032 | 1.948 / 1.808 / 3.675
Golub | 222 | 0.943 / 0.986 / 0.986 | 0.022 / 0.003 / 0.004 | 2.328 / 0.289 / 0.369
Singh | 206 | 0.889 / 0.952 / 0.931 | 0.024 / 0.014 / 0.018 | 2.718 / 1.427 / 1.920
Average | | 0.830 / 0.899 / 0.870 | 0.026 / 0.016 / 0.018 | 3.375 / 1.814 / 2.191

Table 6. Comparative performance with gene selection, Group II

Dataset | p*c/n | Mean accuracy (RF / R1NN / R3NN) | Standard deviation (RF / R1NN / R3NN) | Coefficient of variation (RF / R1NN / R3NN)
Lymphoma | 195 | 0.993 / 1.000 / 1.000 | 0.012 / 0.000 / 0.000 | 1.162 / 0.000 / 0.000
Leukemia | 161 | 1.000 / 0.999 / 0.999 | 0.000 / 0.006 / 0.004 | 0.000 / 0.596 / 0.427
Breast.3.class | 154 | 0.778 / 0.793 / 0.761 | 0.024 / 0.037 / 0.035 | 3.023 / 4.665 / 4.639
SRBCT | 147 | 0.982 / 0.998 / 0.996 | 0.010 / 0.005 / 0.007 | 0.967 / 0.470 / 0.684
Shipp | 142 | 0.865 / 0.997 / 0.991 | 0.033 / 0.008 / 0.011 | 3.757 / 0.800 / 1.077
Breast.2.class | 126 | 0.838 / 0.841 / 0.822 | 0.024 / 0.052 / 0.042 | 2.894 / 6.206 / 5.049
Prostate | 118 | 0.947 / 0.941 / 0.917 | 0.007 / 0.011 / 0.016 | 0.703 / 1.154 / 1.701
Khan | 111 | 0.985 / 0.994 / 0.994 | 0.006 / 0.006 / 0.008 | 0.643 / 0.608 / 0.809
Colon | 65 | 0.894 / 0.944 / 0.910 | 0.010 / 0.013 / 0.025 | 1.163 / 1.337 / 2.733
Average | | 0.920 / 0.945 / 0.932 | 0.014 / 0.015 / 0.016 | 1.590 / 1.760 / 1.902
Stability

The tables above also show the standard deviation and coefficient of variation (multiplied by 100) of the classification accuracy of RKNN-FS and RF-FS on each dataset. The tables clearly show that RKNN-FS is much more stable with respect to classification accuracy than RF-FS. As with classification accuracy itself, the improvement in stability of the accuracy rates over RF-FS also grows with increasing complexity of the dataset. Another way to measure stability is by considering the variability in the size of the selected gene set. At each run of the LOOCV, the size of the best gene set selected by Random KNN and Random Forests for each cross-validation was recorded; the average size and standard deviation are reported in Tables 7 and 8. From these tables, one can see that for some datasets (NCI, Armstrong, Nutt, Pomeroy, Ramaswamy, Staunton and Su), the standard deviation of the best gene set size can be surprisingly large with Random Forests. The standard deviation can be larger than 1000 (Armstrong dataset, with selected feature set sizes ranging from 3 to 7184)! These datasets either have more classes (≥ 4 classes) and/or a large number of genes (p > 10,000), and thus have high p * c/n values. It is also likely that datasets with a larger number of genes contain more noisy genes than those with a smaller number of genes, from which the original investigators had already removed some genes. This shows a striking problem with Random Forests for noisy "small n, large p" datasets: the size of the selected best gene set can change dramatically even when just one data point is changed (by LOOCV). In principle, Random Forests tries to tackle the problem of instability of the tree structure by bootstrapping the data and constructing many trees. However, the above results support the fact that Random Forests is still unstable in the presence of noisy or unbalanced input. See [21,22] for further discussion on the problem of instability in Random Forests. As Table 7 shows, in general the stability of Random KNN is much better than that of Random Forests. Clearly, such a trend can be expected to have some impact on computational requirements: with the stability of RKNN-FS in the size of the selected feature sets, there will also be less variability in its computational requirements. Thus, we recommend Random KNN over Random Forests for gene selection on microarray data.

Table 7. Average gene set size and standard deviation, Group I

Dataset | p*c/n | Mean feature set size (RF / R1NN / R3NN) | Standard deviation (RF / R1NN / R3NN)
Ramaswamy | 1267 | 907 / 336 / 275 | 666 / 34 / 52
Staunton | 859 | 185 / 74 / 60 | 112 / 12 / 11
Nutt | 829 | 146 / 49 / 49 | 85 / 6 / 4
Su | 792 | 858 / 225 / 216 | 421 / 9 / 26
NCI | 688 | 126 / 187 / 163 | 118 / 41 / 33
Brain | 666 | 18 / 137 / 120 | 13 / 42 / 42
Armstrong | 468 | 249 / 76 / 73 | 1011 / 16 / 12
Pomeroy | 329 | 69 / 89 / 82 | 70 / 15 / 13
Bhattacharjee | 310 | 33 / 148 / 146 | 29 / 15 / 10
Adenocarcinoma | 260 | 8 / 38 / 11 | 4 / 20 / 11
Golub | 222 | 12 / 27 / 21 | 8 / 5 / 5
Singh | 206 | 26 / 25 / 13 | 32 / 6 / 6
Average | | 220 / 118 / 102 | 214 / 18 / 19

Table 8. Average gene set size and standard deviation, Group II

Dataset | p*c/n | Mean feature set size (RF / R1NN / R3NN) | Standard deviation (RF / R1NN / R3NN)
Lymphoma | 195 | 75 / 114 / 103 | 30 / 49 / 44
Leukemia | 161 | 2 / 28 / 36 | 0 / 22 / 18
Breast.3.Class | 154 | 47 / 43 / 36 | 35 / 23 / 8
SRBCT | 147 | 49 / 65 / 64 | 50 / 8 / 9
Shipp | 142 | 13 / 46 / 48 | 23 / 9 / 6
Breast.2.Class | 126 | 32 / 23 / 15 | 29 / 16 / 10
Prostate | 118 | 16 / 32 / 15 | 10 / 10 / 11
Khan | 111 | 17 / 67 / 36 | 5 / 11 / 14
Colon | 65 | 21 / 37 / 36 | 18 / 5 / 5
Average | | 30 / 51 / 43 | 22 / 17 / 14
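The stability summaries reported in Tables 5-8 reduce to simple statistics over the LOOCV runs. A small helper of the following form (ours, for illustration) reproduces the three reported quantities from a vector of per-run accuracies or selected gene-set sizes:

stability_summary <- function(v) {
  c(mean = mean(v),
    sd   = sd(v),
    cv   = 100 * sd(v) / mean(v))   # coefficient of variation, multiplied by 100
}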
Time comparison

The computing times for RF-FS and RKNN-FS were recorded and are reported in Tables 9 and 10. For the smaller (less complex) datasets (Group II), RF-FS is faster than RKNN-FS. However, as shown by the time ratios in Figure 7, RKNN-FS is much faster than RF-FS on the large, computationally intensive tasks. For instance, RKNN-FS is 4-5 times faster on datasets with a very large p and many classes (such as Ramaswamy and Staunton). We conclude that Random KNN is more scalable than Random Forests in feature selection. This is important, especially in dealing with the computational burden involved in very high-dimensional datasets. Between R1NN and R3NN, there was little or no difference in execution time, although R1NN was slightly faster.

Table 9. Execution time comparison, Group I

Dataset | p*c/n | Time (min): RF / R1NN / R3NN | Ratio: RF/R1NN / RF/R3NN
Ramaswamy | 1267 | 22335 / 4262 / 4324 | 5.2 / 5.2
Staunton | 859 | 3310 / 744 / 753 | 4.4 / 4.4
Nutt | 829 | 176 / 195 / 195 | 0.9 / 0.9
Su | 792 | 3592 / 1284 / 1279 | 2.8 / 2.8
NCI | 688 | 142 / 177 / 178 | 0.8 / 0.8
Brain | 666 | 92 / 124 / 125 | 0.7 / 0.7
Armstrong | 468 | 327 / 301 / 297 | 1.1 / 1.1
Pomeroy | 329 | 296 / 319 / 320 | 0.9 / 0.9
Bhattacharjee | 310 | 4544 / 1725 / 1733 | 2.6 / 2.6
Adenocarcinoma | 260 | 274 / 272 / 273 | 1.0 / 1.0
Golub | 222 | 160 / 224 / 224 | 0.7 / 0.7
Singh | 206 | 646 / 503 / 498 | 1.3 / 1.3
Total | | 35894 / 10130 / 10199 | 3.54 / 3.52

Table 10. Execution time comparison, Group II

Dataset | p*c/n | Time (min): RF / R1NN / R3NN | Ratio: RF/R1NN / RF/R3NN
Lymphoma | 195 | 57 / 146 / 147 | 0.4 / 0.4
Leukemia | 161 | 18 / 74 / 74 | 0.3 / 0.2
Breast.3.Class | 154 | 310 / 332 / 334 | 0.9 / 0.9
SRBCT | 147 | 97 / 177 / 178 | 0.5 / 0.5
Shipp | 142 | 238 / 293 / 286 | 0.8 / 0.8
Breast.2.Class | 126 | 167 / 221 / 222 | 0.8 / 0.8
Prostate | 118 | 370 / 389 / 391 | 1.0 / 0.9
Khan | 111 | 745 / 452 / 451 | 1.6 / 1.7
Colon | 65 | 75 / 156 / 157 | 0.5 / 0.5
Total | | 2077 / 2240 / 2240 | 0.93 / 0.93

Figure 7. Comparison of execution time between RKNN-FS and RF-FS.

Conclusion

In this paper, we introduced RKNN-FS, a new feature selection method for the analysis of high-dimensional data, based on the novel Random KNN classifier. We performed an empirical study using the proposed RKNN-FS on 21 microarray datasets and compared its performance with the popular Random Forests approach. From our comparative experimental results, we make the following observations: (1) the RKNN-FS method is competitive with the Random Forests feature selection method (and often better) in classification performance; (2) Random Forests can be very unstable under some scenarios (e.g., noise in the input data, or unbalanced datasets), while the Random KNN approach shows much better stability, whether measured by stability in classification rate or by stability in the size of the selected gene set; (3) in terms of processing speed, Random KNN is much faster than Random Forests, especially on the most time-consuming tasks with large p and multiple classes. The concept of KNN is also easier to understand than the decision tree classifier in Random Forests, and easier to implement. We have focused our analysis and comparison on Random Forests, given its popularity and documented superiority in classification accuracy over other state-of-the-art methods [20,21]. Other results on the performance of RF and its variants are reported in [35,36]. In future work, we will perform a comprehensive comparison of the proposed RKNN-FS with these other classification and feature selection schemes, perhaps using larger and more diverse datasets, or on applications different from microarray analysis.

In summary, the RKNN-FS approach provides an effective solution to pattern analysis and modeling with high-dimensional data. In this work, supported by empirical results, we suggest the use of Random KNN as a faster and more stable alternative to Random Forests. The proposed methods have applications whenever one is faced with the "small n, large p problem", a significant challenge in the analysis of high dimensional datasets, such as in microarrays.

Authors' contributions

SL and DAA initiated this project. SL developed the methods with the help of DAA and EJH. SL and DAA analyzed the results from the proposed methods. DAA and EJH oversaw the whole project. All authors read and approved the final manuscript.

Acknowledgements

The authors are grateful to Michael Kashon for his thoughtful comments and discussion. The findings and conclusions in this report are those of the author(s) and do not necessarily represent the views of the National Institute for Occupational Safety and Health. This work was supported in part by a WV-EPSCoR RCG grant.

References

1. Theodoridis S, Koutroumbas K: Pattern Recognition. Academic Press; 2003.
2. Duda RO, Hart PE, Stork DG: Pattern Classification. New York: John Wiley & Sons; 2000.
3. Saeys Y, Inza I, Larrañaga P: A review of feature selection techniques in bioinformatics. Bioinformatics 2007, 23(19):2507-2517.
4. Breiman L, Friedman J, Stone CJ, Olshen R: Classification and Regression Trees. Chapman & Hall/CRC; 1984.
5. Breiman L: Random Forests. Machine Learning 2001, 45:5-32.
6. Al-Ani A, Deriche M, Chebil J: A new mutual information based measure for feature selection. Intelligent Data Analysis 2003, 7:43-57.
7. Last M, Kandel A, Maimon O: Information-theoretic algorithm for feature selection. Pattern Recognition Letters 2001, 22(6-7):799-811.
8. Song GJ, Tang SW, Yang DQ, Wang TJ: A spatial feature selection method based on maximum entropy theory. Journal of Software 2003, 14(9):1544-1550.
9. Mitra P, Murthy C, Pal S: Unsupervised feature selection using feature similarity. IEEE Transactions on Pattern Analysis and Machine Intelligence 2002, 24(3):301-312.
10. Kira K, Rendell LA: A practical approach to feature selection. In ML92: Proceedings of the Ninth International Workshop on Machine Learning. San Francisco, CA: Morgan Kaufmann Publishers Inc; 1992:249-256.
11. Kononenko I, Šimec E, Robnik-Šikonja M: Overcoming the myopia of inductive learning algorithms with RELIEFF. Applied Intelligence 1997, 7:39-55.
12. Hall MA, Smith LA: Feature selection for machine learning: comparing a correlation-based filter approach to the wrapper. In Proceedings of the Twelfth Florida International Artificial Intelligence Research Symposium Conference. Menlo Park, CA: The AAAI Press; 1999:235-239.
13. Whitley D, Ford M, Livingstone D: Unsupervised forward selection: a method for eliminating redundant variables. Journal of Chemical Information and Computer Science 2000, 40(5):1160-1168.
14. Yu L, Liu H: Efficient feature selection via analysis of relevance and redundancy. The Journal of Machine Learning Research 2004, 5:1205-1224.
15. Kohavi R, John GH: Wrappers for feature selection. Artificial Intelligence 1997, 97(1-2):273-324.
16. Blum A, Langley P: Selection of relevant features and examples in machine learning. Artificial Intelligence 1997, 97(1-2):245-271.
17. Akaike H: A new look at the statistical model identification. IEEE Transactions on Automatic Control 1974, 19(6):716-723.
18. Schwarz G: Estimating the dimension of a model. The Annals of Statistics 1978, 6(2):461-464.
19. Zhai HL, Chen XG, Hu ZD: A new approach for the identification of important variables. Chemometrics and Intelligent Laboratory Systems 2006, 80:130-135.
20. Li S, Fedorowicz A, Singh H, Soderholm SC: Application of the random forest method in studies of local lymph node assay based skin sensitization data. Journal of Chemical Information and Modeling 2005, 45(4):952-964.
21. Díaz-Uriarte R, de Andrés SA: Gene selection and classification of microarray data using random forest. BMC Bioinformatics 2006, 7:3.
22. Strobl C, Boulesteix AL, Zeileis A, Hothorn T: Bias in random forest variable importance measures: illustrations, sources and a solution. BMC Bioinformatics 2007, 8:25.
23. Calle ML, Urrea V: Letter to the Editor: Stability of random forest importance measures. Briefings in Bioinformatics 2011, 12(1):86-89.
24. Nicodemus KK: Letter to the Editor: On the stability and ranking of predictors from random forest variable importance measures. Briefings in Bioinformatics 2011, 12(4):369-373.
25. Li S: Random KNN Modeling and Variable Selection for High Dimensional Data. PhD thesis. West Virginia University; 2009.
26. Fix E, Hodges J: Discriminatory Analysis - Nonparametric Discrimination: Consistency Properties. Tech. Rep. 21-49-004, 4, US Air Force, School of Aviation Medicine; 1951.
27. Cover T, Hart P: Nearest neighbor pattern classification. IEEE Transactions on Information Theory 1967, IT-13:21-27.
28. Hastie T, Tibshirani R, Friedman J: The Elements of Statistical Learning - Data Mining, Inference, and Prediction. New York: Springer; 2001, chap. 9, section 2.
29. Crookston NL, Finley AO: yaImpute: an R package for kNN imputation. Journal of Statistical Software 2007, 23(10):1-16.
30. Troyanskaya O, Cantor M, Sherlock G, Brown P, Hastie T, Tibshirani R, Botstein D, Altman RB: Missing value estimation methods for DNA microarrays. Bioinformatics 2001, 17(6):520-525.
31. Ho TK: The random subspace method for constructing decision forests. IEEE Transactions on Pattern Analysis and Machine Intelligence 1998, 20(8):832-844.
32. Dietterich TG: Machine-learning research: four current directions. The AI Magazine 1997, 18(4):97-136.
33. Breiman L: Heuristics of instability and stabilization in model selection. The Annals of Statistics 1996, 24(6):2350-2383.
34. McNemar Q: Note on the sampling error of the difference between correlated proportions or percentages. Psychometrika 1947, 12:153-157.
35. Svetnik V, Liaw A, Tong C, Culberson J, Sheridan R, Feuston B: Random forest: a classification and regression tool for compound classification and QSAR modeling. Journal of Chemical Information and Computer Science 2003, 43(6):1947-1958.
36. Lin Y, Jeon Y: Random forests and adaptive nearest neighbors. Journal of the American Statistical Association 2006, 101(474):578-590.