Learning random forests for ranking
Liangxiao JIANG
Department of Computer Science, China University of Geosciences, Wuhan 430074, China
|
|
Abstract The random forests (RF) algorithm, which combines the predictions of an ensemble of random trees, has achieved significant improvements in classification accuracy. In many real-world applications, however, a ranking of instances is required in order to make optimal decisions. We therefore focus on the ranking performance of RF in this paper. Our experimental results on all 36 UC Irvine Machine Learning Repository (UCI) data sets published on the main website of the Weka platform show that RF does not perform well in ranking, scoring about the same as a single C4.4 tree. This fact raises the question of whether improvements to RF can scale up its ranking performance. To answer this question, we present an improved random forests (IRF) algorithm, which replaces the information gain measure with the average gain measure and the maximum-likelihood estimate with the similarity-weighted estimate. Our experiments show that IRF significantly outperforms all the other algorithms compared in terms of ranking, while maintaining the high classification accuracy that characterizes RF.
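The abstract does not spell out the two probability estimates, but the contrast between a conventional maximum-likelihood estimate and a similarity-weighted estimate at a tree leaf can be illustrated with a minimal Python sketch. The similarity function used here (the fraction of matching attribute values between two instances) is an illustrative assumption, not necessarily the paper's exact definition:

```python
from collections import Counter

def ml_estimate(labels, classes):
    """Maximum-likelihood class-probability estimate at a leaf:
    P(c | leaf) = n_c / n, ignoring the test instance entirely."""
    counts = Counter(labels)
    n = len(labels)
    return {c: counts.get(c, 0) / n for c in classes}

def similarity(x, y):
    """Illustrative similarity: fraction of attribute values shared."""
    return sum(a == b for a, b in zip(x, y)) / len(x)

def similarity_weighted_estimate(instances, labels, classes, test):
    """Each training instance falling into the leaf votes with a weight
    equal to its similarity to the test instance, instead of weight 1,
    which yields finer-grained probabilities for ranking."""
    weights = {c: 0.0 for c in classes}
    for x, c in zip(instances, labels):
        weights[c] += similarity(test, x)
    total = sum(weights.values()) or 1.0  # guard against all-zero weights
    return {c: w / total for c, w in weights.items()}

# A leaf holding three training instances, two of class 0 and one of class 1:
leaf_instances = [(1, 1), (1, 0), (0, 0)]
leaf_labels = [0, 0, 1]
print(ml_estimate(leaf_labels, [0, 1]))
print(similarity_weighted_estimate(leaf_instances, leaf_labels, [0, 1], (1, 1)))
```

For the test instance (1, 1), the maximum-likelihood estimate gives P(0) = 2/3 regardless of the instance, while the similarity-weighted estimate shifts the probability toward class 0 because the class-0 instances resemble the test instance more closely. Since AUC-based ranking depends on how finely instances can be ordered by their class-probability estimates, this kind of instance-dependent smoothing is one way such an estimate can improve ranking.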
|
Keywords
random forests (RF)
decision tree
random selection
class probability estimation
ranking
area under the receiver operating characteristic curve (AUC)
|
Corresponding Author(s):
JIANG Liangxiao, Email: ljiang@cug.edu.cn
|
Issue Date: 05 March 2011
|
|