1. School of Computer Science and Engineering, South China University of Technology, Guangzhou 510006, China 2. Department of Computer Science, City University of Hong Kong, Hong Kong SAR 999077, China
Despite significant successes in knowledge discovery, traditional machine learning methods may fail to achieve satisfactory performance when dealing with complex data, such as imbalanced, high-dimensional, or noisy data. The reason is that it is difficult for these methods to capture the multiple characteristics and underlying structure of such data. In this context, how to construct an effective and efficient knowledge discovery and mining model has become an important topic in the data mining field. Ensemble learning, one active research direction, aims to integrate data fusion, data modeling, and data mining into a unified framework. Specifically, ensemble learning first extracts a set of features with a variety of transformations. Based on these learned features, multiple learning algorithms are applied to produce weak predictive results. Finally, ensemble learning fuses the informative knowledge from these results, typically via adaptive voting schemes, to achieve knowledge discovery and better predictive performance. In this paper, we review the research progress of the mainstream approaches to ensemble learning and classify them according to their characteristics. In addition, we present challenges and possible research directions for each mainstream approach, and we also introduce the combination of ensemble learning with other active machine learning topics such as deep learning and reinforcement learning.
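The workflow described above — generate diverse base learners, collect their weak predictions, and fuse them by voting — can be illustrated with a minimal bagging sketch. This is an illustrative toy, not any method surveyed here: the 1-D dataset, the decision-stump weak learner, and all parameter values are hypothetical.

```python
import random
from collections import Counter

# Hypothetical 1-D dataset: points below 5.0 are class 0, the rest class 1.
random.seed(0)
data = [(x / 10.0, 0 if x < 50 else 1) for x in range(100)]  # (feature, label)

def train_stump(sample):
    """Weak learner: pick the threshold that best splits the given sample."""
    best_t, best_acc = 0.0, -1.0
    for t in (i / 10.0 for i in range(100)):
        acc = sum((x >= t) == (y == 1) for x, y in sample) / len(sample)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

def bagging_ensemble(data, n_learners=7):
    """Diversity via bagging: each learner sees its own bootstrap resample."""
    return [train_stump([random.choice(data) for _ in data])
            for _ in range(n_learners)]

def predict(stumps, x):
    """Fuse the weak learners' predictions with a majority vote."""
    votes = Counter(int(x >= t) for t in stumps)
    return votes.most_common(1)[0][0]

stumps = bagging_ensemble(data)
print(predict(stumps, 0.5), predict(stumps, 9.5))
```

Bootstrap resampling is only one way to induce diversity; the surveyed approaches also vary the feature subspace, the transformation, or the learning algorithm itself before the fusion step.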
B V Dasarathy, B V Sheela. A composite classifier system design: concepts and methodology. Proceedings of the IEEE, 1979, 67(5): 708–713 https://doi.org/10.1109/PROC.1979.11321
M Kearns. Learning boolean formulae or finite automata is as hard as factoring. Technical Report TR-14-88, Harvard University Aiken Computation Laboratory, 1988
N Garcia-Pedrajas. Constructing ensembles of classifiers by means of weighted instance selection. IEEE Transactions on Neural Networks, 2009, 20(2): 258–277 https://doi.org/10.1109/TNN.2008.2005496
N Garcia-Pedrajas, J Maudes-Raedo, C Garcia-Osorio, J J Rodriguez-Díez, D E Linden, S J Johnston. Supervised subspace projections for constructing ensembles of classifiers. Information Sciences, 2012, 193(11): 1–21 https://doi.org/10.1016/j.ins.2011.06.023
L I Kuncheva, J J Rodriguez, C O Plumpton, D E Linden, S J Johnston. Random subspace ensembles for fMRI classification. IEEE Transactions on Medical Imaging, 2010, 29(2): 531–542 https://doi.org/10.1109/TMI.2009.2037756
Y Ye, Q Wu, J Z Huang, M K Ng, X Li. Stratified sampling for feature subspace selection in random forests for high dimensional data. Pattern Recognition, 2013, 46(3): 769–787 https://doi.org/10.1016/j.patcog.2012.09.005
R Bryll, R Gutierrez-Osuna, F Quek. Attribute bagging: improving accuracy of classifier ensembles by using random feature subsets. Pattern Recognition, 2003, 36(6): 1291–1302 https://doi.org/10.1016/S0031-3203(02)00121-8
A Blum, T Mitchell. Combining labeled and unlabeled data with co-training. In: Proceedings of the 11th Annual Conference on Computational Learning Theory. 1998, 92–100 https://doi.org/10.1145/279943.279962
J Wang, S W Luo, X H Zeng. A random subspace method for co-training. In: Proceedings of 2008 IEEE International Joint Conference on Neural Networks. 2008, 195–200
J Zhang, D Zhang. A novel ensemble construction method for multiview data using random cross-view correlation between within-class examples. Pattern Recognition, 2011, 44(6): 1162–1171 https://doi.org/10.1016/j.patcog.2010.12.011
Y Guo, L Jiao, S Wang, F Liu, K Rong, T Xiong. A novel dynamic rough subspace based selective ensemble. Pattern Recognition, 2015, 48(5): 1638–1652 https://doi.org/10.1016/j.patcog.2014.11.001
T Windeatt, R Duangsoithong, R Smith. Embedded feature ranking for ensemble MLP classifiers. IEEE Transactions on Neural Networks, 2011, 22(6): 988–994 https://doi.org/10.1109/TNN.2011.2138158
J J Rodriguez, L I Kuncheva, C J Alonso. Rotation forest: a new classifier ensemble method. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2006, 28(10): 1619–1630 https://doi.org/10.1109/TPAMI.2006.211
A Takemura, A Shimizu, K Hamamoto. Discrimination of breast tumors in ultrasonic images using an ensemble classifier based on the AdaBoost algorithm with feature selection. IEEE Transactions on Medical Imaging, 2010, 29(3): 598–609 https://doi.org/10.1109/TMI.2009.2022630
M F Amasyali, O K Ersoy. Classifier ensembles with the extended space forest. IEEE Transactions on Knowledge and Data Engineering, 2013, 26(3): 549–562 https://doi.org/10.1109/TKDE.2013.9
R Polikar, J Depasquale, H S Mohammed, G Brown, L I Kuncheva. Learn++.MF: a random subspace approach for the missing feature problem. Pattern Recognition, 2010, 43(11): 3817–3832 https://doi.org/10.1016/j.patcog.2010.05.028
L Nanni, A Lumini. Evolved feature weighting for random subspace classifier. IEEE Transactions on Neural Networks, 2008, 19(2): 363–366 https://doi.org/10.1109/TNN.2007.910737
J Kennedy, R C Eberhart. A discrete binary version of the particle swarm optimization algorithm. Computational Cybernetics and Simulation, 1997, 5(1): 4104–4108
Z H Zhou, W Tang. Selective ensemble of decision trees. In: Proceedings of International Workshop on Rough Sets, Fuzzy Sets, Data Mining, and Granular-Soft Computing. 2003, 476–483 https://doi.org/10.1007/3-540-39205-X_81
R Diao, F Chao, T Peng, N Snooke, Q Shen. Feature selection inspired classifier ensemble reduction. IEEE Transactions on Cybernetics, 2014, 44(8): 1259–1268 https://doi.org/10.1109/TCYB.2013.2281820
Z Yu, D Wang, J You, H S Wong, S Wu, J Zhang, G Han. Progressive subspace ensemble learning. Pattern Recognition, 2016, 60: 692–705 https://doi.org/10.1016/j.patcog.2016.06.017
Z Yu, D Wang, Z Zhao, C P Chen, J You, H S Wong, J Zhang. Hybrid incremental ensemble learning for noisy real-world data classification. IEEE Transactions on Cybernetics, 2017, 99: 1–14
E M Dos Santos, R Sabourin, P Maupin. A dynamic overproduce-and-choose strategy for the selection of classifier ensembles. Pattern Recognition, 2008, 41(10): 2993–3009 https://doi.org/10.1016/j.patcog.2008.03.027
D Hernández-Lobato, G Martínez-Muñoz, A Suárez. Statistical instance-based pruning in ensembles of independent classifiers. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2009, 31(2): 364–369 https://doi.org/10.1109/TPAMI.2008.204
G Martínez-Muñoz, D Hernández-Lobato, A Suárez. An analysis of ensemble pruning techniques based on ordered aggregation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2009, 31(2): 245–259 https://doi.org/10.1109/TPAMI.2008.78
C De Stefano, G Folino, F Fontanella, A S Di Freca. Using Bayesian networks for selecting classifiers in GP ensembles. Information Sciences, 2014, 258: 200–216 https://doi.org/10.1016/j.ins.2013.09.049
A Rahman, B Verma. Novel layered clustering-based approach for generating ensemble of classifiers. IEEE Transactions on Neural Networks, 2011, 22(5): 781–792 https://doi.org/10.1109/TNN.2011.2118765
B Verma, A Rahman. Cluster-oriented ensemble classifier: impact of multicluster characterization on ensemble classifier learning. IEEE Transactions on Knowledge and Data Engineering, 2012, 24(4): 605–618 https://doi.org/10.1109/TKDE.2011.28
L Zhang, P N Suganthan. Oblique decision tree ensemble via multisurface proximal support vector machine. IEEE Transactions on Cybernetics, 2015, 45(10): 2165–2176 https://doi.org/10.1109/TCYB.2014.2366468
P J Tan, D L Dowe. Decision forests with oblique decision trees. In: Proceedings of Mexican International Conference on Artificial Intelligence. 2006, 593–603 https://doi.org/10.1007/11925231_56
Z Yu, H Chen, J Liu, J You, H Leung, G Han. Hybrid k-nearest neighbor classifier. IEEE Transactions on Cybernetics, 2016, 46(6): 1263–1275 https://doi.org/10.1109/TCYB.2015.2443857
D Hernández-Lobato, G Martínez-Muñoz, A Suárez. How large should ensembles of classifiers be? Pattern Recognition, 2013, 46(5): 1323–1336 https://doi.org/10.1016/j.patcog.2012.10.021
X Z Wang, H J Xing, Y Li, Q Hua, C R Dong, W Pedrycz. A study on relationship between generalization abilities and fuzziness of base classifiers in ensemble learning. IEEE Transactions on Fuzzy Systems, 2015, 23(5): 1638–1654 https://doi.org/10.1109/TFUZZ.2014.2371479
L I Kuncheva. A bound on kappa-error diagrams for analysis of classifier ensembles. IEEE Transactions on Knowledge and Data Engineering, 2013, 25(3): 494–501 https://doi.org/10.1109/TKDE.2011.234
W Gao, Z H Zhou. Approximation stability and boosting. In: Proceedings of International Conference on Algorithmic Learning Theory. 2010, 59–73 https://doi.org/10.1007/978-3-642-16108-7_9
X C Yin, K Huang, H W Hao, K Iqbal, Z B Wang. A novel classifier ensemble method with sparsity and diversity. Neurocomputing, 2014, 134: 214–221 https://doi.org/10.1016/j.neucom.2013.07.054
N Li, Y Yu, Z H Zhou. Diversity regularized ensemble pruning. In: Proceedings of Joint European Conference on Machine Learning and Knowledge Discovery in Databases. 2012, 330–345 https://doi.org/10.1007/978-3-642-33460-3_27
D Zhang, S Chen, Z H Zhou, Q Yang. Constraint projections for ensemble learning. In: Proceedings of the 23rd National Conference on Artificial Intelligence - Volume 2. 2008, 758–763
Z H Zhou, N Li. Multi-information ensemble diversity. In: Proceedings of International Workshop on Multiple Classifier Systems. 2010, 134–144 https://doi.org/10.1007/978-3-642-12127-2_14
T Sun, Z H Zhou. Structural diversity for decision tree ensemble learning. Frontiers of Computer Science, 2018, 12(3): 560–570 https://doi.org/10.1007/s11704-018-7151-8
S Mao, L Jiao, L Xiong, S Gou, B Chen, S K Yeung. Weighted classifier ensemble based on quadratic form. Pattern Recognition, 2015, 48(5): 1688–1706 https://doi.org/10.1016/j.patcog.2014.10.017
Z Yu, Z Wang, J You, J Zhang, J Liu, H S Wong, G Han. A new kind of nonparametric test for statistical comparison of multiple classifiers over multiple datasets. IEEE Transactions on Cybernetics, 2017, 47(12): 4418–4431 https://doi.org/10.1109/TCYB.2016.2611020
K J Kim, S B Cho. An evolutionary algorithm approach to optimal ensemble classifiers for DNA microarray data analysis. IEEE Transactions on Evolutionary Computation, 2008, 12(3): 377–388 https://doi.org/10.1109/TEVC.2007.906660
C Qian, Y Yu, Z H Zhou. Pareto ensemble pruning. In: Proceedings of the 29th AAAI Conference on Artificial Intelligence. 2015
Z H Zhou, J Feng. Deep forest: towards an alternative to deep neural networks. 2017, arXiv preprint arXiv:1702.08835 https://doi.org/10.24963/ijcai.2017/497
J Feng, Z H Zhou. AutoEncoder by forest. In: Proceedings of the 32nd AAAI Conference on Artificial Intelligence. 2018
Y L Zhang, J Zhou, W Zheng, J Feng, L Li, Z Liu, Z H Zhou. Distributed deep forest and its application to automatic detection of cashout fraud. 2018, arXiv preprint arXiv:1805.04234
J Feng, Y Yu, Z H Zhou. Multi-layered gradient boosting decision trees. In: Proceedings of Advances in Neural Information Processing Systems. 2018, 3555–3565
M Pang, K M Ting, P Zhao, Z H Zhou. Improving deep forest by confidence screening. In: Proceedings of the 18th IEEE International Conference on Data Mining. 2018, 1194–1199 https://doi.org/10.1109/ICDM.2018.00158
Z H Zhou, M L Zhang. Solving multi-instance problems with classifier ensemble based on constructive clustering. Knowledge and Information Systems, 2007, 11(2): 155–170 https://doi.org/10.1007/s10115-006-0029-3
X Zhu, P Zhang, X Lin, Y Shi. Active learning from stream data using optimal weight classifier ensemble. IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), 2010, 40(6): 1607–1621 https://doi.org/10.1109/TSMCB.2010.2042445
D Brzezinski, J Stefanowski. Reacting to different types of concept drift: the accuracy updated ensemble algorithm. IEEE Transactions on Neural Networks and Learning Systems, 2014, 25(1): 81–94 https://doi.org/10.1109/TNNLS.2013.2251352
M D Muhlbaier, A Topalis, R Polikar. Learn++.NC: combining ensemble of classifiers with dynamically weighted consult-and-vote for efficient incremental learning of new classes. IEEE Transactions on Neural Networks, 2009, 20(1): 152–168 https://doi.org/10.1109/TNN.2008.2008326
J Xiao, C He, X Jiang, D Liu. A dynamic classifier ensemble selection approach for noise data. Information Sciences, 2010, 180(18): 3402–3421 https://doi.org/10.1016/j.ins.2010.05.021
M Galar, A Fernandez, E Barrenechea, H Bustince, F Herrera. A review on ensembles for the class imbalance problem: bagging, boosting, and hybrid-based approaches. IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews), 2012, 42(4): 463–484 https://doi.org/10.1109/TSMCC.2011.2161285
X Y Liu, J Wu, Z H Zhou. Exploratory under-sampling for class-imbalance learning. IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), 2009, 39(2): 539–550 https://doi.org/10.1109/TSMCB.2008.2007853
B Sun, H Chen, J Wang, H Xie. Evolutionary under-sampling based bagging ensemble method for imbalanced data classification. Frontiers of Computer Science, 2018, 12(2): 331–350 https://doi.org/10.1007/s11704-016-5306-z
Q Li, G Li, W Niu, Y Cao, L Chang, J Tan, L Guo. Boosting imbalanced data learning with wiener process oversampling. Frontiers of Computer Science, 2017, 11(5): 836–851 https://doi.org/10.1007/s11704-016-5250-y
J H Abawajy, A Kelarev, M Chowdhury. Large iterative multitier ensemble classifiers for security of big data. IEEE Transactions on Emerging Topics in Computing, 2014, 2(3): 352–363 https://doi.org/10.1109/TETC.2014.2316510
N Li, Z H Zhou. Selective ensemble of classifier chains. In: Proceedings of International Workshop on Multiple Classifier Systems. 2013, 146–156 https://doi.org/10.1007/978-3-642-38067-9_13
N Li, Y Jiang, Z H Zhou. Multi-label selective ensemble. In: Proceedings of International Workshop on Multiple Classifier Systems. 2015, 76–88 https://doi.org/10.1007/978-3-319-20248-8_7
Z Yu, Z Deng, H S Wong, L Tan. Identifying protein-kinase-specific phosphorylation sites based on the Bagging-AdaBoost ensemble approach. IEEE Transactions on Nanobioscience, 2010, 9(2): 132–143 https://doi.org/10.1109/TNB.2010.2043682
D J Yu, J Hu, J Yang, H B Shen, J Tang, J Y Yang. Designing template-free predictor for targeting protein-ligand binding sites with classifier ensemble and spatial clustering. IEEE/ACM Transactions on Computational Biology and Bioinformatics, 2013, 10(4): 994–1008 https://doi.org/10.1109/TCBB.2013.104
G Yu, H Rangwala, C Domeniconi, G Zhang, Z Yu. Protein function prediction using multilabel ensemble classification. IEEE/ACM Transactions on Computational Biology and Bioinformatics, 2013, 10(4): 1 https://doi.org/10.1109/TCBB.2013.111
M R Daliri. Combining extreme learning machines using support vector machines for breast tissue classification. Computer Methods in Biomechanics and Biomedical Engineering, 2015, 18(2): 185–191 https://doi.org/10.1080/10255842.2013.789100
L Oliveira, U Nunes, P Peixoto. On exploration of classifier ensemble synergism in pedestrian detection. IEEE Transactions on Intelligent Transportation Systems, 2010, 11(1): 16–27 https://doi.org/10.1109/TITS.2009.2026447
Y Xu, X Cao, H Qiao. An efficient tree classifier ensemble-based approach for pedestrian detection. IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), 2011, 41(1): 107–117 https://doi.org/10.1109/TSMCB.2010.2046890
B Zhang. Reliable classification of vehicle types based on cascade classifier ensembles. IEEE Transactions on Intelligent Transportation Systems, 2013, 14(1): 322–332 https://doi.org/10.1109/TITS.2012.2213814
S Sun, C Zhang. The selective random subspace predictor for traffic flow forecasting. IEEE Transactions on Intelligent Transportation Systems, 2007, 8(2): 367–373 https://doi.org/10.1109/TITS.2006.888603
Y Su, S Shan, X Chen, W Gao. Hierarchical ensemble of global and local classifiers for face recognition. IEEE Transactions on Image Processing, 2009, 18(8): 1885–1896 https://doi.org/10.1109/TIP.2009.2021737
P Zhang, T D Bui, C Y Suen. A novel cascade ensemble classifier system with a high recognition performance on handwritten digits. Pattern Recognition, 2007, 40(12): 3415–3429 https://doi.org/10.1016/j.patcog.2007.03.022
X S Xu, X Xue, Z H Zhou. Ensemble multi-instance multi-label learning approach for video annotation task. In: Proceedings of the 19th ACM International Conference on Multimedia. 2011, 1153–1156 https://doi.org/10.1145/2072298.2071962
V Hautamaki, T Kinnunen, F Sedlák, K A Lee, B Ma, H Li. Sparse classifier fusion for speaker verification. IEEE Transactions on Audio Speech and Language Processing, 2013, 21(8): 1622–1631 https://doi.org/10.1109/TASL.2013.2256895
Y Guan, C T Li, F Roli. On reducing the effect of covariate factors in gait recognition: a classifier ensemble method. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2015, 37(7): 1521–1528 https://doi.org/10.1109/TPAMI.2014.2366766
D Tao, X Tang, X Li, X Wu. Asymmetric bagging and random subspace for support vector machines-based relevance feedback in image retrieval. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2006, 28(7): 1088–1099 https://doi.org/10.1109/TPAMI.2006.134
W Hu, W Hu, S Maybank. AdaBoost-based algorithm for network intrusion detection. IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), 2008, 38(2): 577–583 https://doi.org/10.1109/TSMCB.2007.914695
P Zhang, X Zhu, Y Shi, L Guo, X Wu. Robust ensemble learning for mining noisy data streams. Decision Support Systems, 2011, 50(2): 469–479 https://doi.org/10.1016/j.dss.2010.11.004
L Yu, S Wang, K K Lai. Developing an SVM-based ensemble learning system for customer risk identification collaborating with customer relationship management. Frontiers of Computer Science, 2010, 4(2): 196–203 https://doi.org/10.1007/s11704-010-0508-2
E Fersini, E Messina, F A Pozzi. Sentiment analysis: Bayesian ensemble learning. Decision Support Systems, 2014, 68: 26–38 https://doi.org/10.1016/j.dss.2014.10.004
G Yu, G Zhang, Z Yu, C Domeniconi, J You, G Han. Semi-supervised ensemble classification in subspaces. Applied Soft Computing, 2012, 12(5): 1511–1522 https://doi.org/10.1016/j.asoc.2011.12.019
Z Yu, Y Zhang, C L P Chen, J You, H S Wong, D Dai, S Wu, J Zhang. Multiobjective semisupervised classifier ensemble. IEEE Transactions on Cybernetics, 2019, 49(6): 2280–2293 https://doi.org/10.1109/TCYB.2018.2824299
O Gharroudi, H Elghazel, A Aussem. A semi-supervised ensemble approach for multi-label learning. In: Proceedings of the 16th IEEE International Conference on Data Mining Workshops (ICDMW). 2016, 1197–1204 https://doi.org/10.1109/ICDMW.2016.0173
X Lu, J Zhang, T Li, Y Zhang. Hyperspectral image classification based on semi-supervised rotation forest. Remote Sensing, 2017, 9(9): 924 https://doi.org/10.3390/rs9090924
S Wang, K Chen. Ensemble learning with active data selection for semi-supervised pattern classification. In: Proceedings of 2007 International Joint Conference on Neural Networks. 2007, 355–360 https://doi.org/10.1109/IJCNN.2007.4370982
R G F Soares, H Chen, X Yao. A cluster-based semi-supervised ensemble for multiclass classification. IEEE Transactions on Emerging Topics in Computational Intelligence, 2017, 1(6): 408–420 https://doi.org/10.1109/TETCI.2017.2743219
H Woo, C H Park. Semi-supervised ensemble learning using label propagation. In: Proceedings of the 12th IEEE International Conference on Computer and Information Technology. 2012, 421–426 https://doi.org/10.1109/CIT.2012.98
M L Zhang, Z H Zhou. Exploiting unlabeled data to enhance ensemble diversity. Data Mining and Knowledge Discovery, 2013, 26(1): 98–129 https://doi.org/10.1007/s10618-011-0243-9
M Alves, A L C Bazzan, M Recamonde-Mendoza. Social-training: ensemble learning with voting aggregation for semi-supervised classification tasks. In: Proceedings of 2017 Brazilian Conference on Intelligent Systems (BRACIS). 2017, 7–12 https://doi.org/10.1109/BRACIS.2017.42
Z Yu, Y Lu, J Zhang, J You, H S Wong, Y Wang, G Han. Progressive semi-supervised learning of multiple classifiers. IEEE Transactions on Cybernetics, 2018, 48(2): 689–702 https://doi.org/10.1109/TCYB.2017.2651114
M J Hosseini, A Gholipour, H Beigy. An ensemble of cluster-based classifiers for semi-supervised classification of non-stationary data streams. Knowledge and Information Systems, 2016, 46(3): 567–597 https://doi.org/10.1007/s10115-015-0837-4
Y Wang, T Li. Improving semi-supervised co-forest algorithm in evolving data streams. Applied Intelligence, 2018, 48(10): 3248–3262 https://doi.org/10.1007/s10489-018-1149-7
Z Yu, Y Zhang, J You, C P Chen, H S Wong, G Han, J Zhang. Adaptive semi-supervised classifier ensemble for high dimensional data classification. IEEE Transactions on Cybernetics, 2019, 49(2): 366–379 https://doi.org/10.1109/TCYB.2017.2761908
M Li, Z H Zhou. Improve computer-aided diagnosis with machine learning techniques using undiagnosed samples. IEEE Transactions on Systems, Man, and Cybernetics, Part A: Systems and Humans, 2007, 37(6): 1088–1098 https://doi.org/10.1109/TSMCA.2007.904745
U Guz, S Cuendet, D Hakkani-Tur, G Tur. Multi-view semi-supervised learning for dialog act segmentation of speech. IEEE Transactions on Audio Speech and Language Processing, 2010, 18(2): 320–329 https://doi.org/10.1109/TASL.2009.2028371
L Shi, X Ma, L Xi, Q Duan, J Zhao. Rough set and ensemble learning based semi-supervised algorithm for text classification. Expert Systems with Applications, 2011, 38(5): 6300–6306 https://doi.org/10.1016/j.eswa.2010.11.069
T S Abdelgayed, W G Morsi, T S Sidhu. Fault detection and classification based on co-training of semi-supervised machine learning. IEEE Transactions on Industrial Electronics, 2018, 65(2): 1595–1605 https://doi.org/10.1109/TIE.2017.2726961
S Saydali, H Parvin, A A Safaei. Classifier ensemble by semi-supervised learning: local aggregation methodology. In: Proceedings of International Doctoral Workshop on Mathematical and Engineering Methods in Computer Science. 2015, 119–132 https://doi.org/10.1007/978-3-319-29817-7_11
W Shao, X Tian. Semi-supervised selective ensemble learning based on distance to model for nonlinear soft sensor development. Neurocomputing, 2017, 222: 91–104 https://doi.org/10.1016/j.neucom.2016.10.005
I Ahmed, R Ali, D Guan, Y K Lee, S Lee, T Chung. Semi-supervised learning using frequent itemset and ensemble learning for SMS classification. Expert Systems with Applications, 2015, 42(3): 1065–1073 https://doi.org/10.1016/j.eswa.2014.08.054
A Strehl, J Ghosh. Cluster ensembles: a knowledge reuse framework for combining partitionings. Journal of Machine Learning Research, 2002, 3(3): 583–617
F Yang, X Li, Q Li, T Li. Exploring the diversity in cluster ensemble generation: random sampling and random projection. Expert Systems with Applications, 2014, 41(10): 4844–4866 https://doi.org/10.1016/j.eswa.2014.01.028
O Wu, W Hu, S J Maybank, M Zhu, B Li. Efficient clustering aggregation based on data fragments. IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), 2012, 42(3): 913–926 https://doi.org/10.1109/TSMCB.2012.2183591
L Franek, X Jiang. Ensemble clustering by means of clustering embedding in vector spaces. Pattern Recognition, 2014, 47(2): 833–842 https://doi.org/10.1016/j.patcog.2013.08.019
Z Yu, H S Wong, H Wang. Graph-based consensus clustering for class discovery from gene expression data. Bioinformatics, 2007, 23(21): 2888–2896 https://doi.org/10.1093/bioinformatics/btm463
Z Yu, H S Wong, J You, G Yu, G Han. Hybrid cluster ensemble framework based on the random combination of data transformation operators. Pattern Recognition, 2012, 45(5): 1826–1837 https://doi.org/10.1016/j.patcog.2011.11.016
Z Yu, L Li, J You, H S Wong, G Han. SC3: triple spectral clustering-based consensus clustering framework for class discovery from cancer gene expression profiles. IEEE/ACM Transactions on Computational Biology and Bioinformatics, 2012, 9(6): 1751–1765 https://doi.org/10.1109/TCBB.2012.108
Z Yu, H Chen, J You, G Han, L Li. Hybrid fuzzy cluster ensemble framework for tumor clustering from biomolecular data. IEEE/ACM Transactions on Computational Biology and Bioinformatics, 2013, 10(3): 657–670 https://doi.org/10.1109/TCBB.2013.59
Z Yu, L Li, J Liu, J Zhang, G Han. Adaptive noise immune cluster ensemble using affinity propagation. IEEE Transactions on Knowledge and Data Engineering, 2015, 27(12): 3176–3189 https://doi.org/10.1109/TKDE.2015.2453162
A L N Fred, A K Jain. Combining multiple clusterings using evidence accumulation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2005, 27(6): 835–850 https://doi.org/10.1109/TPAMI.2005.113
A Lourenco, A L N Fred, A K Jain. On the scalability of evidence accumulation clustering. In: Proceedings of the 20th International Conference on Pattern Recognition. 2010, 782–785 https://doi.org/10.1109/ICPR.2010.197
M F Amasyali, O Ersoy. The performance factors of clustering ensembles. In: Proceedings of the 16th IEEE Signal Processing, Communication and Applications Conference. 2008, 1–4 https://doi.org/10.1109/SIU.2008.4632587
X Z Fern, C E Brodley. Random projection for high dimensional data clustering: a cluster ensemble approach. In: Proceedings of the 20th International Conference on Machine Learning (ICML-03). 2003, 186–193
L I Kuncheva, C J Whitaker. Measures of diversity in classifier ensembles and their relationship with the ensemble accuracy. Machine Learning, 2003, 51(2): 181–207 https://doi.org/10.1023/A:1022859003006
L I Kuncheva, D P Vetrov. Evaluation of stability of k-means cluster ensembles with respect to random initialization. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2006, 28(11): 1798–1808 https://doi.org/10.1109/TPAMI.2006.226
Y Shi, Z Yu, C L P Chen, J You, H S Wong, Y D Wang, J Zhang. Transfer clustering ensemble selection. IEEE Transactions on Cybernetics, 2018, PP(99): 1–14 https://doi.org/10.1109/TCYB.2018.2885585
A P Topchy, M H C Law, A K Jain, A L Fred. Analysis of consensus partition in cluster ensemble. In: Proceedings of the 4th IEEE International Conference on Data Mining (ICDM’04). 2004, 225–232
T Wang. CA-tree: a hierarchical structure for efficient and scalable co-association-based cluster ensembles. IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), 2011, 41(3): 686–698 https://doi.org/10.1109/TSMCB.2010.2086059
X Z Fern, W Lin. Cluster ensemble selection. Statistical Analysis and Data Mining: The ASA Data Science Journal, 2008, 1(3): 128–141 https://doi.org/10.1002/sam.10008
J Azimi, X Fern. Adaptive cluster ensemble selection. In: Proceedings of the 21st International Joint Conference on Artificial Intelligence. 2009, 992–997
X Wang, D Han, C Han. Rough set based cluster ensemble selection. In: Proceedings of the 16th International Conference on Information Fusion. 2013, 438–444
Z Yu, L Li, Y Gao, J You, J Liu, H S Wong, G Han. Hybrid clustering solution selection strategy. Pattern Recognition, 2014, 47(10): 3362–3375 https://doi.org/10.1016/j.patcog.2014.04.005
Z Yu, L Li, H S Wong, J You, G Han, Y Gao, G Yu. Probabilistic cluster structure ensemble. Information Sciences, 2014, 267(5): 16–34 https://doi.org/10.1016/j.ins.2014.01.030
Z Yu, X Zhu, H S Wong, J You, J Zhang, G Han. Distribution-based cluster structure selection. IEEE Transactions on Cybernetics, 2017, 47(11): 3554–3567 https://doi.org/10.1109/TCYB.2016.2569529
Y Yang, K Chen. Temporal data clustering via weighted clustering ensemble with different representations. IEEE Transactions on Knowledge and Data Engineering, 2010, 23(2): 307–320 https://doi.org/10.1109/TKDE.2010.112
Z Yu, H S Wong. Class discovery from gene expression data based on perturbation and cluster ensemble. IEEE Transactions on Nanobioscience, 2009, 8(2): 147–160 https://doi.org/10.1109/TNB.2009.2023321
Z Yu, H Chen, J You, J Liu, H S Wong, G Han, L Li. Adaptive fuzzy consensus clustering framework for clustering analysis of cancer data. IEEE/ACM Transactions on Computational Biology and Bioinformatics, 2015, 12(4): 887–901 https://doi.org/10.1109/TCBB.2014.2359433
R Avogadri, G Valentini. Fuzzy ensemble clustering based on random projections for DNA microarray data analysis. Artificial Intelligence in Medicine, 2009, 45(2): 173–183 https://doi.org/10.1016/j.artmed.2008.07.014
S Mimaroglu, E Aksehirli. DICLENS: divisive clustering ensemble with automatic cluster number. IEEE/ACM Transactions on Computational Biology and Bioinformatics, 2012, 9(2): 408–420 https://doi.org/10.1109/TCBB.2011.129
A Alush, J Goldberger. Ensemble segmentation using efficient integer linear programming. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2012, 34(10): 1966–1977 https://doi.org/10.1109/TPAMI.2011.280
H Li, F Meng, Q Wu, B Luo. Unsupervised multiclass region cosegmentation via ensemble clustering and energy minimization. IEEE Transactions on Circuits and Systems for Video Technology, 2014, 24(5): 789–801 https://doi.org/10.1109/TCSVT.2013.2280851
X Zhang, L Jiao, F Liu, L Bo, M Gong. Spectral clustering ensemble applied to SAR image segmentation. IEEE Transactions on Geoscience and Remote Sensing, 2008, 46(7): 2126–2136 https://doi.org/10.1109/TGRS.2008.918647
J Jia, B Liu, L Jiao. Soft spectral clustering ensemble applied to image segmentation. Frontiers of Computer Science, 2011, 5(1): 66–78 https://doi.org/10.1007/s11704-010-0161-9
G Rafiee, S S Dlay, W L Woo. Region-of-interest extraction in low depth of field images using ensemble clustering and difference of Gaussian approaches. Pattern Recognition, 2013, 46(10): 2685–2699 https://doi.org/10.1016/j.patcog.2013.03.006
X Huang, X Zheng, W Yuan, F Wang, S Zhu. Enhanced clustering of biomedical documents using ensemble non-negative matrix factorization. Information Sciences, 2011, 181(11): 2293–2302 https://doi.org/10.1016/j.ins.2011.01.029
N Bassiou, V Moschou, C Kotropoulos. Speaker diarization exploiting the eigengap criterion and cluster ensembles. IEEE Transactions on Audio Speech and Language Processing, 2010, 18(8): 2134–2144 https://doi.org/10.1109/TASL.2010.2042121
W Zhuang, Y Ye, Y Chen, T Li. Ensemble clustering for internet security applications. IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews), 2012, 42(6): 1784–1796 https://doi.org/10.1109/TSMCC.2012.2222025
C F Tsai, C Hung. Cluster ensembles in collaborative filtering recommendation. Applied Soft Computing, 2012, 12(4): 1417–1425 https://doi.org/10.1016/j.asoc.2011.11.016
Z Yu, P Luo, J You, H S Wong, H Leung, S Wu, J Zhang, G Han. Incremental semi-supervised clustering ensemble for high dimensional data clustering. IEEE Transactions on Knowledge and Data Engineering, 2016, 28(3): 701–714 https://doi.org/10.1109/TKDE.2015.2499200
Z Yu, Z Kuang, J Liu, H Chen, J Zhang, J You, H S Wong, G Han. Adaptive ensembling of semi-supervised clustering solutions. IEEE Transactions on Knowledge and Data Engineering, 2017, 29(8): 1577–1590 https://doi.org/10.1109/TKDE.2017.2695615
S Wei, Z Li, C Zhang. Combined constraint-based with metric-based in semi-supervised clustering ensemble. International Journal of Machine Learning and Cybernetics, 2018, 9(7): 1085–1100 https://doi.org/10.1007/s13042-016-0628-6
G Karypis, E H S Han, V Kumar. Chameleon: hierarchical clustering using dynamic modeling. Computer, 1999, 32(8): 68–75 https://doi.org/10.1109/2.781637
W Xiao, Y Yang, H Wang, T Li, H Xing. Semi-supervised hierarchical clustering ensemble and its application. Neurocomputing, 2016, 173: 1362–1376 https://doi.org/10.1016/j.neucom.2015.09.009
J Zhang, Y Yang, H Wang, A Mahmood, F Huang. Semi-supervised clustering ensemble based on collaborative training. In: Proceedings of International Conference on Rough Sets and Knowledge Technology. 2012, 450–455 https://doi.org/10.1007/978-3-642-31900-6_55
Z H Zhou, M Li. Tri-training: exploiting unlabeled data using three classifiers. IEEE Transactions on Knowledge and Data Engineering, 2005, 17(11): 1529–1541 https://doi.org/10.1109/TKDE.2005.186
Z Yu, P Luo, J Liu, H S Wong, J You, G Han, J Zhang. Semi-supervised ensemble clustering based on selected constraint projection. IEEE Transactions on Knowledge and Data Engineering, 2018, 30(12): 2394–2407 https://doi.org/10.1109/TKDE.2018.2818729
Y Yang, F Teng, T Li, H Wang, Q Zhang. Parallel semi-supervised multi-ant colonies clustering ensemble based on MapReduce methodology. IEEE Transactions on Cloud Computing, 2018, 6(3): 857–867 https://doi.org/10.1109/TCC.2015.2511724
A M Iqbal, A Moh'd, Z Khan. Semi-supervised clustering ensemble by voting. Computer Science, 2012, 2(9): 33–40
D Chen, Y Yang, H Wang, A Mahmood. Convergence analysis of semi-supervised clustering ensemble. In: Proceedings of International Conference on Information Science and Technology. 2013, 783–788 https://doi.org/10.1109/ICIST.2013.6747660
B Yan, C Domeniconi. Subspace metric ensembles for semi-supervised clustering of high dimensional data. In: Proceedings of European Conference on Machine Learning. 2006, 509–520 https://doi.org/10.1007/11871842_48
A Mahmood, T Li, Y Yang, H Wang, M Afzal. Semi-supervised clustering ensemble for Web video categorization. In: Proceedings of International Workshop on Multiple Classifier Systems. 2013, 190–200 https://doi.org/10.1007/978-3-642-38067-9_17
A Mahmood, T Li, Y Yang, H Wang, M Afzal. Semi-supervised evolutionary ensembles for web video categorization. Knowledge-Based Systems, 2015, 76: 53–66 https://doi.org/10.1016/j.knosys.2014.11.030
A Junaidi, G A Fink. A semi-supervised ensemble learning approach for character labeling with minimal human effort. In: Proceedings of 2011 International Conference on Document Analysis and Recognition. 2011, 259–263
Z Yu, H S Wong, J You, Q Yang, H Liao. Knowledge based cluster ensemble for cancer discovery from biomolecular data. IEEE Transactions on Nanobioscience, 2011, 10(2): 76–85 https://doi.org/10.1109/TNB.2011.2144997
Z Yu, H Chen, J You, H S Wong, J Liu, L Li, G Han. Double selection based semi-supervised clustering ensemble for tumor clustering from gene expression profiles. IEEE/ACM Transactions on Computational Biology and Bioinformatics, 2014, 11(4): 727–740 https://doi.org/10.1109/TCBB.2014.2315996
A Krogh, J Vedelsby. Neural network ensembles, cross validation and active learning. In: Proceedings of the 7th International Conference on Neural Information Processing Systems. 1994, 231–238
Z Yin, M Zhao, Y Wang, J Yang, J Zhang. Recognition of emotions using multimodal physiological signals and an ensemble deep learning model. Computer Methods and Programs in Biomedicine, 2017, 140: 93–110 https://doi.org/10.1016/j.cmpb.2016.12.005
A Kumar, J Kim, D Lyndon, M Fulham, D Feng. An ensemble of fine-tuned convolutional neural networks for medical image classification. IEEE Journal of Biomedical and Health Informatics, 2017, 21(1): 31–40 https://doi.org/10.1109/JBHI.2016.2635663
W Liu, M Zhang, Z Luo, Y Cai. An ensemble deep learning method for vehicle type classification on visual traffic surveillance sensors. IEEE Access, 2017, 5: 24417–24425 https://doi.org/10.1109/ACCESS.2017.2766203
C Kandaswamy, L M Silva, L A Alexandre, J M Santos. Deep transfer learning ensemble for classification. In: Proceedings of International Work-Conference on Artificial Neural Networks. 2015, 335–348 https://doi.org/10.1007/978-3-319-19258-1_29
D Nozza, E Fersini, E Messina. Deep learning and ensemble methods for domain adaptation. In: Proceedings of the 28th IEEE International Conference on Tools with Artificial Intelligence (ICTAI). 2016, 184–189 https://doi.org/10.1109/ICTAI.2016.0037
T Brys, A Harutyunyan, P Vrancx, A Nowé, M E Taylor. Multiobjectivization and ensembles of shapings in reinforcement learning. Neurocomputing, 2017, 263: 48–59 https://doi.org/10.1016/j.neucom.2017.02.096
X L Chen, L Cao, C X Li, Z X Xu, J Lai. Ensemble network architecture for deep reinforcement learning. Mathematical Problems in Engineering, 2018, 2018: 1–6 https://doi.org/10.1155/2018/2129393