Frontiers of Computer Science
Front. Comput. Sci.    2023, Vol. 17 Issue (6) : 176348    https://doi.org/10.1007/s11704-023-3076-y
Artificial Intelligence
Learning label-specific features for decomposition-based multi-class classification
Bin-Bin JIA1, Jun-Ying LIU1, Jun-Yi HANG2,3, Min-Ling ZHANG2,3()
1. College of Electrical and Information Engineering, Lanzhou University of Technology, Lanzhou 730050, China
2. School of Computer Science and Engineering, Southeast University, Nanjing 210096, China
3. Key Laboratory of Computer Network and Information Integration (Ministry of Education), Southeast University, Nanjing 210096, China
Abstract

Multi-class classification can be solved by decomposing it into a set of binary classification problems according to some encoding rule, e.g., one-vs-one, one-vs-rest, or error-correcting output codes. Existing works solve these binary classification problems in the original feature space, which may be suboptimal because different binary problems involve different positive and negative examples. In this paper, we propose to learn label-specific features for each decomposed binary classification problem so as to capture the specific characteristics contained in its positive and negative examples. Specifically, to generate the label-specific features, clustering analysis is conducted separately on the positive and negative examples of each decomposed binary data set to discover their inherent structure, and the label-specific features of an example are then obtained by measuring its similarity to all cluster centers. Experiments clearly validate the effectiveness of learning label-specific features for decomposition-based multi-class classification.

Keywords machine learning      multi-class classification      error-correcting output codes      label-specific features     
Corresponding Author(s): Min-Ling ZHANG   
About author:

* These authors contributed equally to this work.

Just Accepted Date: 14 June 2023   Issue Date: 04 July 2023
 Cite this article:   
Bin-Bin JIA, Jun-Ying LIU, Jun-Yi HANG, et al. Learning label-specific features for decomposition-based multi-class classification[J]. Front. Comput. Sci., 2023, 17(6): 176348.
 URL:  
https://academic.hep.com.cn/fcs/EN/10.1007/s11704-023-3076-y
https://academic.hep.com.cn/fcs/EN/Y2023/V17/I6/176348
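The page carries only the abstract, but the following minimal Python sketch illustrates the idea under a one-vs-rest decomposition: each binary problem gets its own label-specific feature space built from cluster centers of its positive and negative examples, and distances to those centers serve as the new features. The cluster-number rule ceil(r * min(#pos, #neg)), the use of k-means with Euclidean distance, and the LinearSVC base learner are assumptions in the spirit of LIFT-style feature construction, not the authors' exact implementation.

```python
# Minimal sketch (not the authors' released code) of label-specific features
# for decomposition-based multi-class classification, here with one-vs-rest.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import LinearSVC


def fit_centers(X_pos, X_neg, r=0.1, seed=0):
    """Cluster positive and negative examples separately; return all centers.
    The cluster-number rule ceil(r * min(#pos, #neg)) is an assumption."""
    k = max(1, int(np.ceil(r * min(len(X_pos), len(X_neg)))))
    km_pos = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(X_pos)
    km_neg = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(X_neg)
    return np.vstack([km_pos.cluster_centers_, km_neg.cluster_centers_])


def to_specific_space(X, centers):
    """Each label-specific feature is the distance from an example to one center."""
    return np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)


def train_ovr(X, y, r=0.1):
    """One-vs-rest training where every binary classifier is learned in its own
    label-specific feature space instead of the original one."""
    models = {}
    for label in np.unique(y):
        centers = fit_centers(X[y == label], X[y != label], r=r)
        clf = LinearSVC().fit(to_specific_space(X, centers), (y == label).astype(int))
        models[label] = (centers, clf)
    return models


def predict_ovr(models, X):
    """Predict the class whose binary classifier gives the largest margin."""
    labels = list(models)
    scores = [models[l][1].decision_function(to_specific_space(X, models[l][0]))
              for l in labels]
    return np.asarray(labels)[np.argmax(np.vstack(scores), axis=0)]
```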
  
Data set #Example #Label #Feature
Iris 150 3 4
Wine 178 3 13
Glass 214 6 9
Vowel 528 11 10
Dna 2000 3 180
Satimage 4435 6 36
Usps 7291 10 256
Pendigits 7494 10 16
Letter 15000 26 16
Shuttle 43500 7 9
Tab.1  Detailed characteristics of the employed MCC data sets
(a) Accuracy
Data set  OvO (Specific)  OvO (Original)  OvR (Specific)  OvR (Original)  ECOC (Specific)  ECOC (Original)  MSVM
Iris 0.947±0.053 0.820±0.077 0.940±0.049 0.813±0.093 0.960±0.056 0.767±0.047 0.947±0.053
Wine 0.983±0.027 0.966±0.029 0.989±0.024 0.983±0.027 0.977±0.029 0.949±0.032 0.966±0.039
Glass 0.640±0.093 0.620±0.087 0.677±0.120 0.472±0.128 0.706±0.122 0.579±0.081 0.594±0.096
Vowel 0.915±0.029 0.750±0.048 0.723±0.062 0.398±0.083 0.852±0.059 0.441±0.060 0.703±0.071
Dna 0.902±0.060 0.863±0.036 0.883±0.045 0.879±0.032 0.855±0.124 0.653±0.041 0.916±0.023
Satimage 0.851±0.064 0.864±0.018 0.884±0.028 0.778±0.025 0.894±0.009 0.795±0.035 0.858±0.021
Usps 0.960±0.006 0.958±0.010 0.942±0.018 0.788±0.016 0.961±0.005 0.910±0.007 0.944±0.011
Pendigits 0.993±0.002 0.983±0.004 0.983±0.020 0.797±0.013 0.993±0.003 0.864±0.017 0.965±0.008
Letter 0.908±0.009 0.829±0.014 0.843±0.027 0.527±0.010 0.838±0.015 0.475±0.022 0.781±0.016
Shuttle 0.993±0.013 0.882±0.007 0.997±0.001 0.789±0.037 0.997±0.001 0.764±0.049 0.980±0.001
(b) Average-F1
Data set  OvO (Specific)  OvO (Original)  OvR (Specific)  OvR (Original)  ECOC (Specific)  ECOC (Original)  MSVM
Iris 0.942±0.058 0.807±0.068 0.935±0.054 0.811±0.076 0.956±0.064 0.738±0.066 0.941±0.058
Wine 0.979±0.035 0.960±0.036 0.985±0.032 0.983±0.027 0.975±0.034 0.947±0.032 0.960±0.045
Glass 0.550±0.121 0.577±0.117 0.563±0.149 0.473±0.095 0.628±0.123 0.528±0.082 0.557±0.114
Vowel 0.916±0.028 0.727±0.054 0.707±0.064 0.356±0.075 0.853±0.054 0.399±0.051 0.664±0.077
Dna 0.884±0.075 0.856±0.037 0.862±0.065 0.873±0.034 0.842±0.125 0.652±0.041 0.906±0.025
Satimage 0.824±0.070 0.823±0.023 0.851±0.033 0.649±0.024 0.861±0.019 0.665±0.049 0.782±0.020
Usps 0.955±0.006 0.952±0.012 0.938±0.020 0.762±0.018 0.956±0.005 0.900±0.008 0.937±0.013
Pendigits 0.993±0.002 0.983±0.004 0.982±0.022 0.788±0.010 0.993±0.003 0.860±0.018 0.965±0.008
Letter 0.906±0.009 0.826±0.014 0.846±0.027 0.510±0.009 0.839±0.014 0.442±0.025 0.775±0.015
Shuttle 0.799±0.080 0.478±0.060 0.704±0.114 0.305±0.069 0.716±0.055 0.280±0.066 0.660±0.067
Tab.2  Detailed experimental results (mean±std) where the employed binary classifier is SVM
(a) Accuracy
Data set  OvO (Specific)  OvO (Original)  OvR (Specific)  OvR (Original)  ECOC (Specific)  ECOC (Original)  Softmax
Iris 0.947±0.053 0.953±0.055 0.933±0.070 0.893±0.084 0.907±0.105 0.707±0.064 0.933±0.070
Wine 0.983±0.027 0.983±0.027 0.983±0.027 0.989±0.023 0.972±0.040 0.972±0.040 0.983±0.027
Glass 0.659±0.093 0.607±0.117 0.696±0.127 0.575±0.093 0.668±0.127 0.552±0.093 0.603±0.108
Vowel 0.849±0.048 0.797±0.062 0.776±0.042 0.532±0.068 0.803±0.034 0.356±0.070 0.635±0.061
Dna 0.933±0.018 0.931±0.014 0.930±0.018 0.944±0.022 0.906±0.023 0.912±0.023 0.937±0.016
Satimage 0.900±0.013 0.868±0.020 0.890±0.013 0.839±0.019 0.893±0.015 0.805±0.025 0.860±0.021
Usps 0.965±0.006 0.965±0.007 0.948±0.007 0.951±0.010 0.960±0.007 0.914±0.012 0.953±0.008
Pendigits 0.990±0.004 0.979±0.005 0.987±0.004 0.943±0.008 0.989±0.003 0.869±0.015 0.959±0.008
Letter 0.911±0.012 0.835±0.013 0.862±0.013 0.718±0.015 0.883±0.011 0.445±0.022 0.767±0.014
Shuttle 0.996±0.001 0.966±0.003 0.993±0.002 0.929±0.003 0.989±0.003 0.854±0.069 0.966±0.002
(b) Average-F1
Data set  OvO (Specific)  OvO (Original)  OvR (Specific)  OvR (Original)  ECOC (Specific)  ECOC (Original)  Softmax
Iris 0.943±0.058 0.951±0.059 0.934±0.067 0.891±0.077 0.901±0.116 0.681±0.071 0.930±0.070
Wine 0.979±0.035 0.980±0.032 0.979±0.035 0.989±0.024 0.970±0.041 0.969±0.042 0.977±0.037
Glass 0.540±0.130 0.532±0.132 0.592±0.164 0.483±0.110 0.524±0.124 0.422±0.097 0.528±0.123
Vowel 0.849±0.042 0.771±0.066 0.771±0.051 0.495±0.078 0.796±0.051 0.313±0.074 0.599±0.061
Dna 0.923±0.022 0.921±0.016 0.920±0.023 0.936±0.024 0.892±0.029 0.904±0.024 0.928±0.019
Satimage 0.873±0.017 0.827±0.021 0.862±0.015 0.753±0.021 0.864±0.018 0.688±0.033 0.812±0.025
Usps 0.961±0.007 0.961±0.009 0.942±0.008 0.945±0.012 0.955±0.007 0.906±0.012 0.947±0.009
Pendigits 0.990±0.004 0.979±0.004 0.987±0.004 0.942±0.008 0.988±0.003 0.866±0.015 0.959±0.007
Letter 0.909±0.012 0.832±0.014 0.859±0.013 0.712±0.015 0.882±0.011 0.406±0.023 0.762±0.014
Shuttle 0.761±0.053 0.625±0.078 0.650±0.083 0.511±0.064 0.562±0.093 0.374±0.124 0.602±0.068
Tab.3  Detailed experimental results (mean±std) where the employed binary classifier is LR
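The ECOC columns in Tab.2 and Tab.3 come from the third decomposition strategy. A hedged sketch of such a variant, reusing the hypothetical fit_centers / to_specific_space helpers from the earlier snippet, is given below; the random dense coding matrix and Hamming-distance decoding are illustrative assumptions rather than the paper's actual code design.

```python
# Hypothetical ECOC variant with label-specific features, reusing fit_centers /
# to_specific_space from the earlier sketch; the coding matrix is a random
# dense code, which is an assumption, not the paper's code design.
import numpy as np
from sklearn.svm import LinearSVC


def train_ecoc(X, y, n_bits=15, r=0.1, seed=0):
    rng = np.random.default_rng(seed)
    classes = np.unique(y)
    code = rng.choice([-1, 1], size=(len(classes), n_bits))
    while (np.abs(code.sum(axis=0)) == len(classes)).any():  # avoid constant columns
        code = rng.choice([-1, 1], size=(len(classes), n_bits))
    models = []
    for b in range(n_bits):
        # Classes coded +1 form the positive side of this binary problem.
        y_bin = np.isin(y, classes[code[:, b] == 1]).astype(int)
        centers = fit_centers(X[y_bin == 1], X[y_bin == 0], r=r)
        models.append((centers, LinearSVC().fit(to_specific_space(X, centers), y_bin)))
    return classes, code, models


def predict_ecoc(classes, code, models, X):
    # Decode by Hamming distance between predicted bit strings and class codewords.
    bits = np.column_stack([2 * clf.predict(to_specific_space(X, centers)) - 1
                            for centers, clf in models])
    dists = (bits[:, None, :] != code[None, :, :]).sum(axis=2)
    return classes[np.argmin(dists, axis=1)]
```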
Decomposition strategy  SVM (Accuracy)  SVM (Average-F1)  LR (Accuracy)  LR (Average-F1)
OvO win[9.77e-03] win[1.95e-02] win[1.95e-02] win[3.71e-02]
OvR win[1.95e-03] win[5.86e-03] win[2.73e-02] win[2.73e-02]
ECOC win[1.95e-03] win[1.95e-03] win[7.81e-03] win[5.86e-03]
Tab.4  Wilcoxon signed-ranks test for “Specific” against “Original” (at the 0.05 significance level; p-values shown in brackets)
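The Wilcoxon signed-ranks test summarized in Tab.4 can be reproduced from the per-data-set results. For example, pairing the OvO “Specific” and “Original” accuracies from Tab.2(a) and testing them with SciPy should recover the 9.77e-03 entry for OvO/SVM/Accuracy:

```python
# Reproducing one entry of Tab.4: paired OvO accuracies from Tab.2(a),
# "Specific" vs. "Original", tested with SciPy's Wilcoxon signed-ranks test.
from scipy.stats import wilcoxon

specific = [0.947, 0.983, 0.640, 0.915, 0.902, 0.851, 0.960, 0.993, 0.908, 0.993]
original = [0.820, 0.966, 0.620, 0.750, 0.863, 0.864, 0.958, 0.983, 0.829, 0.882]
stat, p = wilcoxon(specific, original)  # exact two-sided test for n = 10
print(f"p = {p:.2e}")  # expected to match the 9.77e-03 entry
```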
Fig.1  Performance of the three decomposition-based MCC methods as the value of r increases from 0.01 to 0.3. (a) Iris; (b) Glass; (c) Satimage; (d) Usps; (e) Pendigits; (f) Letter
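A sweep matching this sensitivity analysis could look as follows, reusing the hypothetical train_ovr / predict_ovr helpers sketched earlier; X_train, y_train, X_test, y_test are assumed to hold one of the benchmark data sets.

```python
# Hypothetical parameter sweep mirroring Fig.1: evaluate the OvR variant while
# the cluster ratio r grows from 0.01 to 0.3.
import numpy as np

for r in np.round(np.arange(0.01, 0.31, 0.01), 2):
    models = train_ovr(X_train, y_train, r=r)
    accuracy = np.mean(predict_ovr(models, X_test) == y_test)
    print(f"r = {r:.2f}  accuracy = {accuracy:.3f}")
```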
(a) Accuracy
Data set OvO OvR ECOC MSVM
Iris 0.940/0.943±0.003/0.947 0.940/0.945±0.005/0.953 0.933/0.945±0.007/0.953 0.947±0.053
Wine 0.972/0.974±0.003/0.977 0.972/0.978±0.004/0.983 0.966/0.975±0.005/0.983 0.966±0.039
Glass 0.636/0.649±0.009/0.664 0.640/0.672±0.015/0.692 0.668/0.675±0.007/0.692 0.594±0.096
Vowel 0.869/0.889±0.013/0.911 0.701/0.714±0.012/0.742 0.852/0.862±0.008/0.879 0.703±0.071
Dna 0.875/0.905±0.016/0.924 0.896/0.910±0.008/0.926 0.782/0.858±0.030/0.887 0.916±0.023
Satimage 0.849/0.864±0.013/0.886 0.836/0.877±0.016/0.889 0.888/0.894±0.003/0.898 0.858±0.021
Usps 0.952/0.956±0.003/0.960 0.920/0.934±0.008/0.945 0.959/0.961±0.001/0.963 0.944±0.011
Pendigits 0.983/0.988±0.002/0.991 0.974/0.985±0.006/0.992 0.992/0.992±0.000/0.993 0.965±0.008
Letter 0.898/0.903±0.003/0.909 0.816/0.837±0.010/0.850 0.825/0.835±0.005/0.841 0.781±0.016
Shuttle 0.992/0.996±0.002/0.998 0.997/0.997±0.000/0.997 0.997/0.997±0.000/0.997 0.980±0.001
(b) Average-F1
Data set OvO OvR ECOC MSVM
Iris 0.933/0.938±0.005/0.944 0.935/0.942±0.007/0.952 0.928/0.941±0.008/0.952 0.941±0.058
Wine 0.968/0.970±0.003/0.974 0.969/0.974±0.004/0.979 0.963/0.972±0.005/0.980 0.960±0.045
Glass 0.539/0.556±0.009/0.568 0.529/0.562±0.023/0.598 0.553/0.573±0.014/0.598 0.557±0.114
Vowel 0.875/0.889±0.013/0.911 0.681/0.698±0.016/0.735 0.852/0.861±0.008/0.877 0.664±0.077
Dna 0.861/0.890±0.018/0.912 0.871/0.894±0.012/0.914 0.758/0.842±0.033/0.875 0.906±0.025
Satimage 0.816/0.832±0.012/0.854 0.802/0.845±0.018/0.861 0.858/0.866±0.005/0.873 0.782±0.020
Usps 0.946/0.951±0.003/0.956 0.916/0.930±0.008/0.941 0.954/0.956±0.001/0.958 0.937±0.013
Pendigits 0.983/0.988±0.002/0.991 0.973/0.985±0.006/0.992 0.992/0.992±0.000/0.993 0.965±0.008
Letter 0.895/0.902±0.004/0.907 0.825/0.842±0.008/0.853 0.827/0.836±0.004/0.841 0.775±0.015
Shuttle 0.783/0.815±0.016/0.845 0.718/0.745±0.019/0.778 0.666/0.697±0.019/0.722 0.660±0.067
Tab.5  Experimental results (minimum/mean±std/maximum) of the stability analysis where the employed binary classifier is SVM
(a) Accuracy
Data set OvO OvR ECOC Softmax
Iris 0.933/0.942±0.005/0.953 0.920/0.930±0.007/0.940 0.900/0.916±0.010/0.933 0.933±0.070
Wine 0.972/0.979±0.006/0.983 0.977/0.982±0.002/0.983 0.972/0.977±0.003/0.983 0.983±0.027
Glass 0.650/0.661±0.008/0.673 0.663/0.682±0.012/0.705 0.640/0.656±0.012/0.673 0.603±0.108
Vowel 0.850/0.859±0.005/0.869 0.763/0.785±0.011/0.801 0.782/0.791±0.009/0.807 0.635±0.061
Dna 0.931/0.936±0.003/0.941 0.926/0.929±0.002/0.932 0.903/0.908±0.003/0.912 0.937±0.016
Satimage 0.900/0.901±0.001/0.903 0.888/0.890±0.001/0.892 0.886/0.889±0.001/0.891 0.860±0.021
Usps 0.965/0.966±0.001/0.968 0.948/0.949±0.001/0.950 0.960/0.961±0.001/0.962 0.953±0.008
Pendigits 0.990/0.990±0.000/0.991 0.986/0.987±0.001/0.988 0.988/0.988±0.000/0.989 0.959±0.008
Letter 0.908/0.909±0.001/0.910 0.860/0.862±0.002/0.864 0.882/0.883±0.001/0.885 0.767±0.014
Shuttle 0.995/0.995±0.000/0.996 0.992/0.993±0.000/0.994 0.989/0.990±0.001/0.991 0.966±0.002
(b) Average-F1
Data set OvO OvR ECOC Softmax
Iris 0.933/0.939±0.006/0.952 0.918/0.929±0.008/0.941 0.897/0.912±0.010/0.931 0.930±0.070
Wine 0.967/0.974±0.006/0.981 0.974/0.978±0.002/0.979 0.969/0.974±0.003/0.980 0.977±0.037
Glass 0.537/0.547±0.008/0.558 0.553/0.584±0.019/0.624 0.530/0.552±0.017/0.576 0.528±0.123
Vowel 0.845/0.856±0.006/0.866 0.759/0.779±0.011/0.793 0.772/0.786±0.010/0.800 0.599±0.061
Dna 0.920/0.926±0.003/0.931 0.916/0.919±0.002/0.923 0.890/0.894±0.003/0.899 0.928±0.019
Satimage 0.873/0.875±0.001/0.877 0.860/0.862±0.001/0.865 0.856/0.860±0.002/0.862 0.812±0.025
Usps 0.961/0.963±0.001/0.965 0.942/0.943±0.001/0.944 0.955/0.957±0.001/0.958 0.947±0.009
Pendigits 0.990/0.990±0.000/0.990 0.986/0.987±0.001/0.988 0.988/0.988±0.000/0.989 0.959±0.007
Letter 0.907/0.908±0.001/0.909 0.857/0.860±0.002/0.862 0.880/0.882±0.001/0.883 0.762±0.014
Shuttle 0.746/0.753±0.007/0.763 0.622/0.650±0.014/0.665 0.529/0.543±0.010/0.561 0.602±0.068
Tab.6  Experimental results (minimum/mean±std/maximum) of the stability analysis where the employed binary classifier is LR
  
  
  
  