Frontiers of Computer Science  2025, Vol. 19 Issue (1): 191304   https://doi.org/10.1007/s11704-023-3272-9
Revisiting multi-dimensional classification from a dimension-wise perspective
Yi SHI1, Hanjia YE1, Dongliang MAN2,3, Xiaoxu HAN2,3, Dechuan ZHAN1, Yuan JIANG1
1. National Key Laboratory for Novel Software Technology, Nanjing University, Nanjing 210046, China
2. Department of Laboratory Medicine, The First Hospital of China Medical University, Shenyang 110001, China
3. National Clinical Research Center for Laboratory Medicine, The First Hospital of China Medical University, Shenyang 110001, China
Full text: PDF (13655 KB) | HTML
Abstract

Real-world objects exhibit intricate semantic properties that can be characterized from a multitude of perspectives, which necessitates models capable of discerning multiple patterns within the data while concurrently predicting several Labeling Dimensions (LDs), a task known as Multi-dimensional Classification (MDC). While the class-imbalance issue has been extensively investigated within the multi-class paradigm, its study in the MDC context has been limited owing to the imbalance shift phenomenon: whether a sample counts as a minority-class or majority-class instance becomes ambiguous when it belongs to a minority class in one LD and a majority class in another. Previous MDC methodologies predominantly emphasized instance-wise criteria, neglecting predictive capability from the dimension-wise perspective, i.e., the average classification performance across LDs. We argue for the significance of dimension-wise metrics in real-world MDC applications and introduce two such metrics. Furthermore, we observe imbalanced class distributions within each LD and propose a novel Imbalance-Aware fusion Model (IMAM) for the MDC problem. Specifically, we first decompose the task into multiple multi-class classification problems and train an imbalance-aware deep model for each LD separately. This straightforward approach performs well across LDs without sacrificing performance on instance-wise criteria. Subsequently, we treat the LD-wise models as multiple teachers and transfer their knowledge across all LDs into a unified student model. Experimental results on several real-world datasets demonstrate that IMAM excels in both instance-wise evaluations and the proposed dimension-wise metrics.
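To make the two-stage recipe concrete, the following is a minimal PyTorch-style sketch of stage 1, one imbalance-aware model per LD. The architecture, the logit-adjustment loss, and all names (LDModel, balanced_ce) are illustrative assumptions for exposition, not the authors' released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

n_classes = [4, 5, 2]   # classes per LD, e.g., the Med dataset (Tab.4)

class LDModel(nn.Module):
    """One stage-1 teacher: a feature extractor plus a classifier for one LD."""
    def __init__(self, in_dim, n_cls, feat_dim=64):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())
        self.head = nn.Linear(feat_dim, n_cls)

    def forward(self, x):
        feat = self.backbone(x)
        return feat, self.head(feat)

def balanced_ce(logits, y, class_counts, tau=1.0):
    # One standard imbalance-aware choice: shift logits by log class priors
    # (cf. balanced softmax). The supplementary tables below compare several
    # rebalancing strategies (CDT, DRW, BS); the exact per-LD choice may differ.
    prior = class_counts.float() / class_counts.sum()
    return F.cross_entropy(logits + tau * prior.log(), y)

# Stage 1: train one imbalance-aware teacher per LD on that LD's labels only.
teachers = [LDModel(in_dim=128, n_cls=k) for k in n_classes]
# Stage 2: distill all L teachers into a single student that predicts every LD
# (a sketch of that objective follows the lambda sensitivity tables below).
```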

Keywords: multi-dimensional classification; dimension perspective; class imbalance learning
Received: 2023-04-03      Published online: 2024-03-14
Corresponding Author(s): Hanjia YE   
Cite this article:
Yi SHI, Hanjia YE, Dongliang MAN, Xiaoxu HAN, Dechuan ZHAN, Yuan JIANG. Revisiting multi-dimensional classification from a dimension-wise perspective. Front. Comput. Sci., 2025, 19(1): 191304.
Link to this article:
https://academic.hep.com.cn/fcs/CN/10.1007/s11704-023-3272-9
https://academic.hep.com.cn/fcs/CN/Y2025/V19/I1/191304
Instance Accuracy (I-Acc): $\frac{1}{N}\sum_{i=1}^{N}[[\,n(i)=L\,]]$. The probability that the labels of all $L$ LDs are correctly predicted.

Hamming Accuracy (H-Acc): $\frac{1}{N}\sum_{i=1}^{N}\frac{n(i)}{L}$. The expected fraction of the $L$ LDs whose labels are correctly predicted; equivalently, the probability that the label of a randomly chosen LD is correct.

meanMacroF1 (M2F1): $\text{MacroPrecision}(j)=\frac{1}{K_j}\sum_{k=1}^{K_j}\frac{\sum_{i=1}^{N}[[\hat{y}_{ij}=y_{ij}=k]]}{\sum_{i=1}^{N}[[\hat{y}_{ij}=k]]}$, $\text{MacroRecall}(j)=\frac{1}{K_j}\sum_{k=1}^{K_j}\frac{\sum_{i=1}^{N}[[\hat{y}_{ij}=y_{ij}=k]]}{\sum_{i=1}^{N}[[y_{ij}=k]]}$, $\text{M}^2\text{F1}=\frac{1}{L}\sum_{j=1}^{L}\frac{2}{1/\text{MacroPrecision}(j)+1/\text{MacroRecall}(j)}$. The MacroF1, i.e., the harmonic mean of Macro-Precision and Macro-Recall, averaged across LDs.

meanMacroAUC (M2AUC): $\text{M}^2\text{AUC}=\frac{1}{L}\sum_{j=1}^{L}\frac{1}{K_j}\sum_{k=1}^{K_j}\frac{\sum_{i:y_{ij}=k}\sum_{i':y_{i'j}\neq k}[[\hat{f}_{ijk}>\hat{f}_{i'jk}]]}{\sum_{i=1}^{N}[[y_{ij}=k]]\cdot\sum_{i=1}^{N}[[y_{ij}\neq k]]}$. The one-vs-rest MacroAUC, averaged across LDs.

Tab.1  Instance-wise (I-Acc, H-Acc) and dimension-wise (M2F1, M2AUC) evaluation metrics. $N$ is the number of test instances, $L$ the number of LDs, $K_j$ the number of classes in the $j$-th LD, $n(i)$ the number of LDs correctly predicted for instance $i$, $\hat{y}_{ij}$ the predicted label and $\hat{f}_{ijk}$ the predicted score of class $k$ for instance $i$ in LD $j$, and $[[\cdot]]$ the indicator function
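For concreteness, here is a minimal NumPy sketch of three of these metrics, assuming y_true and y_pred are integer label arrays of shape (N, L) and n_classes lists K_j per LD; M2AUC is omitted since it additionally needs the per-class scores. This illustration is ours, not the authors' evaluation code.

```python
import numpy as np

def i_acc(y_true, y_pred):
    # Instance Accuracy: fraction of instances whose L labels are ALL correct.
    return np.mean(np.all(y_pred == y_true, axis=1))

def h_acc(y_true, y_pred):
    # Hamming Accuracy: average fraction n(i)/L of correctly predicted LDs.
    return np.mean(y_pred == y_true)

def m2f1(y_true, y_pred, n_classes):
    # meanMacroF1: per LD, the harmonic mean of macro-precision and
    # macro-recall as in Tab.1 (note this differs slightly from averaging
    # per-class F1 scores), then averaged over the L LDs.
    per_ld = []
    for j, K in enumerate(n_classes):
        prec, rec = [], []
        for k in range(K):
            tp = np.sum((y_pred[:, j] == k) & (y_true[:, j] == k))
            prec.append(tp / max(np.sum(y_pred[:, j] == k), 1))
            rec.append(tp / max(np.sum(y_true[:, j] == k), 1))
        P, R = np.mean(prec), np.mean(rec)
        per_ld.append(2 * P * R / (P + R) if P + R > 0 else 0.0)
    return float(np.mean(per_ld))
```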
Med H-Acc I-Acc M2F1 M2AUC
KRAM 91.11 74.62 72.38 94.33
LEFA 90.75 75.27 69.88 92.84
M-Head 90.66 77.29 76.78 97.00
Zappos H-Acc I-Acc M2F1 M2AUC
KRAM 67.23 50.93 44.46 78.69
LEFA 67.66 50.12 44.85 80.87
M-Head 66.41 47.89 43.24 87.35
Tab.2  Comparison of MDC methods on Med and Zappos (%)
Med H-Acc I-Acc M2F1 M2AUC
M-Head 90.66 77.29 76.78 95.20
M-Emb 91.83 78.08 78.66 96.98
M-Head_Imb 92.05 78.73 78.59 97.00
M-Emb_Imb 91.77 77.95 78.32 96.73
Zappos H-Acc I-Acc M2F1 M2AUC
M-Head 66.41 47.89 43.24 87.35
M-Emb 67.16 48.87 45.58 88.36
M-Head_Imb 68.26 49.76 45.75 88.75
M-Emb_Imb 66.95 48.12 45.45 88.00
Tab.3  Effect of imbalance-aware training on the deep baselines, Med and Zappos (%)
Name # of instances # of LDs # of classes per LD
Med 4567 3 4, 5, 2
Zappos [60] 40020 2 11, 13
Calligraphy [62] 23195 2 14, 5
Letter [63] 13634 3 26, 10, 9
Tab.4  Statistics of the benchmark datasets
Method Med Zappos Calligraphy Letter
(four columns per dataset, left to right: H-Acc, I-Acc, M2F1, M2AUC)
BR 90.63 74.62 70.09 92.69 66.54 47.89 43.35 85.87 80.65 72.32 78.71 96.62 71.00 44.25 67.37 92.07
ECC 87.91 68.35 51.25 84.69 60.74 37.86 35.44 69.53 74.27 71.94 71.19 87.38 65.45 40.06 59.34 86.53
KRAM 91.11 74.62 72.38 94.33 67.23 48.65 44.46 78.69 81.40 73.60 78.88 94.53 72.03 44.05 68.12 92.07
LEFA 90.75 75.27 69.88 92.84 67.66 48.36 44.85 80.87 80.42 72.85 77.06 91.86 72.33 42.80 66.88 91.75
M-Head 90.66 77.29 76.78 95.20 66.41 47.89 43.24 87.35 81.12 73.01 78.97 97.30 72.66 47.31 68.75 94.30
M-Emb 91.83 78.08 78.66 96.98 67.16 48.87 45.58 88.36 83.21 75.68 81.39 97.54 74.49 49.57 71.17 94.87
M-Head_Imb 92.05 78.73 78.59 95.60 68.26 49.76 45.75 88.75 82.86 75.24 81.49 97.54 75.33 50.73 71.46 94.60
M-Emb_Imb 91.77 77.95 78.32 96.73 66.95 48.12 45.45 88.00 82.22 74.56 80.40 97.47 73.52 49.33 70.58 94.37
IMAM 93.10 80.82 81.79 97.71 68.33 50.01 45.90 89.32 83.44 76.06 81.98 97.61 81.48 61.43 79.21 96.70
Tab.5  Overall comparison on all four datasets (%)
H-Acc I-Acc M2F1 M2AUC
M-Head_Imb 92.05 78.73 78.56 95.60
M-Emb 91.83 78.08 78.66 96.98
IMAM_1st 93.05 80.11 78.88 97.42
IMAM_CLS-Distill 92.77 80.53 78.95 97.63
IMAM_EMB-Distill 92.14 78.51 78.74 97.52
IMAM 93.10 80.82 81.79 97.71
Tab.6  Ablation of IMAM components (on Med, %)
Med H-Acc I-Acc M2F1 M2AUC
M-Head 90.66 77.29 76.78 95.20
M-Emb 91.83 78.08 78.66 96.98
M-Head_CDT 92.05 78.73 78.59 97.00
M-Emb_CDT 91.77 77.95 78.32 96.73
M-Head_DRW 91.46 78.02 76.93 95.39
M-Emb_DRW 91.63 77.87 78.34 96.21
M-Head_BS 91.01 77.93 77.03 95.37
M-Emb_BS 91.93 78.45 78.37 96.93
Zappos H-Acc I-Acc M2F1 M2AUC
M-Head 66.41 47.89 43.24 87.35
M-Emb 67.16 48.87 45.58 88.36
M-Head_CDT 68.26 49.76 45.75 88.75
M-Emb_CDT 66.95 48.12 45.45 88.00
M-Head_DRW 67.03 48.02 44.12 87.94
M-Emb_DRW 66.35 47.32 44.45 87.63
M-Head_BS 67.03 48.63 43.45 87.82
M-Emb_BS 67.35 49.26 45.34 87.49
  
λ1 H-Acc I-Acc M2F1 M2AUC
0.1 92.33 78.26 81.02 96.49
1 92.68 78.45 81.33 96.36
5 93.10 80.82 81.79 97.71
10 90.25 76.32 79.93 95.28
λ2 H-Acc I-Acc M2F1 M2AUC
0.1 90.03 77.93 78.36 95.26
1 93.10 80.82 81.79 97.71
5 91.25 78.61 78.02 95.39
10 88.04 74.37 73.68 92.47
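The λ1 and λ2 sensitivity tables above, together with the IMAM_CLS-Distill and IMAM_EMB-Distill ablations, suggest a student objective that combines hard-label cross-entropy with two distillation terms. Below is a hedged sketch of such an objective; which λ weights which term, the temperature T, and the exact loss forms are our assumptions, not the authors' released code.

```python
import torch
import torch.nn.functional as F

def imam_student_loss(feat, logits_per_ld, ys, t_feats, t_logits_per_ld,
                      lam1=5.0, lam2=1.0, T=4.0):
    """Hard-label CE + CLS-Distill (softened posteriors) + EMB-Distill
    (embedding matching), summed over the L labeling dimensions."""
    loss = feat.new_zeros(())
    for j, logits in enumerate(logits_per_ld):
        loss = loss + F.cross_entropy(logits, ys[:, j])       # ground truth
        # CLS-Distill: KL to LD-j teacher's temperature-softened posteriors
        loss = loss + lam1 * (T * T) * F.kl_div(
            F.log_softmax(logits / T, dim=1),
            F.softmax(t_logits_per_ld[j].detach() / T, dim=1),
            reduction="batchmean")
        # EMB-Distill: pull the shared student embedding toward teacher j's
        loss = loss + lam2 * F.mse_loss(feat, t_feats[j].detach())
    return loss
```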
  
Zappos H-Acc I-Acc M2F1 M2AUC
IMAM_1st 66.86 47.41 45.72 88.31
IMAM_CLS-Distill 68.13 49.32 45.24 88.66
IMAM_EMB-Distill 67.95 49.20 44.27 88.34
IMAM 68.33 50.01 45.90 89.32
Letter H-Acc I-Acc M2F1 M2AUC
IMAM_1st 86.48 67.16 85.63 96.72
IMAM_CLS-Distill 80.29 61.28 78.34 96.20
IMAM_EMB-Distill 80.74 61.35 78.39 96.12
IMAM 81.48 61.43 79.21 96.70
  
Epoch H-Acc I-Acc M2F1 M2AUC
200 91.91 80.39 79.97 97.64
300 93.08 80.97 79.09 97.56
400 93.10 80.82 81.79 97.71
500 92.96 80.82 80.46 97.69
  
Zappos H-Acc I-Acc M2F1 M2AUC
KRAM + M-Head 67.23 48.65 44.46 78.69
KRAM + M-Head_Imb 67.68 49.04 45.12 80.33
LEFA + M-Head 67.66 48.36 44.85 80.87
LEFA + M-Head_Imb 67.78 49.07 44.95 80.94
IMAM 68.33 50.01 45.90 89.32
Letter H-Acc I-Acc M2F1 M2AUC
KRAM + M-Head 72.03 44.05 68.12 92.07
KRAM + M-Head_Imb 72.59 45.35 68.40 92.64
LEFA + M-Head 72.33 42.80 66.88 91.75
LEFA + M-Head_Imb 73.30 43.47 67.63 92.82
IMAM 81.48 61.43 79.21 96.70
  
1 Zhang C, Yankov D, Wu C T, Shapiro S, Hong J, Wu W. What is that building?: an end-to-end system for building recognition from streetside images. In: Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. 2020, 2425−2433
2 Sahoo D, Wang H, Shu K, Wu X, Le H, Achananuparp P, Lim E P, Hoi S C H. FoodAI: food image recognition via deep learning for smart food logging. In: Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. 2019, 2260−2268
3 Borisyuk F, Gordo A, Sivakumar V. Rosetta: large scale system for text detection and recognition in images. In: Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. 2018, 71−79
4 Yang X, Zeng Z, Teo S G, Wang L, Chandrasekhar V, Hoi S. Deep learning for practical image recognition: case study on Kaggle competitions. In: Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. 2018, 923−931
5 Wang Z, Long C, Cong G, Ju C. Effective and efficient sports play retrieval with deep representation learning. In: Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. 2019, 499−509
6 Huang J T, Sharma A, Sun S, Xia L, Zhang D, Pronin P, Padmanabhan J, Ottaviano G, Yang L. Embedding-based retrieval in Facebook search. In: Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. 2020, 2553−2561
7 Jia X, Zhao H, Lin Z, Kale A, Kumar V. Personalized image retrieval with sparse graph representation learning. In: Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. 2020, 2735−2743
8 Tan R, Vasileva M I, Saenko K, Plummer B A. Learning similarity conditions without explicit supervision. In: Proceedings of 2019 IEEE/CVF International Conference on Computer Vision. 2019, 10373−10382
9 Lin Y L, Tran S D, Davis L S. Fashion outfit complementary item retrieval. In: Proceedings of 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2020, 3311−3319
10 Kim D, Saito K, Mishra S, Sclaroff S, Saenko K, Plummer B A. Self-supervised visual attribute learning for fashion compatibility. In: Proceedings of 2021 IEEE/CVF International Conference on Computer Vision. 2021, 1057−1066
11 Wang Z, Wang Y, Feng B, Mudigere D, Muthiah B, Ding Y. EL-Rec: efficient large-scale recommendation model training via tensor-train embedding table. In: Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis. 2022, 1−14
12 Kononenko I. Machine learning for medical diagnosis: history, state of the art and perspective. Artificial Intelligence in Medicine, 2001, 23(1): 89−109
13 Amato F, López A, Peña-Méndez E M, Vaňhara P, Hampl A, Havel J. Artificial neural networks in medical diagnosis. Journal of Applied Biomedicine, 2013, 11(2): 47−58
14 Turnbull D, Barrington L, Torres D, Lanckriet G. Semantic annotation and retrieval of music and sound effects. IEEE Transactions on Audio, Speech, and Language Processing, 2008, 16(2): 467−476
15 Serafino F, Pio G, Ceci M, Malerba D. Hierarchical multidimensional classification of web documents with MultiWebClass. In: Proceedings of the 18th International Conference on Discovery Science. 2015, 236−250
16 Shatkay H, Pan F, Rzhetsky A, Wilbur W J. Multi-dimensional classification of biomedical text: toward automated, practical provision of high-utility text to diverse users. Bioinformatics, 2008, 24(18): 2086−2093
17 Barutcuoglu Z, Schapire R E, Troyanskaya O G. Hierarchical multi-label prediction of gene function. Bioinformatics, 2006, 22(7): 830−836
18 Feng B, Wang Y, Ding Y. SAGA: sparse adversarial attack on EEG-based brain computer interface. In: Proceedings of 2021 IEEE International Conference on Acoustics, Speech and Signal Processing. 2021, 975−979
19 Jia B B, Zhang M L. Multi-dimensional classification via sparse label encoding. In: Proceedings of the 38th International Conference on Machine Learning. 2021, 4917−4926
20 Wang H, Chen C, Liu W, Chen K, Hu T, Chen G. Incorporating label embedding and feature augmentation for multi-dimensional classification. In: Proceedings of the 34th AAAI Conference on Artificial Intelligence. 2020, 6178−6185
21 Ma Z, Chen S. Multi-dimensional classification via a metric approach. Neurocomputing, 2018, 275: 1121−1131
22 Read J, Pfahringer B, Holmes G, Frank E. Classifier chains for multi-label classification. Machine Learning, 2011, 85(3): 333−359
23 Zhang M L, Zhou Z H. A review on multi-label learning algorithms. IEEE Transactions on Knowledge and Data Engineering, 2014, 26(8): 1819−1837
24 Jia B B, Zhang M L. Multi-dimensional classification via kNN feature augmentation. Pattern Recognition, 2020, 106: 107423
25 Wu T F, Lin C J, Weng R C. Probability estimates for multi-class classification by pairwise coupling. Journal of Machine Learning Research, 2004, 5: 975−1005
26 Zhang M L, Zhou Z H. A k-nearest neighbor based algorithm for multi-label classification. In: Proceedings of 2005 IEEE International Conference on Granular Computing. 2005, 718−721
27 Tang L, Rajan S, Narayanan V K. Large scale multi-label classification via MetaLabeler. In: Proceedings of the 18th International Conference on World Wide Web. 2009, 211−220
28 Wu J H, Wu X, Chen Q G, Hu Y, Zhang M L. Feature-induced manifold disambiguation for multi-view partial multi-label learning. In: Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. 2020, 557−565
29 Jia B B, Zhang M L. Multi-dimensional classification via selective feature augmentation. Machine Intelligence Research, 2022, 19(1): 38−51
30 Jia B B, Zhang M L. Multi-dimensional classification via stacked dependency exploitation. Science China Information Sciences, 2020, 63(12): 222102
31 Zhang Y, Zhao P, Cao J, Ma W, Huang J, Wu Q, Tan M. Online adaptive asymmetric active learning for budgeted imbalanced data. In: Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. 2018, 2768−2777
32 Wang J, Zhang M L. Towards mitigating the class-imbalance problem for partial label learning. In: Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. 2018, 2427−2436
33 Marchant N G, Rubinstein B I P. Needle in a haystack: label-efficient evaluation under extreme class imbalance. In: Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining. 2021, 1180−1190
34 Feng B, Wang Y, Li G, Xie Y, Ding Y. Palleon: a runtime system for efficient video processing toward dynamic class skew. In: Proceedings of 2021 USENIX Annual Technical Conference. 2021, 427−441
35 Buda M, Maki A, Mazurowski M A. A systematic study of the class imbalance problem in convolutional neural networks. Neural Networks, 2018, 106: 249−259
36 Liu Z, Miao Z, Zhan X, Wang J, Gong B, Yu S X. Large-scale long-tailed recognition in an open world. In: Proceedings of 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2019, 2537−2546
37 Shen L, Lin Z, Huang Q. Relay backpropagation for effective learning of deep convolutional neural networks. In: Proceedings of the 14th European Conference on Computer Vision. 2016, 467−482
38 Wang S, Liu W, Wu J, Cao L, Meng Q, Kennedy P J. Training deep neural networks on imbalanced data sets. In: Proceedings of 2016 International Joint Conference on Neural Networks. 2016, 4368−4374
39 Cui Y, Jia M, Lin T Y, Song Y, Belongie S. Class-balanced loss based on effective number of samples. In: Proceedings of 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2019, 9268−9277
40 Cao K, Wei C, Gaidon A, Aréchiga N, Ma T. Learning imbalanced datasets with label-distribution-aware margin loss. In: Proceedings of the 33rd International Conference on Neural Information Processing Systems. 2019, 140
41 Ye H J, Chen H Y, Zhan D C, Chao W L. Identifying and compensating for feature deviation in imbalanced deep learning. 2020, arXiv preprint arXiv: 2001.01385
42 Ren J, Yu C, Sheng S, Ma X, Zhao H, Yi S, Li H. Balanced meta-softmax for long-tailed visual recognition. In: Proceedings of the 34th International Conference on Neural Information Processing Systems. 2020, 351
43 Zhou Z H. Learnware: on the future of machine learning. Frontiers of Computer Science, 2016, 10(4): 589−590
44 Kuzborskij I, Orabona F. Fast rates by transferring from auxiliary hypotheses. Machine Learning, 2017, 106(2): 171−195
45 Du S S, Koushik J, Singh A, Póczos B. Hypothesis transfer learning via transformation functions. In: Proceedings of the 31st International Conference on Neural Information Processing Systems. 2017, 574−584
46 Li X, Grandvalet Y, Davoine F. Explicit inductive bias for transfer learning with convolutional networks. In: Proceedings of the 35th International Conference on Machine Learning. 2018, 2830−2839
47 Srinivas S, Fleuret F. Knowledge transfer with Jacobian matching. In: Proceedings of the 35th International Conference on Machine Learning. 2018, 4730−4738
48 Ye H J, Zhan D C, Jiang Y, Zhou Z H. Rectify heterogeneous models with semantic mapping. In: Proceedings of the 35th International Conference on Machine Learning. 2018, 1904−1913
49 Ye H J, Zhan D C, Jiang Y, Zhou Z H. Heterogeneous few-shot model rectification with semantic mapping. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2021, 43(11): 3878−3891
50 Hinton G, Vinyals O, Dean J. Distilling the knowledge in a neural network. 2015, arXiv preprint arXiv: 1503.02531
51 Phuong M, Lampert C. Towards understanding knowledge distillation. In: Proceedings of the 36th International Conference on Machine Learning. 2019, 5142−5151
52 Gotmare A, Keskar N S, Xiong C, Socher R. A closer look at deep learning heuristics: learning rate restarts, warmup and distillation. In: Proceedings of the 7th International Conference on Learning Representations. 2019
53 Heo B, Kim J, Yun S, Park H, Kwak N, Choi J Y. A comprehensive overhaul of feature distillation. In: Proceedings of 2019 IEEE/CVF International Conference on Computer Vision. 2019, 1921−1930
54 Cho J H, Hariharan B. On the efficacy of knowledge distillation. In: Proceedings of 2019 IEEE/CVF International Conference on Computer Vision. 2019, 4794−4802
55 Sau B B, Balasubramanian V N. Deep model compression: distilling knowledge from noisy teachers. 2016, arXiv preprint arXiv: 1610.09650
56 Wang Q, Zhan L, Thompson P, Zhou J. Multimodal learning with incomplete modalities by knowledge distillation. In: Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. 2020, 1828−1838
57 Liang S, Gong M, Pei J, Shou L, Zuo W, Zuo X, Jiang D. Reinforced iterative knowledge distillation for cross-lingual named entity recognition. In: Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining. 2021, 3231−3239
58 Zhang W, Jiang Y, Li Y, Sheng Z, Shen Y, Miao X, Wang L, Yang Z, Cui B. ROD: reception-aware online distillation for sparse graphs. In: Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining. 2021, 2232−2242
59 Xu C, Li Q, Ge J, Gao J, Yang X, Pei C, Sun F, Wu J, Sun H, Ou W. Privileged features distillation at Taobao recommendations. In: Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. 2020, 2590−2598
60 Yu A, Grauman K. Fine-grained visual comparisons with local learning. In: Proceedings of 2014 IEEE Conference on Computer Vision and Pattern Recognition. 2014, 192−199
61 He K, Zhang X, Ren S, Sun J. Deep residual learning for image recognition. In: Proceedings of 2016 IEEE Conference on Computer Vision and Pattern Recognition. 2016, 770−778
62 Liu C, Zhao P, Huang S J, Jiang Y, Zhou Z H. Dual set multi-label learning. In: Proceedings of the 32nd AAAI Conference on Artificial Intelligence. 2018, 3635−3642
63 Ge Y, Abu-El-Haija S, Xin G, Itti L. Zero-shot synthesis with group-supervised learning. In: Proceedings of the 9th International Conference on Learning Representations. 2021
64 van der Maaten L, Hinton G. Visualizing data using t-SNE. Journal of Machine Learning Research, 2008, 9(86): 2579−2605
Supplementary material: FCS-23272-OF-YS_suppl_1