Unsupervised transfer subspace learning is one of the challenging and important topics in domain adaptation; it aims to classify unlabeled target data using source-domain information. Traditional transfer subspace learning methods often impose low-rank constraints, i.e., the trace norm, to preserve the structural information of data across domains. However, the trace norm is only a convex surrogate for the ideal low-rank constraint, and it may make the solutions deviate seriously from the original optima. In addition, traditional methods directly use the strict labels of the source domain, which makes it difficult to handle label noise. To address these problems, we propose a novel nonconvex and discriminative transfer subspace learning method, named NDTSL, which incorporates the Schatten-p norm and a soft label matrix. Specifically, the Schatten-p norm is imposed to approximate the ideal low-rank constraint and obtain a better low-rank representation. Then, we design a soft label matrix for the source domain to learn a more flexible classifier and enhance the discriminative ability on target data. Besides, owing to the nonconvexity of the Schatten-p norm, we design an efficient alternating algorithm based on the inexact augmented Lagrange multiplier (IALM) method to solve it. Finally, experimental results on several public transfer tasks demonstrate the effectiveness of NDTSL compared with several state-of-the-art methods.
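To make the rank-surrogate claim concrete, the following is a minimal sketch of the Schatten-p quantity used as a nonconvex rank surrogate, i.e., the sum of the p-th powers of the singular values. This is an illustration of the general idea only, not the NDTSL solver itself; the function name `schatten_p_surrogate` is ours. For p = 1 it reduces to the trace (nuclear) norm, and as p decreases toward 0 it moves closer to the true rank, which is why p < 1 gives a tighter low-rank approximation than the convex trace norm.

```python
import numpy as np

def schatten_p_surrogate(X, p):
    """Sum of the p-th powers of the singular values of X.

    p = 1 gives the trace (nuclear) norm; as p -> 0 each nonzero
    singular value contributes ~1, so the value approaches rank(X).
    """
    s = np.linalg.svd(X, compute_uv=False)
    return float(np.sum(s ** p))

# A rank-2 matrix with one large and one small singular value.
A = np.diag([1.0, 0.01])

# With p = 1 (trace norm) the small singular value barely registers,
# while p = 0.1 pulls the surrogate much closer to the true rank (2).
trace_norm = schatten_p_surrogate(A, 1.0)   # 1.01
schatten_01 = schatten_p_surrogate(A, 0.1)  # 1 + 0.01**0.1, roughly 1.63
```

The gap between the two values illustrates why the trace norm can seriously underweight small-but-nonzero singular values, which is the deviation the abstract refers to.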
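The soft label matrix can likewise be sketched with the standard label-relaxation construction: starting from one-hot labels Y, a nonnegative dragging matrix M enlarges the margin by pushing correct-class entries above 1 and wrong-class entries below 0. This is a generic illustration of the technique (in the spirit of regularized label relaxation regression), assuming a fixed M for demonstration; in practice M is learned jointly with the classifier.

```python
import numpy as np

def relaxed_targets(Y, M):
    """Soft label matrix T = Y + B * M (elementwise).

    B = 2Y - 1 encodes the dragging direction (+1 for the true class,
    -1 otherwise); M >= 0 controls how far each entry may move, so the
    regression targets are relaxed rather than strictly 0/1.
    """
    B = 2.0 * Y - 1.0
    return Y + B * np.maximum(M, 0.0)

# Two samples, two classes, one-hot labels.
Y = np.array([[1.0, 0.0],
              [0.0, 1.0]])

# Illustrative fixed dragging amount of 0.2 for every entry.
M = np.full_like(Y, 0.2)
T = relaxed_targets(Y, M)
# True-class entries become 1.2 and wrong-class entries -0.2,
# giving the classifier slack to absorb noisy source labels.
```

Because the targets are no longer forced to exactly 0/1, a mislabeled source sample distorts the regression less, which is the flexibility the abstract attributes to the soft label matrix.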