Identifying useful learnwares via learnable specification
Zhi-Yu SHEN1, Ming LI1,2
1. National Key Laboratory for Novel Software Technology, Nanjing University, Nanjing 210023, China
2. School of Artificial Intelligence, Nanjing University, Nanjing 210023, China
The learnware paradigm has been proposed as a new way to reuse models from a market of various well-trained models, relieving users of the burden of training a new model from scratch. A learnware consists of a well-trained model and a specification that explains the purpose or specialty of the model without revealing its data. By matching specifications, the market can identify the learnwares most useful for a user's task. Prior art generates the specification with a reduced kernel mean embedding approach; however, such a specification is defined by a pre-designed kernel function and therefore lacks flexibility. In this paper, we propose learning specifications directly from data and introduce a novel neural network, SpecNet, for this purpose. SpecNet accepts an unordered dataset as input and produces a specification vector in a latent space. Because the specifications are learned jointly across diverse tasks, they are flexible and efficient, making them particularly well suited to learnware identification. Empirical studies validate the efficacy of the proposed approach.
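To make the matching step concrete, here is a minimal sketch of specification matching as nearest-neighbor search in the latent specification space; the function and variable names are hypothetical, not from the paper.

```python
import numpy as np

def rank_learnwares(user_spec: np.ndarray, market_specs: np.ndarray) -> np.ndarray:
    """Rank market learnwares by closeness of their specification
    vectors to the user's specification (smaller distance = better).

    user_spec:    (d,) specification vector of the user's task
    market_specs: (m, d) specification vectors of the m learnwares
    Returns learnware indices sorted from most to least similar.
    """
    dists = np.linalg.norm(market_specs - user_spec, axis=1)
    return np.argsort(dists)
```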
Fig.1 The learnware market and the two stages involved
Fig.2 Difference between our approach and the kernel-based approach. In this example, the ground truth is that one developer's task is more similar to the user's task than the other's, so that developer's model is more suitable for reuse on the user's task. With kernel-based specification assignment, however, the distance metric is fixed, and not every kernel yields satisfactory results. Our approach instead learns the desired distance metric from data
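For contrast, the kernel-based baseline measures task similarity with a fixed, pre-designed kernel. Below is a minimal sketch of such a fixed metric, assuming an empirical (squared) maximum mean discrepancy under an RBF kernel; the bandwidth gamma is an illustrative assumption, not a value from the paper.

```python
import numpy as np

def rbf_kernel(X: np.ndarray, Y: np.ndarray, gamma: float = 1.0) -> np.ndarray:
    # k(x, y) = exp(-gamma * ||x - y||^2), computed for all pairs
    sq = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2 * X @ Y.T
    return np.exp(-gamma * sq)

def mmd_distance(X: np.ndarray, Y: np.ndarray, gamma: float = 1.0) -> float:
    """Biased empirical estimate of squared MMD between two datasets
    under a fixed RBF kernel. Because gamma is chosen in advance, the
    metric cannot adapt to the tasks at hand -- the rigidity Fig. 2
    points out."""
    return (rbf_kernel(X, X, gamma).mean()
            + rbf_kernel(Y, Y, gamma).mean()
            - 2 * rbf_kernel(X, Y, gamma).mean())
```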
Fig.3 The overall architecture of the proposed SpecNet. First, the specification generator takes each whole dataset as input and outputs a specification vector. The generated specification vectors are then fed into the classifier for task-indicator prediction, while a triplet ranking loss gauges the similarity between distinct tasks; both signals make the specification learnable by backpropagation. The detailed structure of the specification generator is shown in the lower dashed box. Note that an input "instance" of the specification generator is an unordered set of vectors
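The caption above describes the generator as a permutation-invariant set encoder trained with a classification loss and a triplet ranking loss. Below is a simplified PyTorch sketch under those constraints; the layer sizes, max-pooling choice, and use of nn.TripletMarginLoss are our assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpecGenerator(nn.Module):
    """Permutation-invariant specification generator (PointNet-style):
    a shared per-instance MLP followed by max pooling over the set."""
    def __init__(self, in_dim: int, spec_dim: int = 128):
        super().__init__()
        self.phi = nn.Sequential(           # applied to every instance
            nn.Linear(in_dim, 256), nn.ReLU(),
            nn.Linear(256, spec_dim),
        )

    def forward(self, dataset: torch.Tensor) -> torch.Tensor:
        # dataset: (n_instances, in_dim) -- an unordered set of vectors
        h = self.phi(dataset)               # (n_instances, spec_dim)
        return h.max(dim=0).values          # order-independent pooling

# Auxiliary heads used during training (sizes are illustrative):
generator = SpecGenerator(in_dim=512)
classifier = nn.Linear(128, 40)             # task-indicator prediction
triplet = nn.TripletMarginLoss(margin=1.0)

def training_losses(anchor_set, positive_set, negative_set, task_label):
    """task_label is a shape-(1,) long tensor. Both losses are
    differentiable w.r.t. the generator, so the specification itself
    is learned by backpropagation."""
    a = generator(anchor_set)
    p = generator(positive_set)
    n = generator(negative_set)
    cls_loss = F.cross_entropy(classifier(a).unsqueeze(0), task_label)
    rank_loss = triplet(a.unsqueeze(0), p.unsqueeze(0), n.unsqueeze(0))
    return cls_loss + rank_loss
```

Max pooling is one common choice for the symmetric aggregation; any permutation-invariant pooling (e.g., mean) would preserve the set-input property.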
Method  | 200   | 400   | 600   | 800   | 1000
--------|-------|-------|-------|-------|------
Random  | 15.62 | 14.93 | 15.27 | 14.69 | 12.98
RKME    | 40.83 | 44.65 | 45.37 | 47.70 | 50.24
SpecNet | 73.24 | 74.37 | 75.48 | 75.63 | 76.05
Best    | 77.88 | 79.32 | 79.85 | 80.11 | 80.36

Tab.1 Average accuracy (%) of the reused model on users' tasks (Tiny-ImageNet). Columns give the market set size, ranging from 200 to 1000
Method  | 200   | 400   | 600   | 800   | 1000
--------|-------|-------|-------|-------|------
Random  | 17.19 | 16.11 | 17.10 | 15.59 | 14.08
RKME    | 41.23 | 42.43 | 44.74 | 45.47 | 47.96
SpecNet | 78.64 | 78.75 | 80.20 | 81.38 | 81.79
Best    | 81.10 | 82.19 | 83.54 | 83.71 | 84.92

Tab.2 Average accuracy (%) of the reused model on users' tasks (CIFAR-10). Columns give the market set size, ranging from 200 to 1000
Fig.4 Top-k hit rate of the retrieved models on users' tasks for the Tiny-ImageNet dataset. Higher values indicate better performance; our approach achieves the highest top-k hit rate in all settings. Panels (a)–(e) show market sizes of 200, 400, 600, 800, and 1000, respectively
Fig.5 Top-k hit rate of the retrieved models on users' tasks for the CIFAR-10 dataset. Higher values indicate better performance; our approach achieves the highest top-k hit rate in all settings. Panels (a)–(e) show market sizes of 200, 400, 600, 800, and 1000, respectively
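Under our reading of this metric, a "hit" at k means the ground-truth best model appears among the k retrieved learnwares; a small hypothetical helper:

```python
import numpy as np

def top_k_hit_rate(rankings: np.ndarray, best_idx: np.ndarray, k: int) -> float:
    """Fraction of user tasks for which the truly best model in the
    market appears among the top-k retrieved learnwares.

    rankings: (n_tasks, m) learnware indices sorted best-first per task
    best_idx: (n_tasks,) index of the ground-truth best model per task
    """
    hits = [best_idx[t] in rankings[t, :k] for t in range(len(best_idx))]
    return float(np.mean(hits))
```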
Method  | In-domain (10) | In-domain (20) | In-domain (30) | In-domain (40) | Out-of-domain
--------|----------------|----------------|----------------|----------------|--------------
Random  | 6.9            | 6.1            | 6.6            | 7.0            | 8.2
RKME    | 46.4           | 45.7           | 48.5           | 45.3           | 37.6
SpecNet | 60.5           | 62.3           | 61.7           | 62.9           | 53.3

Tab.3 Average reuse accuracy (%) in the two scenarios. In-domain columns are grouped by the number of unknown classes (10–40). The label space of the pre-training set and the label space of the market set are disjoint
[1] Zhou Z H. Learnware: on the future of machine learning. Frontiers of Computer Science, 2016, 10(4): 589–590
[2] Tan P, Tan Z H, Jiang Y, Zhou Z H. Towards enabling learnware to handle heterogeneous feature spaces. Machine Learning, 2024, 113(4): 1839–1860
[3] Zhang Y J, Yan Y H, Zhao P, Zhou Z H. Towards enabling learnware to handle unseen jobs. In: Proceedings of the 35th AAAI Conference on Artificial Intelligence. 2021, 10964−10972
[4] Guo L Z, Zhou Z, Li Y F, Zhou Z H. Identifying useful learnwares for heterogeneous label spaces. In: Proceedings of the 40th International Conference on Machine Learning. 2023, 12122−12131
[5] Hoffman J, Tzeng E, Park T, Zhu J Y, Isola P, Saenko K, Efros A, Darrell T. CyCADA: cycle-consistent adversarial domain adaptation. In: Proceedings of the 35th International Conference on Machine Learning. 2018, 1994−2003
[6] Ben-David S, Blitzer J, Crammer K, Kulesza A, Pereira F, Vaughan J W. A theory of learning from different domains. Machine Learning, 2010, 79(1−2): 151−175
[7] Ganin Y, Lempitsky V. Unsupervised domain adaptation by backpropagation. In: Proceedings of the 32nd International Conference on Machine Learning. 2015, 1180−1189
[8] Kumar A, Ma T, Liang P. Understanding self-training for gradual domain adaptation. In: Proceedings of the 37th International Conference on Machine Learning. 2020, 507
[9] Luo Y, Wang Z, Huang Z, Baktashmotlagh M. Progressive graph learning for open-set domain adaptation. In: Proceedings of the 37th International Conference on Machine Learning. 2020, 600
[10] Wu X Z, Xu W, Liu S, Zhou Z H. Model reuse with reduced kernel mean embedding specification. IEEE Transactions on Knowledge and Data Engineering, 2023, 35(1): 699–710
[11] Smola A, Gretton A, Song L, Schölkopf B. A Hilbert space embedding for distributions. In: Proceedings of the 18th International Conference on Algorithmic Learning Theory. 2007, 13−31
[12] Ding Y X, Zhou Z H. Boosting-based reliable model reuse. In: Proceedings of the 12th Asian Conference on Machine Learning. 2020, 145−160
[13] Ben-David S, Blitzer J, Crammer K, Pereira F. Analysis of representations for domain adaptation. In: Proceedings of the 19th International Conference on Neural Information Processing Systems. 2006, 137−144
[14] Germain P, Habrard A, Laviolette F, Morvant E. A new PAC-Bayesian perspective on domain adaptation. In: Proceedings of the 33rd International Conference on Machine Learning. 2016, 859−868
[15] Liu M Y, Tuzel O. Coupled generative adversarial networks. In: Proceedings of the 30th International Conference on Neural Information Processing Systems. 2016, 469−477
[16] Chang W, Shi Y, Tuan H D, Wang J. Unified optimal transport framework for universal domain adaptation. In: Proceedings of the 36th International Conference on Neural Information Processing Systems. 2022, 2140
[17] Zhu Y, Wu X, Qiang J, Yuan Y, Li Y. Representation learning via an integrated autoencoder for unsupervised domain adaptation. Frontiers of Computer Science, 2023, 17(5): 175334
[18] Pan S J, Yang Q. A survey on transfer learning. IEEE Transactions on Knowledge and Data Engineering, 2010, 22(10): 1345–1359
[19] Ding Y X, Wu X Z, Zhou K, Zhou Z H. Pre-trained model reusability evaluation for small-data transfer learning. In: Proceedings of the 36th International Conference on Neural Information Processing Systems. 2022, 2710
[20] Wang B, Mendez J A, Cai M B, Eaton E. Transfer learning via minimizing the performance gap between domains. In: Proceedings of the 33rd International Conference on Neural Information Processing Systems. 2019, 955
[21] Houlsby N, Giurgiu A, Jastrzebski S, Morrone B, de Laroussilhe Q, Gesmundo A, Attariyan M, Gelly S. Parameter-efficient transfer learning for NLP. In: Proceedings of the 36th International Conference on Machine Learning. 2019, 2790−2799
[22] Yang Y, Guo J, Li G, Li L, Li W, Yang J. Alignment efficient image-sentence retrieval considering transferable cross-modal representation learning. Frontiers of Computer Science, 2024, 18(1): 181335
[23] Wang Z, Dai Z, Póczos B, Carbonell J. Characterizing and avoiding negative transfer. In: Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2019, 11293−11302
[24] Maturana D, Scherer S. VoxNet: a 3D convolutional neural network for real-time object recognition. In: Proceedings of the 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems. 2015, 922−928
[25] Qi C R, Su H, Nießner M, Dai A, Yan M, Guibas L J. Volumetric and multi-view CNNs for object classification on 3D data. In: Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition. 2016, 5648−5656
[26] Wu Z, Song S, Khosla A, Yu F, Zhang L, Tang X, Xiao J. 3D ShapeNets: a deep representation for volumetric shapes. In: Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition. 2015, 1912−1920
[27] Vinyals O, Bengio S, Kudlur M. Order matters: sequence to sequence for sets. In: Proceedings of the 4th International Conference on Learning Representations. 2016
[28] Qi C R, Su H, Mo K, Guibas L J. PointNet: deep learning on point sets for 3D classification and segmentation. In: Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition. 2017, 77−85
[29] Le Y, Yang X. Tiny ImageNet visual recognition challenge. Stanford CS231N course report, 2015
[30] Krizhevsky A. Learning multiple layers of features from tiny images. Technical Report TR-2009. Toronto: University of Toronto, 2009