Frontiers of Computer Science

Frontiers of Computer Science  2024, Vol. 18 Issue (4): 184316   https://doi.org/10.1007/s11704-023-2441-1
Representation learning: serial-autoencoder for personalized recommendation
Yi ZHU1,2,3, Yishuai GENG1, Yun LI1(), Jipeng QIANG1, Xindong WU2,3
1. School of Information Engineering, Yangzhou University, Yangzhou 225127, China
2. Key Laboratory of Knowledge Engineering with Big Data (Ministry of Education of the People’s Republic of China), Hefei University of Technology, Hefei 230009, China
3. School of Computer Science and Information Engineering, Hefei University of Technology, Hefei 230009, China
Abstract

Personalized recommendation has become a research hotspot for addressing information overload, yet generating effective recommendations from sparse data remains a challenge. Recently, auxiliary information has been widely used to alleviate data sparsity, but most models that exploit it are linear and have limited expressiveness. Owing to their strengths in feature extraction and their independence from labeled data, autoencoder-based methods have become quite popular. However, most existing autoencoder-based methods discard the reconstruction of auxiliary information, which poses substantial challenges for better representation learning and model scalability. To address these problems, we propose the Serial-Autoencoder for Personalized Recommendation (SAPR), which aims to reduce the loss of critical information and enhance the learning of feature representations. Specifically, we first combine the original rating matrix and item attribute features and feed them into the first autoencoder to generate a higher-level representation of the input. Second, a second autoencoder enhances the reconstruction of this representation to produce the prediction rating matrix, and the output rating information is used for recommendation prediction. Extensive experiments on the MovieTweetings and MovieLens datasets verify the effectiveness of SAPR compared with state-of-the-art models.
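
The serial design sketched in the abstract (a first autoencoder over ratings concatenated with item attributes, followed by a second autoencoder that refines the rating reconstruction) can be pictured with a short, hedged code sketch. The snippet below is only a minimal reading of that description, assuming PyTorch, fully connected layers, sigmoid activations, and illustrative hidden sizes; it is not the authors' published implementation.

```python
# Minimal, illustrative sketch of a serial (stacked) autoencoder for rating
# prediction. Assumptions: PyTorch, dense layers, sigmoid activations, and
# per-instance concatenation of a rating vector with attribute features.
import torch
import torch.nn as nn

class SerialAutoencoder(nn.Module):
    def __init__(self, n_items: int, attr_dim: int, hidden1: int = 512, hidden2: int = 256):
        super().__init__()
        in_dim = n_items + attr_dim  # rating vector concatenated with attribute features
        # First autoencoder: encodes the concatenated input into a higher-level representation.
        self.enc1 = nn.Sequential(nn.Linear(in_dim, hidden1), nn.Sigmoid())
        self.dec1 = nn.Sequential(nn.Linear(hidden1, in_dim), nn.Sigmoid())
        # Second autoencoder: refines the rating part of the first reconstruction.
        self.enc2 = nn.Sequential(nn.Linear(n_items, hidden2), nn.Sigmoid())
        self.dec2 = nn.Sequential(nn.Linear(hidden2, n_items), nn.Sigmoid())
        self.n_items = n_items

    def forward(self, ratings: torch.Tensor, attrs: torch.Tensor):
        x = torch.cat([ratings, attrs], dim=-1)      # combine ratings and auxiliary features
        recon1 = self.dec1(self.enc1(x))             # first-stage reconstruction
        rating_part = recon1[..., : self.n_items]    # keep only the rating slice of the output
        recon2 = self.dec2(self.enc2(rating_part))   # second-stage refined rating prediction
        return recon1, recon2

# Toy usage: 4 instances, 10 items, 6-dimensional attribute features.
model = SerialAutoencoder(n_items=10, attr_dim=6)
r = torch.rand(4, 10)
a = torch.rand(4, 6)
recon1, pred_ratings = model(r, a)
print(pred_ratings.shape)  # torch.Size([4, 10])
```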

Keywords: personalized recommendation; autoencoder; representation learning; collaborative filtering
Received: 2022-07-10      Published online: 2023-06-05
Corresponding Author(s): Yun LI   
Cite this article:
Yi ZHU, Yishuai GENG, Yun LI, Jipeng QIANG, Xindong WU. Representation learning: serial-autoencoder for personalized recommendation. Front. Comput. Sci., 2024, 18(4): 184316.
Link to this article:
https://academic.hep.com.cn/fcs/CN/10.1007/s11704-023-2441-1
https://academic.hep.com.cn/fcs/CN/Y2024/V18/I4/184316
Fig.1  
Fig.2  
Notation                    Description
R                           The rating matrix
A                           The attribute vectors of all items
R_1, R_2                    The prediction matrices, R ∈ ℝ^(n×m)
n                           The number of users
m                           The number of items
r_u                         The column of the rating matrix
r_i                         The row of the rating matrix
k                           The dimension of the item attribute vector
h, t                        The number of hidden units
x_i                         The i-th instance of the original input
x̂_i                         The reconstructed output of x_i
sub(x_i)                    The part of the output of x_i
ξ, ξ_1, ξ_2                 The hidden feature representation matrices
W, W_{1,2}, W′, W′_{1,2}    The map and remap weight matrices
b, b′, b_1, b_2             The map and remap bias vectors
f, g                        The nonlinear activation functions
·                           The product of vectors or matrices
Tab.1  
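The notation above follows the usual autoencoder map/remap pattern; a hedged reconstruction of the implied equations (the standard form, inferred from the symbols rather than quoted from the paper) is:

```latex
% Encoding (map) and reconstruction (remap) of an input instance x_i,
% using the activations f, g, weights W, W', and biases b, b' from Tab.1.
\xi = f\left(W x_i + b\right), \qquad \hat{x}_i = g\left(W' \xi + b'\right)
```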
Fig.3  
Fig.4  
  
Datasets        MT-10K     ML-100K     ML-1M
#Users          123        943         6,040
#Items          3,096      1,682       3,883
#Ratings        2,333      100,000     1,000,209
Sparsity        0.58%      6.30%       4.26%
Item features   Movie Title; Release Date; Genres
User features   /          Age; Gender; Occupation
Tab.2  
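The Sparsity row of Tab.2 matches the usual density definition, i.e., the fraction of observed user-item pairs; treating that as an assumption, ML-100K gives a quick consistency check:

```latex
% Assumed definition of the "Sparsity" row: observed ratings over all user-item pairs.
\text{Sparsity} = \frac{\#\text{Ratings}}{\#\text{Users} \times \#\text{Items}},
\qquad \text{ML-100K: } \frac{100{,}000}{943 \times 1{,}682} \approx 6.30\%
```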
Metric   Method      Proportion of training data
                     50%      60%      70%      80%      90%
MAE      NMF         1.102    1.059    1.044    1.014    1.003
         SVD++       1.082    1.032    1.021    1.008    0.989
         Wide&Deep   1.065    0.983    0.952    0.927    0.909
         NCF         0.996    0.962    0.933    0.912    0.903
         NFM         1.008    0.979    0.952    0.927    0.914
         HCRSA       1.080    1.029    0.989    0.964    0.945
         GraphRec    0.997    0.956    0.929    0.909    0.893
         LightGCN    0.996    0.948    0.922    0.902    0.891
         PRKG        1.018    0.976    0.941    0.911    0.896
         SAPR        0.994    0.942    0.915    0.903    0.893
RMSE     NMF         1.303    1.261    1.231    1.153    1.125
         SVD++       1.268    1.224    1.188    1.152    1.132
         Wide&Deep   1.201    1.158    1.126    1.093    1.062
         NCF         1.142    1.109    1.082    1.051    1.031
         NFM         1.153    1.122    1.102    1.068    1.045
         HCRSA       1.201    1.169    1.128    1.096    1.087
         GraphRec    1.136    1.108    1.067    1.036    1.010
         LightGCN    1.134    1.089    1.064    1.023    1.006
         PRKG        1.167    1.126    1.072    1.034    1.012
         SAPR        1.132    1.084    1.061    1.024    1.008
Tab.3  
Metric   Method      Proportion of training data
                     50%      60%      70%      80%      90%
MAE      NMF         0.769    0.765    0.761    0.758    0.755
         SVD++       0.752    0.747    0.741    0.726    0.722
         Wide&Deep   0.721    0.718    0.715    0.712    0.708
         NCF         0.717    0.711    0.704    0.699    0.693
         NFM         0.718    0.709    0.705    0.701    0.697
         HCRSA       0.727    0.724    0.713    0.711    0.703
         GraphRec    0.721    0.714    0.709    0.703    0.701
         LightGCN    0.719    0.711    0.705    0.696    0.686
         MetaHIN     0.792    0.786    0.781    0.774    0.768
         PRKG        0.729    0.723    0.714    0.704    0.698
         SAPR        0.715    0.704    0.701    0.692    0.687
RMSE     NMF         0.991    0.976    0.965    0.960    0.963
         SVD++       0.979    0.965    0.949    0.932    0.924
         Wide&Deep   0.922    0.917    0.913    0.910    0.908
         NCF         0.914    0.911    0.909    0.907    0.903
         NFM         0.917    0.914    0.910    0.909    0.904
         HCRSA       0.927    0.921    0.907    0.8905   0.897
         GraphRec    0.919    0.908    0.899    0.891    0.887
         LightGCN    0.916    0.903    0.897    0.885    0.873
         MetaHIN     1.047    1.032    1.017    1.004    0.989
         PRKG        0.928    0.917    0.913    0.899    0.895
         SAPR        0.909    0.898    0.890    0.882    0.874
Tab.4  
Metric   Method      Proportion of training data
                     50%      60%      70%      80%      90%
MAE      NMF         0.735    0.727    0.718    0.711    0.708
         SVD++       0.683    0.678    0.674    0.668    0.666
         Wide&Deep   0.702    0.697    0.693    0.694    0.689
         NCF         0.696    0.691    0.686    0.683    0.677
         NFM         0.693    0.688    0.686    0.682    0.679
         HCRSA       0.692    0.687    0.681    0.675    0.668
         GraphRec    0.683    0.679    0.673    0.668    0.664
         LightGCN    0.682    0.676    0.671    0.666    0.659
         MetaHIN     0.756    0.748    0.741    0.736    0.729
         PRKG        0.705    0.696    0.690    0.684    0.679
         SAPR        0.680    0.672    0.666    0.662    0.656
RMSE     NMF         0.928    0.923    0.918    0.914    0.911
         SVD++       0.879    0.866    0.859    0.851    0.848
         Wide&Deep   0.887    0.881    0.875    0.872    0.868
         NCF         0.882    0.875    0.869    0.863    0.858
         NFM         0.879    0.876    0.871    0.867    0.862
         HCRSA       0.892    0.885    0.879    0.871    0.863
         GraphRec    0.875    0.866    0.857    0.851    0.847
         LightGCN    0.871    0.863    0.853    0.849    0.843
         MetaHIN     0.983    0.968    0.957    0.945    0.932
         PRKG        0.898    0.888    0.881    0.874    0.868
         SAPR        0.867    0.858    0.851    0.847    0.839
Tab.5  
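Tab.3–Tab.5 compare the methods by MAE and RMSE under different proportions of training data (lower is better for both). Assuming the standard definitions of these metrics over held-out ratings, which the tables do not restate, a minimal sketch of the computation is:

```python
# Standard MAE / RMSE over held-out ratings (assumed evaluation metrics for Tab.3-5).
import numpy as np

def mae(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Mean absolute error over observed test ratings."""
    return float(np.mean(np.abs(y_true - y_pred)))

def rmse(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Root mean squared error over observed test ratings."""
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

# Toy example: true vs. predicted ratings on a 1-5 scale.
y_true = np.array([4.0, 3.0, 5.0, 2.0])
y_pred = np.array([3.5, 3.0, 4.0, 2.5])
print(mae(y_true, y_pred), rmse(y_true, y_pred))  # 0.5  0.612...
```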
Fig.5  
Fig.6  
Fig.7  
Fig.8  
Fig.9  
Fig.10  
Fig.11