Frontiers of Computer Science

ISSN 2095-2228

ISSN 2095-2236(Online)

CN 10-1014/TP

Postal Subscription Code 80-970

2018 Impact Factor: 1.129

Front. Comput. Sci.    2022, Vol. 16 Issue (4) : 164322    https://doi.org/10.1007/s11704-021-0478-6
RESEARCH ARTICLE
Disclosing incoherent sparse and low-rank patterns inside homologous GPCR tasks for better modelling of ligand bioactivities
Jiansheng WU1,2(), Chuangchuang LAN1,2, Xuelin YE3, Jiale DENG4, Wanqing HUANG5, Xueni YANG1, Yanxiang ZHU6, Haifeng HU5
1. School of Geographic and Biologic Information, Nanjing University of Posts and Telecommunications, Nanjing 210023, China
2. Smart Health Big Data Analysis and Location Services Engineering Lab of Jiangsu Province, Nanjing University of Posts and Telecommunications, Nanjing 210023, China
3. Department of Statistics, University of Warwick, Coventry CV47AL, United Kingdom
4. Modern Economics & Management College, Jiangxi University of Finance and Economics, Nanchang 330013, China
5. School of Telecommunication and Information Engineering, Nanjing University of Posts and Telecommunications, Nanjing 210023, China
6. Verimake Research, Nanjing Qujike Info-tech Co., Ltd., Nanjing 210088, China
Abstract

There are many new and potential drug targets among G protein-coupled receptors (GPCRs) without sufficient ligand associations, and accurately predicting and interpreting ligand bioactivities is vital for screening and optimizing hit compounds targeting these GPCRs. To efficiently address the lack of labeled training samples, we proposed multi-task regression learning with incoherent sparse and low-rank patterns (MTR-ISLR) to model ligand bioactivities and identify the key substructures associated with these GPCR targets. That is, MTR-ISLR aims to improve both the performance and the interpretability of models trained on small amounts of available data by introducing homologous GPCR tasks. The low-rank constraint term encourages the model to capture the underlying relationship among homologous GPCR tasks for better generalization, while the entry-wise sparse regularization term helps it recognize essential discriminative substructures in each task for interpretable modeling. We examined MTR-ISLR on a set of 31 important human GPCR datasets from 9 subfamilies, each with fewer than 400 ligand associations. The results show that MTR-ISLR outperforms traditional single-task learning, deep multi-task learning and multi-task learning with joint feature learning in most cases, with an average improvement of 7% in correlation coefficient (r2) and 12% in root mean square error (RMSE) over the runner-up predictors. All source code and data are freely available from the MTR-ISLR web server for academic use.

Keywords G protein-coupled receptors (GPCRs)      ligand bioactivities      multi-task learning      incoherent sparse and low-rank patterns     
Corresponding Author(s): Jiansheng WU   
Just Accepted Date: 26 January 2021   Issue Date: 07 December 2021
 Cite this article:   
Jiansheng WU, Chuangchuang LAN, Xuelin YE, et al. Disclosing incoherent sparse and low-rank patterns inside homologous GPCR tasks for better modelling of ligand bioactivities[J]. Front. Comput. Sci., 2022, 16(4): 164322.
 URL:  
https://academic.hep.com.cn/fcs/EN/10.1007/s11704-021-0478-6
https://academic.hep.com.cn/fcs/EN/Y2022/V16/I4/164322
Fig.1  Schematic of MTR-ISLR, where P denotes the sparse component with the zero-value entries represented by white blocks, and Q denotes the low-rank component
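The decomposition in Fig. 1 can be sketched as an alternating proximal-gradient procedure: an entry-wise soft-thresholding step keeps the sparse component P sparse, and a singular-value-thresholding step keeps Q low-rank. The following NumPy sketch is illustrative only, not the authors' implementation; the function names and the penalized (rather than constrained) form of the objective are assumptions.

```python
import numpy as np

def soft_threshold(M, t):
    """Entry-wise shrinkage: proximal operator of t * ||.||_1 (sparse part P)."""
    return np.sign(M) * np.maximum(np.abs(M) - t, 0.0)

def sv_threshold(M, t):
    """Singular value thresholding: proximal operator of t * ||.||_* (low-rank part Q)."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - t, 0.0)) @ Vt

def grad(Xs, ys, W):
    """Column t holds the least-squares gradient for task t at weights W[:, t]."""
    return np.column_stack([X.T @ (X @ W[:, t] - y)
                            for t, (X, y) in enumerate(zip(Xs, ys))])

def mtr_islr_sketch(Xs, ys, gamma=0.1, tau=0.1, lr=1e-3, iters=500):
    """Alternating proximal gradient for W = P + Q, minimizing (a sketch)
    sum_t ||X_t w_t - y_t||^2 + gamma * ||P||_1 + tau * ||Q||_*."""
    d, T = Xs[0].shape[1], len(Xs)
    P, Q = np.zeros((d, T)), np.zeros((d, T))
    for _ in range(iters):
        P = soft_threshold(P - lr * grad(Xs, ys, P + Q), lr * gamma)
        Q = sv_threshold(Q - lr * grad(Xs, ys, P + Q), lr * tau)
    return P, Q
```

Each task t is one GPCR dataset (feature matrix X_t of ligand substructure descriptors, bioactivity vector y_t); the combined weight matrix W = P + Q stacks all tasks column-wise, so the trace norm ties homologous tasks together while the l1 term keeps per-task discriminative substructures identifiable.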
group  dataset  single-task learning (SVR, GBDT, RF, Lasso, RR)  deep learning (DNN, DC)  multi-task learning with joint feature learning (MTLa, cFSGL, MTR-GL, MTR-ISLR)
A A1 0.2321 0.2000 0.1886 0.5558 0.5200 0.3454 0.3548 0.5941 0.6190 0.5675 0.6376
A2 0.3736 0.6042 0.6421 0.6521 0.6703 0.4821 0.5017 0.7265 0.6589 0.7024 0.7561
A3 0.4121 0.5183 0.4641 0.5662 0.5450 0.6423 0.6336 0.6378 0.6668 0.6070 0.6960
B B1 0.4289 0.5875 0.6605 0.7673 0.7555 0.6928 0.6869 0.7865 0.7339 0.7711 0.8272
B2 0.5620 0.5548 0.6145 0.6858* 0.6246 0.6111 0.6084 0.6608 0.6497 0.6336 0.6582
C C1 0.5238 0.6582 0.6388 0.7296 0.6917 0.4213 0.4427 0.7147 0.7392 0.7162 0.7415
C2 0.3532 0.5243 0.6304 0.5877 0.6071 0.5302 0.5185 0.6301 0.6005 0.6099 0.6953
C3 0.3997 0.4520 0.4965 0.5608 0.6365 0.2949 0.3097 0.6202 0.4622 0.6512 0.7384
C4 0.5890 0.6253 0.6081 0.5689 0.6218 0.5751 0.5579 0.6329 0.6593 0.6489 0.6847
D D1 0.6682 0.8571 0.8233 0.8169 0.7943 0.7876 0.8025 0.8324 0.8289 0.8478 0.8460
D2 0.3047 0.3967 0.5966 0.5724 0.6058 0.6573 0.6488 0.5821 0.5914 0.6259 0.7901
D3 0.3473 0.3686 0.4866 0.5486 0.5729 0.4919 0.5013 0.5554 0.5530 0.5602 0.5839
E E1 0.7541* 0.5508 0.6711 0.6539 0.6899 0.6374 0.6268 0.6393 0.6646 0.6625 0.6954
E2 0.4218 0.4375 0.4894 0.6933 0.6753 0.4666 0.4582 0.6831 0.5749 0.6794 0.7113
E3 0.8165 0.9525 0.9461 0.9251 0.9408 0.2428 0.2784 0.9584 0.9486 0.9574 0.9427
E4 0.2372 0.4268 0.4228 0.5024 0.5042 0.4277 0.4186 0.4935 0.5174 0.4846 0.5263
F F1 0.7454 0.7478 0.7662 0.8051 0.8363 0.2642 0.2559 0.8176 0.8098 0.8303 0.8511
F2 0.5245 0.8558 0.8605 0.8685 0.8810 0.6927 0.6873 0.9157 0.9585* 0.9203 0.9278
G G1 0.3041 0.3616 0.3618 0.5095 0.5954 0.2827 0.2971 0.6102 0.6699 0.6536 0.6924
G2 0.2242 0.2295 0.4237 0.7314 0.7071 0.4412 0.4339 0.7276 0.6971 0.7054 0.7597
H H1 0.2268 0.3491 0.4716 0.7168 0.7288 0.3882 0.3718 0.6858 0.6320 0.6757 0.7534
H2 0.1536 0.1903 0.2746 0.3608 0.4054 0.2033 0.2136 0.5512 0.3538 0.6061 0.6834
H3 0.3269 0.3096 0.2301 0.4508 0.4426 0.2044 0.2158 0.5069 0.5966 0.5284 0.6018
H4 0.4836 0.4422 0.4910 0.4718 0.4760 0.6036 0.6147 0.5449 0.5039 0.5664 0.6219
win/tie/loss 1/0/30 0/2/29 0/2/29 1/2/28 0/3/28 0/1/30 0/1/30 0/7/24 1/8/22 0/6/25
Tab.1  Comparison with other methods on r2
group  dataset  single-task learning (SVR, GBDT, RF, Lasso, RR)  deep multi-task learning (DNN, DC)  multi-task learning with joint feature learning (MTLa, cFSGL, MTR-GL, MTR-ISLR)
A A1 0.8033 0.8820 0.8726 0.6903 0.7549 0.8102 0.8217 0.7121 0.7254 0.8301 0.6729
A2 0.8183 0.6431* 0.6116* 0.8800 0.8506 0.7555 0.7468 0.8239 0.6694 0.7033 0.6932
A3 0.8047 0.7296 0.7653 0.9920 1.1068 0.6956 0.7052 0.7259 0.6924 0.7025 0.6902
B B1 0.9227 0.7871 0.7086* 0.6708* 0.6457* 0.6494* 0.6581* 0.7461 0.7158* 0.7573 0.7678
B2 0.5975* 0.6428* 0.6559 0.6077* 0.6332* 0.6433* 0.6522 0.6838 0.7571 0.6778 0.6731
C C1 1.0959 0.9340 0.9821 0.9893 0.9888 1.1335 1.1476 1.3111 0.8650 1.0554 0.8651
C2 0.7427 0.6457 0.5672* 0.7760 0.8541 0.6156* 0.6234* 0.7219 0.7756 0.7237 0.6638
C3 0.7238 0.7130 0.736 0.8205 0.7956 0.7506 0.7628 0.7633 0.7376 0.8054 0.7143
C4 0.6534 0.5993 0.6159 0.8946 0.8380 0.6512 0.6648 0.6121 0.6091 0.7721 0.5824
D D1 1.0973 0.7646 0.9216 0.7202 0.6974 0.7992 0.8002 0.9837 0.6869 0.8522 0.5230
D2 0.815 0.6735 0.6408 0.8674 0.8496 0.7619 0.7718 0.7673 0.6829 0.6844 0.5690
D3 0.8614 0.8854 0.7613 0.7464 0.7711 0.8721 0.8628 0.7591 0.7240 0.7552 0.7143
E E1 0.7334* 0.9036 0.7810* 0.6888* 0.6770* 0.7386* 0.7428* 0.8588 0.9085 0.8755 0.8287
E2 0.8369 0.9068 0.8047 0.8240 0.8261 0.8560 0.8663 0.8334 0.8788 0.8702 0.82
E3 0.5632 0.1929* 0.2048* 0.6153 0.7629 0.1867* 0.2389* 0.5863 0.2363* 0.4494 0.35
E4 0.8179 0.7408 0.7410 0.7399 0.7861 0.7485 0.7516 0.7631 0.7994 0.7916 0.73
F F1 0.4635 0.4571 0.4428 0.5060 0.5295 0.5308 0.5417 0.5519 0.4987 0.5103 0.4428
F2 0.5354 0.2869 0.2664 0.3907 0.4067 0.5435 0.5553 0.3995 0.4351 0.4147 0.2502
G G1 0.5519 0.5612 0.5618 0.6564 0.6107 0.6365 0.6428 0.5617 0.4878 0.5720 0.4719
G2 0.4441 0.5291 0.4750 0.4291 0.4299 0.4389 0.4441 0.4313 0.3559* 0.4392 0.4222
H H1 0.9458 0.9780 0.8076 1.0708 1.0165 0.9140 0.9019 0.8953 0.7598* 0.8548 0.8153
H2 0.7305 0.8048 0.7324 0.8021 0.7698 0.7671 0.7721 0.7172 0.7269 0.7352 0.7166
H3 0.6311 0.6463 0.7035 0.7473 0.6765 0.8091 0.8162 0.7383 0.5436* 0.7669 0.6282
H4 0.8578 0.9255 0.8757 0.8503 0.9160 0.8795 0.8845 0.9385 0.8549 0.8428 0.8204
win/tie/loss 2/4/25 3/4/24 5/4/22 3/4/24 3/1/27 5/0/26 4/0/27 0/6/25 5/3/23 0/6/25
Tab.2  Comparison with other methods on RMSE
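The two metrics reported in Tab. 1 and Tab. 2 can be computed as below. This is a generic sketch assuming r2 denotes the squared Pearson correlation between observed and predicted bioactivities, which matches its use here as a higher-is-better score alongside RMSE.

```python
import numpy as np

def r2_score(y_true, y_pred):
    """Squared Pearson correlation between observed and predicted
    bioactivities (higher is better)."""
    return float(np.corrcoef(y_true, y_pred)[0, 1] ** 2)

def rmse(y_true, y_pred):
    """Root mean square error (lower is better)."""
    diff = np.asarray(y_true) - np.asarray(y_pred)
    return float(np.sqrt(np.mean(diff ** 2)))
```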
Criterion  Dataset  Single-task learning (SVR, GBDT, RF, Lasso, RR)  Deep learning (DNN, DC)  Multi-task learning with joint feature learning (MTLa, cFSGL, MTR-GL, MTR-ISLR)
r2 I1 0.1218 0.6532 0.6481 0.6803 0.6733 0.0498 0.0673 0.6776 0.7058 0.6985 0.7534
I2 0.2314 0.2598 0.3673 0.5877* 0.6219* 0.2666 0.2719 0.5684 0.2 0.5842 0.5631
I3 0.0716 0.1044 0.1643 0.483 0.5279 0.2805 0.2786 0.4815 0.367 0.4188 0.5464
I4 0.2024 0.3088 0.3053 0.4733* 0.5164* 0.4024 0.3999 0.479* 0.446 0.4872* 0.4242
I5 0.0984 0.2107 0.3803 0.5279 0.5032 0.2308 0.241 0.5401 0.5548 0.555 0.5656
I6 0.3807 0.641 0.6239 0.6684 0.6683 0.4026 0.4112 0.7259 0.4613 0.7189 0.7368
I7 0.2058 0.2379 0.2248 0.5173 0.5029 0.1498 0.1558 0.5385 0.3263 0.5317 0.5428
RMSE I1 0.7689 0.6943 0.718 0.6943 0.7326 0.6918 0.7126 0.7725 0.5629 0.7431 0.4784
I2 0.2088 0.1703 0.1482* 0.2003 0.1355* 0.1592* 0.1678* 0.2045 0.1041* 0.1216* 0.1995
I3 1.4374 1.1761 1.0817 0.7775 0.7899 0.7517 0.7438 0.798 0.6506 0.7664 0.6398
I4 0.7869 0.8538 0.7502 0.6198* 0.6232* 0.6707 0.6813 0.6585 0.7405 0.7434 0.6779
I5 1.0773 1.0481 0.8726 0.8784 0.8454 1.0028 0.9932 0.8595 0.969 0.8314 0.7704
I6 0.6627* 0.4893* 0.5002* 0.6964 0.667* 0.7581 0.767 0.7299 0.7871 0.7162 0.7001
I7 2.1692 2.3244 1.9635 1.9059 1.8577 1.9745 1.9548 1.6819 1.9946 1.6413 1.451
Tab.3  Performance on orphan GPCR tasks
Fig.2  Influence of the number of training samples. “100”, “200”, “300” and “All” respectively mean that 100, 200, 300 and all ligands are selected as training samples of MTR-ISLR; “M100”, “M200”, “M300” and “M-all” respectively mean that 100, 200, 300 and all ligands are selected as training samples in MTR-GL; A: P34969, B: P21453, C: Q9H244, D: P32247, E: P51681, F: Q8TDS4, G: P47871, H: P41594, I: Q9NQS5
group  dataset  r2 (↑): PCC, Lasso, RR, MTLa, cFSGL, MTR-GL, MTR-ISLR  RMSE (↓): PCC, Lasso, RR, MTLa, cFSGL, MTR-GL, MTR-ISLR
A A1 0.2275 0.2562 0.2285 0.342 0.3287 0.3369 0.3688 0.8451 0.7602 0.8622 0.7795 0.7357 0.7668 0.7423
A2 0.4939 0.4617 0.45 0.5485 0.5761 0.5837 0.5931 0.6367 0.6353 0.6578 0.6795 0.6608 0.6564 0.6322
A3 0.4049 0.4811 0.4707 0.5049 0.4906 0.5469 0.5774 0.8124 0.7417 0.7675 0.7906 0.7418 0.7999 0.7395
B B1 0.6703 0.7159 0.7205 0.6882 0.6366 0.6934 0.7314 0.6581 0.6495 0.647 0.6796 0.7335 0.6744 0.6383
B2 0.3859 0.4905 0.5063 0.5847 0.5037 0.5897 0.5949 0.7243 0.5612 0.6038 0.5913 0.6511 0.5844 0.5545
C C1 0.462 0.4387 0.4306 0.4837 0.4658 0.5067 0.5538 0.9865 0.87 0.9251 0.9124 0.9681 0.8895 1.0837
C2 0.5149 0.4943 0.4741 0.4912 0.5146 0.5219 0.4919 0.6273 0.5965 0.5832 0.6105 0.635 0.6263 0.6798
C3 0.3148 0.5214 0.4704 0.5366 0.1898 0.5455 0.5584 0.7788 0.6355 0.6812 0.7061 0.8408 0.691 0.7361
C4 0.3907 0.4699 0.5096 0.5122 0.5342 0.5067 0.5359 0.7668 0.6726 0.6871 0.6829 0.6685 0.6875 0.6582
D D1 0.3802 0.5864 0.5549 0.639 0.5939 0.6409 0.6557 1.3821 1.056 1.0222 1.0755 0.9314 0.9761 0.8808
D2 0.3409 0.4587 0.4835 0.4693 0.4602 0.4463 0.5 0.8143 0.6772 0.6527 0.626 0.6289 0.6107 0.6018
D3 0.4949 0.4894 0.4944 0.4953 0.4974 0.4918 0.5093 0.7501 0.7909 0.7562 0.7642 0.7519 0.7583 0.7476
E E1 0.6265 0.5492 0.5574 0.5473 0.6175 0.5151 0.5687 0.8269 0.8543 0.7953 0.826 0.8576 0.8085 0.8333
E2 0.637 0.4516 0.5336 0.5001 0.5242 0.5405 0.5909 0.6182 0.6375 0.6939 0.7061 0.7509 0.6603 0.688
E3 0.8866 0.6395 0.5901 0.6935 0.9012 0.6109 0.6427 0.2779 0.2281 0.2403 0.2162 0.2576 0.2272 0.3219
E4 0.3538 0.4141 0.3837 0.3766 0.2668 0.3642 0.3963 0.7291 0.6652 0.7396 0.7458 0.8172 0.7511 0.6583
F F1 0.5465 0.6333 0.573 0.5946 0.6612 0.5505 0.5813 0.4485 0.3883 0.4914 0.4183 0.5323 0.4542 0.4474
F2 0.9159 0.8015 0.7855 0.7571 0.9168 0.8149 0.8699 0.1926 0.2949 0.2816 0.2889 0.2386 0.2361 0.219
G G1 0.3176 0.3478 0.3499 0.3509 0.2423 0.3562 0.3653 0.5286 0.5853 0.5582 0.5226 0.5314 0.5605 0.4426
G2 0.4435 0.5205 0.5425 0.552 0.5468 0.5425 0.6015 0.3744 0.3556 0.362 0.3606 0.3611 0.3429 0.3375
H H1 0.6434 0.5938 0.6115 0.6027 0.5387 0.6147 0.6661 0.6799 0.6505 0.7016 0.6678 0.754 0.6744 0.64
H2 0.218 0.2531 0.2904 0.2825 0.3307 0.3013 0.3408 0.6118 0.6589 0.6255 0.6395 0.6424 0.6489 0.6113
H3 0.2108 0.2531 0.2557 0.27 0.1248 0.3041 0.3702 0.7409 0.6796 0.8158 0.8433 0.7958 0.7698 0.643
H4 0.3606 0.3348 0.3316 0.3571 0.3857 0.3826 0.3907 0.9247 0.9812 0.8894 0.922 0.9713 0.95 0.8726
I I1 0.3112 0.449 0.4408 0.4615 0.5064 0.5123 0.521 0.6878 0.544 0.4421 0.5498 0.5374 0.4869 0.4923
I2 0.3416 0.4568 0.4528 0.4387 0.0118 0.4886 0.4027 0.1361 0.1422 0.122 0.1201 0.1894 0.1696 0.168
I3 0.2396 0.2546 0.2017 0.2368 0.0695 0.2713 0.295 0.9867 0.9717 0.9977 1.0487 1.0525 0.9803 0.9284
I4 0.2478 0.2825 0.3378 0.2976 0.277 0.3034 0.3669 0.7782 0.7626 0.7896 0.8789 0.8413 0.7527 0.8692
I5 0.3749 0.3382 0.3636 0.3713 0.341 0.3782 0.4274 0.8734 0.8886 0.8746 0.8709 0.9011 0.8412 0.8342
I6 0.543 0.4741 0.4656 0.4769 0.4717 0.5177 0.5222 0.5353 0.5895 0.5937 0.5297 0.5948 0.5229 0.5283
I7 0.1056 0.2003 0.2276 0.2781 0.2518 0.2501 0.2926 1.9505 1.8113 1.8194 1.9461 1.8334 1.9148 1.9197
win/tie/loss 2/1/28 0/7/24 0/8/23 0/7/24 4/6/21 2/9/20 2/6/23 4/9/18 2/5/24 2/4/25 0/4/27 1/10/20
Tab.4  Comparison on top 50 selected features with other feature learning methods
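The top-50 feature comparison in Tab. 4 amounts to ranking substructure features for each task by weight magnitude. A minimal sketch, assuming the ranking is taken over the absolute entries of a task's learned weight vector (e.g. a column of the sparse component P):

```python
import numpy as np

def top_k_features(w, k=50):
    """Indices of the k features with the largest absolute weights,
    largest first; ties broken by index order (stable sort)."""
    w = np.asarray(w)
    order = np.argsort(-np.abs(w), kind="stable")
    return order[:k].tolist()
```

The selected indices map back to molecular substructure descriptors, which is what makes the per-task sparse component directly interpretable.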