Front. Comput. Sci. 2022, Vol. 16, Issue 4: 164351. https://doi.org/10.1007/s11704-022-1505-y
LETTER
Multi-granularity semantic alignment distillation learning for remote sensing image semantic segmentation
Di ZHANG1,2, Yong ZHOU1,2, Jiaqi ZHAO1,2, Zhongyuan YANG2, Hui DONG2, Rui YAO1,2, Huifang MA3
1. School of Computer Science and Technology, China University of Mining and Technology, Xuzhou 221116, China
2. Engineering Research Center of Mine Digitization, Ministry of Education of the People’s Republic of China, Xuzhou 221116, China
3. College of Computer Science and Engineering, Northwest Normal University, Lanzhou 730070, China
Corresponding Author(s): Yong ZHOU   
Just Accepted Date: 16 May 2022   Issue Date: 15 June 2022
 Cite this article:   
Di ZHANG, Yong ZHOU, Jiaqi ZHAO, et al. Multi-granularity semantic alignment distillation learning for remote sensing image semantic segmentation[J]. Front. Comput. Sci., 2022, 16(4): 164351.
 URL:  
https://academic.hep.com.cn/fcs/EN/10.1007/s11704-022-1505-y
https://academic.hep.com.cn/fcs/EN/Y2022/V16/I4/164351
Fig.1  Overview of the proposed approach. Blue arrows denote the teacher network; green arrows denote the student network
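For readers unfamiliar with the distillation terms in the ablation below: the PI (pixel-wise imitation) component corresponds to the standard per-pixel soft-label distillation of Hinton et al. [3], i.e., a temperature-scaled KL divergence between the teacher's and student's class distributions at every spatial position. The following is a minimal sketch of that conventional formulation, assuming PyTorch, raw (N, C, H, W) segmentation logits, and a hypothetical temperature value; it illustrates the standard loss, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def pixelwise_distillation_loss(student_logits, teacher_logits, temperature=4.0):
    """Pixel-wise imitation (PI) loss: temperature-scaled KL divergence
    between per-pixel class distributions of teacher and student.

    student_logits, teacher_logits: (N, C, H, W) raw segmentation logits.
    The temperature value is an assumption; the letter does not state it.
    """
    t = temperature
    # Flatten spatial dimensions so each pixel is one class distribution.
    s = student_logits.permute(0, 2, 3, 1).reshape(-1, student_logits.size(1))
    te = teacher_logits.permute(0, 2, 3, 1).reshape(-1, teacher_logits.size(1))
    log_p_student = F.log_softmax(s / t, dim=1)
    p_teacher = F.softmax(te / t, dim=1)
    # t**2 rescales the gradient magnitude, as in Hinton et al. [3].
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * (t * t)
```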
Method        Validation MIoU/%   Test MIoU/%
ResNet101     69.83               68.77
ResNet18      56.59               55.43
+PI           57.75 (+1.16)       57.28
+PI+IFV       59.92 (+3.33)       59.53
+PI+IFV+AF    60.61 (+4.02)       60.51
Tab.1  Ablation results of the individual components on the WHDLD dataset
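The IFV row in Tab.1 refers to the intra-class feature variation transfer introduced by IFVD [5]: each class is summarized by a prototype obtained by averaging the features of its pixels, and the student is trained to match the teacher's per-pixel similarity to that prototype. Below is a minimal sketch of that similarity map, assuming PyTorch, feature maps already upsampled to label resolution, and labels in [0, num_classes); the function and variable names are hypothetical.

```python
import torch
import torch.nn.functional as F

def intra_class_feature_variation(features, labels, num_classes):
    """For each class, compute a prototype (masked average of features)
    and each pixel's cosine similarity to its class prototype.

    features: (N, D, H, W) backbone features; labels: (N, H, W) class ids.
    Returns an (N, H, W) similarity map; a distillation term would then
    penalize the difference between teacher and student maps, as in [5].
    Assumes every label value lies in [0, num_classes) (no ignore index).
    """
    n, d, h, w = features.shape
    feats = features.permute(0, 2, 3, 1).reshape(-1, d)   # (N*H*W, D)
    labs = labels.reshape(-1)                             # (N*H*W,)
    sim = torch.zeros_like(labs, dtype=features.dtype)
    for c in range(num_classes):
        mask = labs == c
        if mask.any():
            proto = feats[mask].mean(dim=0, keepdim=True)  # class prototype
            sim[mask] = F.cosine_similarity(feats[mask], proto, dim=1)
    return sim.reshape(n, h, w)
```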
Methods          WHDLD              ISPRS Potsdam
                 MIoU/%    PA/%     MIoU/%    PA/%
T: ResNet101     69.83     89.06    82.16     89.06
S: ResNet18      56.59     79.42    72.80     80.67
   KD [3]        56.90     79.66    73.26     81.05
   SKD [4]       58.84     81.52    74.35     81.85
   IFVD [5]      59.38     81.67    74.92     82.32
   MGSAD (ours)  60.61     82.39    75.84     83.23
S: ResNet50      58.95     81.02    77.50     87.46
   KD [3]        59.11     81.14    77.93     87.84
   SKD [4]       59.71     82.07    78.86     88.76
   IFVD [5]      60.92     82.96    79.47     88.93
   MGSAD (ours)  61.23     83.17    80.21     89.20
Tab.2  Performance comparison of the proposed method (MGSAD) with other distillation methods on WHDLD and ISPRS Potsdam
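For context on the SKD baseline [4] in Tab.2: structured knowledge distillation matches pair-wise affinities between spatial locations of the teacher's and student's feature maps, rather than individual pixel predictions. A rough sketch of that affinity computation, under the same PyTorch assumptions as above (an illustration, not the authors' exact formulation):

```python
import torch
import torch.nn.functional as F

def pairwise_affinity(features):
    """Pair-wise affinity used in structured KD [4]: cosine similarity
    between every pair of spatial locations in a feature map.

    features: (N, D, H, W). Returns (N, H*W, H*W) affinity matrices;
    the distillation term would penalize, e.g., the squared difference
    between teacher and student affinities.
    """
    n, d, h, w = features.shape
    f = features.reshape(n, d, h * w)
    f = F.normalize(f, p=2, dim=1)            # unit-norm feature vectors
    return torch.bmm(f.transpose(1, 2), f)    # (N, HW, HW) cosine affinities
```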
Fig.2  Visualization of segmentation results from the proposed approach and the comparison methods. (a) Input images; (b) results without distillation; (c) results of KD; (d) results of SKD; (e) results of IFVD; (f) results of MGSAD; (g) ground truth. The white boxes mark the regions where the differences between our method and the comparison methods are most salient
1 G Cheng, X Xie, J Han, L Guo, G S Xia. Remote sensing image scene classification meets deep learning: challenges, methods, benchmarks, and opportunities. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2020, 13: 3735–3756
2 C Li, Y Mao, R Zhang, J Huai. A revisit to MacKay algorithm and its application to deep network compression. Frontiers of Computer Science, 2020, 14(4): 144304
3 G Hinton, O Vinyals, J Dean. Distilling the knowledge in a neural network. 2015, arXiv preprint arXiv:1503.02531
4 Y Liu, K Chen, C Liu, Z Qin, Z Luo, J Wang. Structured knowledge distillation for semantic segmentation. In: Proceedings of 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2019, 2599–2608
5 Y Wang, W Zhou, T Jiang, X Bai, Y Xu. Intra-class feature variation distillation for semantic segmentation. In: Proceedings of the 16th European Conference on Computer Vision. 2020, 346–362
6 Y Hou, Z Ma, C Liu, C C Loy. Learning lightweight lane detection CNNs by self attention distillation. In: Proceedings of 2019 IEEE/CVF International Conference on Computer Vision. 2019, 1013–1021