Quantitative Biology

ISSN 2095-4689

ISSN 2095-4697(Online)

CN 10-1028/TM

Postal Subscription Code 80-971

Quant. Biol.    2022, Vol. 10 Issue (3) : 239-252    https://doi.org/10.15302/J-QB-021-0272
RESEARCH ARTICLE
Lesion region segmentation via weakly supervised learning
Ran Yi1, Rui Zeng1, Yang Weng1, Minjing Yu2(), Yu-Kun Lai3, Yong-Jin Liu1()
1. Department of Computer Science and Technology, Tsinghua University, Beijing 100084, China
2. College of Intelligence and Computing, Tianjin University, Tianjin 300350, China
3. School of Computer Science and Informatics, Cardiff University, Cardiff, CF10 3AT, United Kingdom
Abstract

Background: Image-based automatic diagnosis of field diseases can help increase crop yields and is of great importance. However, crop lesion regions tend to be scattered and of varying sizes; this, together with substantial intra-class variation and small inter-class variation, makes segmentation difficult.

Methods: We propose a novel end-to-end system that requires only weak supervision, in the form of image-level labels, for lesion region segmentation. First, a two-branch network is designed for joint disease classification and seed region generation. The generated seed regions are then used as input to the next stage, for which we design an encoder-decoder segmentation network. Unlike previous works that use only an encoder in the segmentation network, the encoder-decoder architecture is critical for our system to successfully segment images with small and scattered regions, which is the major challenge in image-based diagnosis of field diseases. We further propose a novel weakly supervised training strategy for the encoder-decoder semantic segmentation network that makes use of the extracted seed regions.
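The seed regions above are produced by discriminative activation maps (DAM) in the two-branch classification network (Fig.6). The paper's DAM implementation is not reproduced here; as a rough illustration of the underlying class-activation-map principle [23] on which such seed extraction builds, the following NumPy sketch thresholds per-class activation maps into high-confidence seed masks (the function name and the threshold value are illustrative assumptions, not the authors' code):

```python
import numpy as np

def cam_seed_regions(features, weights, fg_thresh=0.5):
    """Sketch of CAM-style seed generation (after Zhou et al., 2016).

    features: (C, H, W) conv feature maps from the classification branch.
    weights:  (K, C) classifier weights, one row per class.
    Returns a (K, H, W) boolean mask of high-confidence seed pixels
    per class, obtained by thresholding each normalized activation map.
    """
    # Class activation map: weighted sum of feature channels per class.
    cams = np.einsum("kc,chw->khw", weights, features)
    # Normalize each map to [0, 1] before thresholding.
    cams -= cams.min(axis=(1, 2), keepdims=True)
    denom = cams.max(axis=(1, 2), keepdims=True)
    cams = cams / np.maximum(denom, 1e-8)
    return cams > fg_thresh
```

In the actual system, masks of this kind (for both disease and healthy classes, as in Fig.7) serve as the weak supervision signal passed to the encoder-decoder segmentation stage.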

Results: Experimental results show that our system achieves better lesion region segmentation than state-of-the-art methods. Beyond crop images, our method also applies to general scattered object segmentation: we demonstrate this by extending our framework to the PASCAL VOC dataset, where it achieves performance comparable to the state-of-the-art DSRG (deep seeded region growing) method.

Conclusion: Our method not only outperforms state-of-the-art semantic segmentation methods by a large margin for the lesion segmentation task, but also shows its capability to perform well on more general tasks.

Keywords: weakly supervised learning; lesion segmentation; disease detection; semantic segmentation; agriculture
Corresponding Author(s): Minjing Yu, Yong-Jin Liu

Just Accepted Date: 06 August 2021   Online First Date: 23 September 2021    Issue Date: 08 October 2022
 Cite this article:   
Ran Yi, Rui Zeng, Yang Weng, et al. Lesion region segmentation via weakly supervised learning[J]. Quant. Biol., 2022, 10(3): 239-252.
 URL:  
https://academic.hep.com.cn/qb/EN/10.15302/J-QB-021-0272
https://academic.hep.com.cn/qb/EN/Y2022/V10/I3/239
Scheme Mean Black measles Black rot Leaf blight Healthy
[3] 97.62 94.07 95.31 99.53 100
Ours 99.63 99.28 99.58 100 100
Tab.1  Comparison of classification accuracy on PlantVillage (%)
Schemes Mean Black measles Black rot Leaf blight
K-means 24.24 30.69 25.15 16.87
DAM+CRF 24.86 40.99 21.19 12.40
SEC 25.40 38.03 30.24 7.94
DSRG 25.50 34.95 27.03 16.68
Ours 55.49 61.62 55.83 49.02
Tab.2  Comparison of segmentation performance of different methods (measured by IoU (%)) on PlantVillage dataset
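The segmentation scores in Tab.2 (and the mIoU figures in the tables below) use the standard per-class intersection-over-union. As a minimal sketch of how such scores are computed from predicted and ground-truth label maps (the function name is illustrative, not from the paper):

```python
import numpy as np

def per_class_iou(pred, gt, num_classes):
    """Per-class intersection-over-union between two label maps.

    pred, gt: integer label maps of the same shape.
    Returns a length-num_classes array; classes absent from both
    maps get IoU = NaN so they can be skipped when averaging (mIoU).
    """
    ious = np.full(num_classes, np.nan)
    for c in range(num_classes):
        p, g = pred == c, gt == c
        union = np.logical_or(p, g).sum()
        if union:
            ious[c] = np.logical_and(p, g).sum() / union
    return ious
```

Mean IoU (mIoU) is then the average of the per-class values over the classes present, which is how the per-class columns in Tab.2 relate to the "Mean" column.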
Fig.1  Qualitative results of different methods for lesion segmentation.
Fig.2  More qualitative results of lesion segmentation on grape black measles disease.
Fig.3  More qualitative results of lesion segmentation on grape black rot disease.
Fig.4  More qualitative results of lesion segmentation on grape leaf blight disease.
Schemes Mean Black measles Black rot Leaf blight
no PSS 26.44 30.00 26.63 6.69
no LF 43.31 60.21 57.38 12.33
Ours 55.49 61.62 55.83 49.02
Tab.3  Ablation studies on lesion segmentation (in mIoU (%))
Schemes U-net DeepLabv3+
PSS 0.44 2.30
PSS+FCS 0.57 4.64
PSS+CRF 0.57 7.86
SRG+FCS 2.55 9.33
SRG+CRF 2.55 11.0
Tab.4  Training time analysis over four loss terms on two networks
Methods SEC DSRG (VGG) DSRG (ResNet) Ours
Bkg 82.4 87.5 88.0 88.2
Plane 62.9 73.1 78.6 76.5
Bike 26.4 28.4 35.4 36.1
Bird 61.6 75.4 76.2 75.6
Boat 27.6 39.5 42.7 46.4
Bottle 38.1 54.5 62.0 66.8
Bus 66.6 78.2 80.0 81.9
Car 62.7 71.3 68.4 70.6
Cat 75.2 80.6 81.5 81.7
Chair 22.1 25.0 22.6 20.8
Cow 53.5 63.3 77.5 80.5
Table 28.3 25.4 38.6 38.8
Dog 65.8 77.8 73.3 70.9
Horse 57.8 65.4 75.0 74.7
Motor 62.3 65.2 72.6 72.9
Person 52.5 72.8 69.0 69.0
Plant 32.5 41.2 39.5 39.1
Sheep 62.6 74.3 72.8 81.7
Sofa 32.1 34.1 34.9 32.3
Train 45.4 52.1 61.1 60.8
Tv 45.3 53.0 52.2 52.1
mIoU 57.0 59.0 62.0 62.7
Tab.5  Per class results on VOC 2012 validation set, evaluated in terms of mean IoU (%)
Fig.5  Comparison of DSRG (Huang et al., 2018) and our method for weakly supervised semantic segmentation on the PASCAL VOC dataset.
Methods Backbone Val set Test set
SEC VGG16 50.7 51.7
DSRG (VGG) VGG16 59.0 60.4
DSRG (ResNet) DeepLab-v2-ResNet101 60.2 63.2
FickleNet DeepLab-v2-ResNet101 64.9 65.3
Ours DeepLab-v3-ResNet101&Decoder 62.7 62.6
Tab.6  Comparison on VOC 2012 validation and test set, evaluated in terms of mean IoU (%)
Fig.6  The two-branch network for crop disease classification and seed region generation, which is based on pre-trained VGG-16 network.
Fig.7  Some examples of disease and healthy seed regions generated by DAM in the two-branch network illustrated in Fig.6.
Fig.8  The proposed weakly-supervised segmentation network.
1 Strange, R. N. and Scott, P. (2005) Plant disease: a threat to global food security. Annu. Rev. Phytopathol., 43: 83–116
https://doi.org/10.1146/annurev.phyto.43.113004.133839 pmid: 16078878
2 Geiger, F., Bengtsson, J., Berendse, F., Weisser, W. W., Emmerson, M., Morales, M. B., Ceryngier, P., Liira, J., Tscharntke, T., Winqvist, C., et al. (2010) Persistent negative effects of pesticides on biodiversity and biological control potential on European farmland. Basic Appl. Ecol., 11: 97–105
https://doi.org/10.1016/j.baae.2009.12.001
3 Aravind, K. R., Raja, P., Aniirudh, R., Mukesh, K. V., Ashiwin, R. and Vikas, G. (2018) Grape crop disease classification using transfer learning approach. In: Proc. ISMAC-CVB, pp. 1623–1633
4 DeChant, C., Wiesner-Hanks, T., Chen, S., Stewart, E. L., Yosinski, J., Gore, M. A. and Nelson, R. J. (2017) Automated identification of northern leaf blight-infected maize plants from field imagery using deep learning. Phytopathology, 107: 1426–1432
https://doi.org/10.1094/PHYTO-11-16-0417-R pmid: 28653579
5 Pound, M. P., Atkinson, J. A., Wells, D. M., Pridmore, T. P. and French, A. P. (2017) Deep learning for multi-task plant phenotyping. In: Proc. ICCV Workshops, pp. 2055–2063
6 Abdu, A. M., Mokji, M. M. and Sheikh, U. U. (2019) Deep learning for plant disease identification from disease region images. In: Proc. ICIRA, pp. 65–75
7 Mohanty, S. P. and Hughes, D. P. (2016) Using deep learning for image-based plant disease detection. Front. Plant Sci., 7: 1419
https://doi.org/10.3389/fpls.2016.01419 pmid: 27713752
8 Zabawa, L., Kicherer, A., Klingbeil, L., Milioto, A., Topfer, R., Kuhlmann, H. and Roscher, R. (2019) Detection of single grapevine berries in images using fully convolutional neural networks. In: Proc. CVPR Workshops
9 Zhang, S. and You, Z. (2019) Plant disease leaf image segmentation based on superpixel clustering and EM algorithm. Neural Comput. Appl., 31: 1225–1232
https://doi.org/10.1007/s00521-017-3067-8
10 Krizhevsky, A., Sutskever, I. and Hinton, G. E. (2012) ImageNet classification with deep convolutional neural networks. In: Proc. NeurIPS, pp. 1097–1105
11 Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V. and Rabinovich, A. (2015) Going deeper with convolutions. In: Proc. CVPR, pp. 1–9
12 Krause, J., Baek, K. and Lim, L. (2019) A guided multi-scale categorization of plant species in natural images. In: Proc. CVPR Workshops
13 Kumar, N., Belhumeur, P. N., Biswas, A., Jacobs, D. W., Kress, W. J., Lopez, I. C. and Soares, J. V. (2012) Leafsnap: A computer vision system for automatic plant species identification. In: Proc. ECCV, pp. 502–516
14 Fuentes, A., Yoon, S., Kim, S. C. and Park, D. (2017) A robust deep-learning-based detector for real-time tomato plant diseases and pests recognition. Sensors (Basel), 17: 2022
https://doi.org/10.3390/s17092022 pmid: 28869539
15 Simonyan, K. and Zisserman, A. (2015) Very deep convolutional networks for large-scale image recognition. In: Proc. ICLR
16 He, K., Zhang, X., Ren, S. and Sun, J. (2016) Deep residual learning for image recognition. In: Proc. CVPR, pp. 770–778
17 Chen, Y., Baireddy, S., Cai, E., Yang, C. and Delp, E. J. (2019) Leaf segmentation by functional modeling. In: Proc. CVPR Workshops
18 Phadikar, S., Sil, J. and Das, A. (2013) Rice diseases classification using feature selection and rule generation techniques. Comput. Electron. Agric., 90: 76–85
https://doi.org/10.1016/j.compag.2012.11.001
19 Johannes, A., Picon, A., Alvarez-Gila, A., Echazarra, J., Rodriguez-Vaamonde, S. and Navajas, A. D. (2017) Automatic plant disease diagnosis using mobile capture devices, applied on a wheat use case. Comput. Electron. Agric., 138: 200–209
https://doi.org/10.1016/j.compag.2017.04.013
20 Afridi, M. J., Liu, X. and McGrath, J. M. (2014) An automated system for plant-level disease rating in real fields. In: Proc. ICPR, pp. 148–153
21 Lin, K., Gong, L., Huang, Y. and Liu, C. (2019) Deep learning-based segmentation and quantification of cucumber powdery mildew using convolutional neural network. Front. Plant Sci., 10: 155
https://doi.org/10.3389/fpls.2019.00155 pmid: 30891048
22 Zhou, B., Khosla, A., Lapedriza, À., Oliva, A. and Torralba, A. (2015) Object detectors emerge in deep scene cnns. In: Proc. ICLR
23 Zhou, B., Khosla, A., Lapedriza, À., Oliva, A. and Torralba, A. (2016) Learning deep features for discriminative localization. In: Proc. CVPR, pp. 2921–2929
24 Yu, W., Zhu, F., Boushey, C. J. and Delp, E. J. (2017) Weakly supervised food image segmentation using class activation maps. In: Proc. ICIP, pp. 1277–1281
25 Bolaños, M. and Radeva, P. (2016) Simultaneous food localization and recognition. In: Proc. ICPR, pp. 3140–3145
26 Gondal, W. M., Kohler, J. M., Grzeszick, R., Fink, G. A. and Hirsch, M. (2017) Weakly-supervised localization of diabetic retinopathy lesions in retinal fundus images. In: Proc. ICIP, pp. 2069–2073
27 Kolesnikov, A. and Lampert, C. H. (2016) Seed, expand and constrain: Three principles for weakly-supervised image segmentation. In: Proc. ECCV, pp. 695–711
28 Huang, Z., Wang, X., Wang, J., Liu, W. and Wang, J. (2018) Weakly-supervised semantic segmentation network with deep seeded region growing. In: Proc. CVPR, pp. 7014–7023
29 Wang, X., You, S., Li, X. and Ma, H. (2018) Weakly-supervised semantic segmentation by iteratively mining common object features. In: Proc. CVPR, pp. 1354–1362
30 Ahn, J. and Kwak, S. (2018) Learning pixel-level semantic affinity with image-level supervision for weakly supervised semantic segmentation. In: Proc. CVPR, pp. 4981–4990
31 Mottaghi, R., Chen, X., Liu, X., Cho, N. G., Lee, S. W., Fidler, S., Urtasun, R. and Yuille, A. (2014) The role of context for object detection and semantic segmentation in the wild. In: Proc. CVPR, pp. 891–898
32 Krähenbühl, P. and Koltun, V. (2011) Efficient inference in fully connected CRFs with Gaussian edge potentials. In: Proc. NeurIPS, pp. 109–117
33 Lee, J., Kim, E., Lee, S., Lee, J. and Yoon, S. (2019) Ficklenet: Weakly and semi-supervised semantic image segmentation using stochastic inference. In: Proc. CVPR, pp. 5267–5276
34 Oquab, M., Bottou, L., Laptev, I. and Sivic, J. (2015) Is object localization for free? ‒ Weakly-supervised learning with convolutional neural networks. In: Proc. CVPR, pp. 685–694
35 Liu, B., Zhang, Y. and He, D. (2018) Identification of apple leaf diseases based on deep convolutional neural networks. Symmetry (Basel), 10: 11
https://doi.org/10.3390/sym10010011
36 Chaudhry, A., Dokania, P. K. and Torr, P. H. (2017) Discovering class-specific pixels for weakly-supervised semantic segmentation. arXiv, 1707.05821
37 Jia, Y., Shelhamer, E., Donahue, J., Karayev, S., Long, J., Girshick, R. B., Guadarrama, S. and Darrell, T. (2014) Caffe: Convolutional architecture for fast feature embedding. In: Proc. MM, pp. 675–678
38 Abadi, M., Agarwal, A., Barham, P., Brevdo, E., Chen, Z., Citro, C., Corrado, G. S., Davis, A., Dean, J., Devin, M., et al. (2016) Tensorflow: Large-scale machine learning on heterogeneous distributed systems. arXiv, 1603.04467
39 Ronneberger, O., Fischer, P. and Brox, T. (2015) U-net: Convolutional networks for biomedical image segmentation. In: Proc. MICCAI, pp. 234–241
40 Chen, L. C., Zhu, Y., Papandreou, G., Schroff, F. and Adam, H. (2018) Encoder-decoder with atrous separable convolution for semantic image segmentation. In: Proc. ECCV, pp. 801–818
41 Hariharan, B., Arbeláez, P., Bourdev, L., Maji, S. and Malik, J. (2011) Semantic contours from inverse detectors. In: Proc. ICCV, pp. 991–998