Frontiers of Computer Science

Front. Comput. Sci.    2022, Vol. 16 Issue (2) : 162702    https://doi.org/10.1007/s11704-020-9526-x
RESEARCH ARTICLE
Defocus blur detection using novel local directional mean patterns (LDMP) and segmentation via KNN matting
Awais KHAN1, Aun IRTAZA2, Ali JAVED3,4, Tahira NAZIR1, Hafiz MALIK2, Khalid Mahmood MALIK4, Muhammad Ammar KHAN1
1. Department of Computer Science, University of Engineering and Technology, Taxila 47050, Pakistan
2. Department of Electrical and Computer Engineering, University of Michigan, Dearborn, MI 48128, USA
3. Department of Software Engineering, University of Engineering and Technology, Taxila 47050, Pakistan
4. Department of Computer Science and Engineering, Oakland University, MI 48309, USA
Abstract

Detection and segmentation of defocus blur is a challenging task in digital imaging applications because blurry images comprise blurred and sharp regions that carry significant information and require effective methods for information extraction. Existing defocus blur detection and segmentation methods have several limitations, i.e., difficulty in discriminating between sharp smooth and blurred smooth regions, low recognition rates in noisy images, and high computational cost in the absence of prior knowledge about the image, i.e., blur degree and camera configuration. Hence, there is a pressing need for an effective defocus blur detection and segmentation method that is robust to these limitations. This paper presents a novel feature descriptor, local directional mean patterns (LDMP), for defocus blur detection and employs KNN matting over the detected LDMP-Trimap for robust segmentation of sharp and blurred regions. We hypothesize that most image regions located in blurry areas exhibit significantly fewer distinctive local patterns than those in sharp regions; therefore, the proposed LDMP descriptor should reliably detect defocus-blurred regions. The fusion of LDMP features with KNN matting yields superior performance in terms of obtaining high-quality segmented regions. Additionally, the proposed LDMP descriptor is robust to noise and successfully detects defocus blur in highly noisy images. Experimental results on the Shi and Zhao datasets demonstrate the effectiveness of the proposed method for defocus blur detection. Evaluation and comparative analysis show that our method achieves superior segmentation performance at a low average computational cost of 15 seconds.
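The central assumption can be illustrated with a toy experiment. The sketch below (our own illustration, not the paper's code) uses plain LBP codes from scikit-image as a stand-in for LDMP and counts how many distinct local patterns appear in a sharp patch versus a synthetically blurred one.

```python
# Illustrative sketch only: plain LBP codes (not the paper's LDMP descriptor)
# are used to show the intuition that blurred patches contain fewer distinct
# local patterns than sharp ones.
import numpy as np
from skimage import data, filters
from skimage.feature import local_binary_pattern

def distinct_patterns(patch, points=8, radius=1):
    """Number of distinct LBP codes observed inside a patch."""
    codes = local_binary_pattern((patch * 255).astype(np.uint8), points, radius)
    return np.unique(codes).size

sharp = data.camera() / 255.0                 # any grayscale test image
blurred = filters.gaussian(sharp, sigma=3)    # synthetic defocus-like blur

window = (slice(100, 164), slice(100, 164))   # a 64x64 patch
print("sharp  :", distinct_patterns(sharp[window]))
print("blurred:", distinct_patterns(blurred[window]))
# The blurred patch usually exhibits markedly fewer distinct codes,
# which is the cue a descriptor such as LDMP exploits.
```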

Keywords defocus blur detection      local directional mean patterns      image matting      sharpness metrics      blur segmentation     
Corresponding Author(s): Ali JAVED   
Just Accepted Date: 27 May 2020   Issue Date: 18 September 2021
 Cite this article:   
Awais KHAN, Aun IRTAZA, Ali JAVED, et al. Defocus blur detection using novel local directional mean patterns (LDMP) and segmentation via KNN matting[J]. Front. Comput. Sci., 2022, 16(2): 162702.
 URL:  
https://academic.hep.com.cn/fcs/EN/10.1007/s11704-020-9526-x
https://academic.hep.com.cn/fcs/EN/Y2022/V16/I2/162702
Fig.1  Flow diagram of blur detection and segmentation
Fig.2  Impact of the LDTP threshold on preserving the sharpness map of the input image
Fig.3  Local directional triplicate pattern computation
Fig.4  Comparison of LTP upper and LTP lower patterns
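Fig. 4 contrasts the upper and lower LTP patterns. For readers unfamiliar with the splitting step, the sketch below computes a generic local ternary pattern over 3x3 neighbourhoods and separates it into upper and lower binary codes; the threshold value and the non-directional neighbourhood are illustrative assumptions, not the paper's exact directional formulation.

```python
import numpy as np

def ltp_upper_lower(gray, t=5):
    """Split the local ternary pattern of each 3x3 neighbourhood into the
    conventional 'upper' and 'lower' binary codes (generic LTP, not the
    paper's directional variant)."""
    h, w = gray.shape
    # offsets of the 8 neighbours, clockwise from top-left
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    upper = np.zeros((h, w), np.uint8)
    lower = np.zeros((h, w), np.uint8)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            c = int(gray[y, x])
            up = lo = 0
            for bit, (dy, dx) in enumerate(offs):
                d = int(gray[y + dy, x + dx]) - c
                if d > t:        # ternary code +1 -> bit of the upper pattern
                    up |= 1 << bit
                elif d < -t:     # ternary code -1 -> bit of the lower pattern
                    lo |= 1 << bit
            upper[y, x], lower[y, x] = up, lo
    return upper, lower
```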
Fig.5  Local directional mean pattern computation
Fig.6  Flood filling computation
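The flood-filling step of Fig. 6 can be realised, for example, with OpenCV's floodFill; the helper below closes holes in a 0/255 blur mask under the assumption that the border pixel (0, 0) belongs to the background. It is a common implementation pattern, not necessarily the authors' exact procedure.

```python
import cv2
import numpy as np

def fill_holes(binary_mask):
    """Close interior holes in a 0/255 binary mask by flood-filling the
    background from the border and OR-ing back the unreachable pixels."""
    h, w = binary_mask.shape
    flood = binary_mask.copy()
    ff_mask = np.zeros((h + 2, w + 2), np.uint8)   # floodFill needs a padded mask
    cv2.floodFill(flood, ff_mask, (0, 0), 255)     # assumes (0, 0) is background
    holes = cv2.bitwise_not(flood)                 # pixels not reachable from the border
    return cv2.bitwise_or(binary_mask, holes)
```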
Fig.7  LDMP-trimap output
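A trimap such as the LDMP-Trimap of Fig. 7 is typically derived from a binary map by carving out confident and uncertain zones; the sketch below does this with erosion and dilation, where the 10-pixel band width is an illustrative choice rather than a value reported in the paper.

```python
import cv2
import numpy as np

def binary_map_to_trimap(blur_map, band=10):
    """Build a trimap from a 0/255 blur map: 255 = confidently sharp,
    0 = confidently blurred, 128 = unknown band handed to the matting step.
    The band width is an illustrative assumption."""
    kernel = np.ones((band, band), np.uint8)
    sure_fg = cv2.erode(blur_map, kernel)          # shrink to the confident core
    maybe = cv2.dilate(blur_map, kernel)           # grow to cover uncertain edges
    trimap = np.zeros_like(blur_map)
    trimap[maybe > 0] = 128                        # unknown ring
    trimap[sure_fg > 0] = 255                      # confident core
    return trimap
```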
Fig.8  KNN segmented results
Fig.9  Comparison of KNN matting and alpha matting
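KNN matting propagates alpha values from the trimap's known pixels to the unknown band through a K-nearest-neighbour affinity graph. The compact sketch below captures that idea with scikit-learn and SciPy; it is a simplified illustration intended for small or downsampled images (the affinity kernel and weights are our own choices), not the published KNN matting code or the authors' implementation.

```python
import numpy as np
from scipy.sparse import coo_matrix, diags
from scipy.sparse.linalg import spsolve
from sklearn.neighbors import NearestNeighbors

def knn_alpha(image, trimap, k=10, lam=100.0):
    """Propagate alpha from known trimap pixels via a KNN affinity graph."""
    h, w, _ = image.shape
    n = h * w
    y, x = np.mgrid[0:h, 0:w]
    # per-pixel features: colour plus weakly weighted spatial position
    feats = np.concatenate(
        [image.reshape(n, -1).astype(float) / 255.0,
         np.stack([y.ravel() / h, x.ravel() / w], axis=1) * 0.1], axis=1)
    dist, idx = NearestNeighbors(n_neighbors=k).fit(feats).kneighbors(feats)
    rows = np.repeat(np.arange(n), k)
    vals = 1.0 - dist.ravel() / (dist.max() + 1e-12)     # simple affinity kernel
    A = coo_matrix((vals, (rows, idx.ravel())), shape=(n, n))
    A = (A + A.T) / 2.0                                   # symmetrise
    L = diags(np.asarray(A.sum(axis=1)).ravel()) - A      # graph Laplacian
    tri = trimap.ravel()
    known = (tri == 0) | (tri == 255)                     # constrained pixels
    M = diags(known.astype(float))
    v = (tri == 255).astype(float)                        # target alpha of known pixels
    alpha = spsolve((L + lam * M).tocsc(), lam * v)
    return np.clip(alpha, 0, 1).reshape(h, w)
```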
Dataset      Method                      Precision  Recall  F1-score
Shi dataset  Proposed blur detection     0.875      0.942   0.907
Shi dataset  Proposed blur segmentation  0.881      0.944   0.912
DUT dataset  Proposed blur detection     0.910      0.867   0.88
DUT dataset  Proposed blur segmentation  0.907      0.898   0.89
Tab.1  Performance evaluation of the proposed methods
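The figures in Tab. 1 are standard pixel-wise measures; a small helper such as the one below (our own utility, not released code) reproduces precision, recall, and F1 from a predicted binary mask and its ground truth.

```python
import numpy as np

def pixel_prf(pred, gt):
    """Pixel-wise precision, recall and F1 for boolean masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()
    precision = tp / max(pred.sum(), 1)
    recall = tp / max(gt.sum(), 1)
    f1 = 2 * precision * recall / max(precision + recall, 1e-12)
    return precision, recall, f1
```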
Fig.10  Results achieved by different blur detection and segmentation methods on Shi dataset
Fig.11  Results achieved by different blur detection and segmentation methods on Zhao dataset
Fig.12  Precision-recall curves for detection and segmentation on the Shi dataset: (a) precision-recall curve of blur detection via LDMP; (b) precision-recall curve of blur segmentation via KNN matting
Fig.13  Precision-recall curves for detection and segmentation on the Zhao dataset: (a) precision-recall curve of blur detection via LDMP; (b) precision-recall curve of blur segmentation via KNN matting
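The curves in Figs. 12 and 13 are obtained by sweeping a threshold over a continuous blur-confidence (or alpha) map; assuming the map is normalised to [0, 1], the sweep can be sketched as follows.

```python
import numpy as np

def pr_curve(score_map, gt_mask, steps=255):
    """Sweep a threshold over a [0, 1] confidence map and collect the
    pixel-wise precision/recall pairs that form the PR curve."""
    gt = gt_mask.astype(bool)
    precisions, recalls = [], []
    for t in np.linspace(0.0, 1.0, steps):
        pred = score_map >= t
        tp = np.logical_and(pred, gt).sum()
        precisions.append(tp / max(pred.sum(), 1))
        recalls.append(tp / max(gt.sum(), 1))
    return np.array(precisions), np.array(recalls)
```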
Fig.14  F1-score comparison of proposed method with all comparative methods on Zhao-dataset
Fig.15  F1-score comparison of proposed method with all comparative methods on Shi-dataset
Sharpness metric                           Avg. runtime
Gradient histogram span (mGHS) [6,19]      273.19 s
Local binary pattern (mLBP) [21]           3.55 s
Total variation (mTV) [7]                  50.00 s
Singular value decomposition (mSVD) [8]    38.66 s
Average power spectrum slope (mAPS) [6]    22.89 s
Proposed LDMP (Shi dataset)                15.00 s
Proposed LDMP (Zhao dataset)               10.00 s
Tab.2  Time complexity analysis of different blur detection methods
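Runtimes such as those in Tab. 2 and Tab. 3 appear to be average wall-clock times over a dataset; one simple way to reproduce such a measurement is sketched below, where detect_blur stands for any detection or segmentation routine (a placeholder, not a function from the paper).

```python
import time

def average_runtime(detect_blur, images):
    """Average wall-clock time of a detector over a list of images.
    `detect_blur` is a hypothetical placeholder for any detection function."""
    start = time.perf_counter()
    for img in images:
        detect_blur(img)
    return (time.perf_counter() - start) / max(len(images), 1)
```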
Blur segmentation method       Avg. runtime
Proposed KNN (Zhao dataset)    10 s
Proposed KNN (Shi dataset)     15 s
Shi [6]                        705.27 s
LBP [21]                       40 ms
Tang [32]                      11.6 h
Zhao [35]                      5 days
Su [8]                         37 s
Shi [28]                       38.36 s
Vu [7]                         19.18 s
Zhuo and Sim [9]               20.59 s
Zhu [10]                       12 min
Tab.3  Time complexity analysis of different segmentation methods
Fig.16  LBP vs. LDMP comparison
Fig.17  LDMP and KNN results on motion blur
Fig.18  Limitations of the proposed system
1 Krishnamurthy B, Sarkar M. Deep-learning network architecture for object detection. U.S. Patent 10,019,655, 2018
2 Price B L, Schiller S, Cohen S, Xu N. Image matting using deep learning. Google Patents, 2019
3 Liu C, Liu W, Xing W. A weighted edge-based level set method based on multi-local statistical information for noisy image segmentation. Journal of Visual Communication and Image Representation, 2019, 59: 89–107
4 Gast J, Roth S. Deep video deblurring: the devil is in the details. In: Proceedings of the IEEE International Conference on Computer Vision Workshops. 2019
5 Gvozden G, Grgic S, Grgic M. Blind image sharpness assessment based on local contrast map statistics. Journal of Visual Communication and Image Representation, 2018, 50: 145–158
6 Shi J, Xu L, Jia J. Discriminative blur detection features. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2014, 2965–2972
7 Vu C T, Phan T D, Chandler D M. S3: a spectral and spatial measure of local perceived sharpness in natural images. IEEE Transactions on Image Processing, 2011, 21(3): 934–945
8 Su B, Lu S, Tan C L. Blurred image region detection and classification. In: Proceedings of the 19th ACM International Conference on Multimedia. 2011
9 Zhuo S, Sim T. Defocus map estimation from a single image. Pattern Recognition, 2011, 44(9): 1852–1858
https://doi.org/10.1016/j.patcog.2011.03.009
10 Zhu X, Cohen S, Schiller S, Milanfar P. Estimating spatially varying defocus blur from a single image. IEEE Transactions on Image Processing, 2013, 22(12): 4879–4891
11 Tang C, Hou C, Song Z. Defocus map estimation from a single image via spectrum contrast. Optics Letters, 2013, 38(10): 1706–1708
12 Zhang X, Wang R, Jiang X, Wang W, Gao W. Spatially variant defocus blur map estimation and deblurring from a single image. Journal of Visual Communication and Image Representation, 2016, 35: 257–264
13 Tai Y W, Brown M S. Single image defocus map estimation using local contrast prior. In: Proceedings of the 16th IEEE International Conference on Image Processing. 2009, 1797–1800
14 Shan Q, Jia J, Agarwala A. High-quality motion deblurring from a single image. ACM Transactions on Graphics (TOG), 2008, 27(3): 1–10
15 Rajabzadeh T, Vahedian A, Pourreza H. Static object depth estimation using defocus blur levels features. In: Proceedings of the 6th International Conference on Wireless Communications Networking and Mobile Computing. 2010, 1–4
16 Mavridaki E, Mezaris V. No-reference blur assessment in natural images using Fourier transform and spatial pyramids. In: Proceedings of the IEEE International Conference on Image Processing (ICIP). 2014, 566–570
17 Lin J, Ji X, Xu W, Dai Q. Absolute depth estimation from a single defocused image. IEEE Transactions on Image Processing, 2013, 21(11): 4545–4550
18 Zhou C, Lin S, Nayar S K. Coded aperture pairs for depth from defocus and defocus deblurring. International Journal of Computer Vision, 2011, 93(1): 53–72
19 Liu R, Li Z, Jia J. Image partial blur detection and classification. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2008, 1–8
20 Tang C, Wu J, Hou Y, Wang P, Li W. A spectral and spatial approach of coarse-to-fine blurred image region detection. IEEE Signal Processing Letters, 2016, 23(11): 1652–1656
21 Yi X, Eramian M. LBP-based segmentation of defocus blur. IEEE Transactions on Image Processing, 2016, 25(4): 1626–1638
22 Hassen R, Wang Z, Salama M M. Image sharpness assessment based on local phase coherence. IEEE Transactions on Image Processing, 2013, 22(7): 2798–2810
23 Xiao H, Lu W, Li R, Zhong N, Yeung Y, Chen J. Defocus blur detection based on multiscale SVD fusion in gradient domain. Journal of Visual Communication and Image Representation, 2019, 59: 52–61
24 Chakrabarti A, Zickler T, Freeman W T. Analyzing spatially-varying blur. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2010
25 Golestaneh S A, Karam L J. Spatially-varying blur detection based on multiscale fused and sorted transform coefficients of gradient magnitudes. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2017, 5800–5809
26 Zhao W, Zheng B, Lin Q, Lu H. Enhancing diversity of defocus blur detectors via cross-ensemble network. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2019, 8905–8913
27 Zhang Y, Hirakawa K. Blur processing using double discrete wavelet transform. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2013, 1091–1098
28 Shi J, Xu L, Jia J. Just noticeable defocus blur detection and estimation. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2015, 657–665
29 Pang Y, Zhu H, Li X, Li X. Classifying discriminative features for blur detection. IEEE Transactions on Cybernetics, 2015, 46(10): 2220–2227
30 Kim B, Son H, Park S J, Cho S, Lee S. Defocus and motion blur detection with deep contextual features. In: Proceedings of Computer Graphics Forum. 2018, 277–288
31 Park J, Tai Y W, Cho D, Kweon I S. A unified approach of multi-scale deep and hand-crafted features for defocus estimation. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2017, 1736–1745
32 Tang C, Zhu X, Liu X, Wang L, Zomaya A. DeFusionNET: defocus blur detection via recurrently fusing and refining multi-scale deep features. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2019, 2700–2709
33 Nigam S, Singh R, Misra A. Local binary patterns based facial expression recognition for efficient smart applications. In: Hassanien A, Elhoseny M, Ahmed S, Singh A, eds. Security in Smart Cities: Models, Applications and Challenges. Springer, Cham, 2019, 297–322
34 Kumar G S, Mohan P K. Local mean differential excitation pattern for content based image retrieval. SN Applied Sciences, 2019, 1(1): 1–10
35 Zhao W, Zhao F, Wang D, Lu H. Defocus blur detection via multi-stream bottom-top-bottom fully convolutional network. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2018, 3080–3088