Frontiers of Optoelectronics


Front. Optoelectron., 2024, Vol. 17, Issue 3: 28. https://doi.org/10.1007/s12200-024-00129-z
Low-light enhancement method with dual branch feature fusion and learnable regularized attention
Yixiang Sun1, Mengyao Ni1, Ming Zhao1, Zhenyu Yang1, Yuanlong Peng2, Danhua Cao1
1. School of Optical and Electronic Information, Huazhong University of Science and Technology, Wuhan 430074, China
2. State Grid Information & Telecommunication Branch, Beijing 100761, China
Abstract

Restricted by the lighting conditions, images captured at night tend to suffer from color aberration, noise, and other unfavorable factors, which hinders subsequent vision-based applications. To solve this problem, we propose a two-stage, size-controllable low-light enhancement method named Dual Fusion Enhancement Net (DFEN). The whole algorithm is built on a double U-Net structure, with the two networks performing brightness adjustment and detail revision, respectively. A dual-branch feature fusion module is adopted to strengthen feature extraction and aggregation. We also design a learnable regularized attention module to balance the enhancement effect across different regions. In addition, we introduce a cosine training strategy that smooths the transition of the training target from the brightness adjustment stage to the detail revision stage. The proposed DFEN is tested on several low-light datasets, and the experimental results demonstrate that it achieves superior enhancement results with a similar number of parameters. Notably, the lightest DFEN model reaches 11 FPS on 1224 × 1024 images on an RTX 3090 GPU.
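The cosine training strategy described above can be pictured as a smoothly decaying blend between the two stage objectives. The Python snippet below is a minimal sketch of one plausible interpretation, assuming hypothetical scalar losses loss_brightness and loss_detail from the two U-Nets and a cosine-shaped blending weight; the paper's exact schedule and loss terms may differ.

```python
import math

def cosine_blend_weight(epoch: int, total_epochs: int) -> float:
    """Cosine-shaped weight that decays smoothly from 1.0 to 0.0.

    Illustrative assumption only: at epoch 0 the weight is 1.0, so the
    loss focuses on brightness adjustment; by the final epoch it reaches
    0.0, so the loss focuses on detail revision.
    """
    progress = min(max(epoch / max(total_epochs - 1, 1), 0.0), 1.0)
    return 0.5 * (1.0 + math.cos(math.pi * progress))

def total_loss(loss_brightness: float, loss_detail: float,
               epoch: int, total_epochs: int) -> float:
    """Blend the two hypothetical stage losses with the cosine weight."""
    w = cosine_blend_weight(epoch, total_epochs)
    return w * loss_brightness + (1.0 - w) * loss_detail

if __name__ == "__main__":
    # Print the weight trajectory over an assumed 100-epoch run.
    for e in (0, 25, 50, 75, 99):
        print(f"epoch {e:3d}: weight = {cosine_blend_weight(e, 100):.3f}")
```

Under this assumed schedule, the weight starts at 1.0 (pure brightness adjustment), passes 0.5 at the midpoint, and decays to 0.0 (pure detail revision) by the final epoch, giving the smooth transition of the training target that the abstract describes.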

Keywords: Power inspection; Low-light enhancement; Feature fusion; Learnable regularized attention
Corresponding Author(s): Danhua Cao   
Issue Date: 09 September 2024
Cite this article:
Yixiang Sun, Mengyao Ni, Ming Zhao, et al. Low-light enhancement method with dual branch feature fusion and learnable regularized attention[J]. Front. Optoelectron., 2024, 17(3): 28.
 URL:  
https://academic.hep.com.cn/foe/EN/10.1007/s12200-024-00129-z
https://academic.hep.com.cn/foe/EN/Y2024/V17/I3/28