Journal of Guangdong University of Technology ›› 2024, Vol. 41 ›› Issue (01): 55-62.doi: 10.12052/gdutxb.220039

• Computer Science and Technology •

Low Illumination Image Enhancement Algorithm Based on Generative Adversarial Network

Yang Zhen-xiong, Tan Tai-zhe   

  1. School of Computer Science and Technology, Guangdong University of Technology, Guangzhou 510006, China
  • Received: 2022-03-02 Online: 2024-01-25 Published: 2024-02-01

Abstract: Existing deep learning-based methods have achieved promising performance in low-illumination image enhancement. However, they usually need to be trained on paired datasets, which are difficult to collect, and most existing methods suffer from incomplete enhancement and residual noise on real low-illumination images. To address these problems, an unsupervised generative adversarial network is designed for low-illumination image enhancement, which requires no paired training data. The proposed network consists of two subnetworks: an attention network and an enhancement network. The attention network distinguishes the low-light regions of a low-illumination image from the bright regions, and the residual enhancement network, trained jointly with a global-local discriminator, enhances the image accordingly. In this way, a low-illumination image can be well enhanced. Extensive experimental results show that the proposed method outperforms the baselines EnlightenGAN and CycleGAN on low-light image enhancement.
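The paper provides no code here; as a rough illustration of the architecture described in the abstract, the sketch below pairs an attention map (inverted illumination, the common heuristic in unpaired enhancement work such as EnlightenGAN) with a small residual enhancement generator. All module names, layer counts, and channel sizes are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

class AttentionMap(nn.Module):
    """Estimate a per-pixel attention map from an RGB low-light image.
    Dark regions get weights near 1 and bright regions near 0, so the
    enhancement focuses on under-exposed areas."""
    def forward(self, x):  # x: (N, 3, H, W), values in [0, 1]
        # Approximate illumination by the max over the RGB channels.
        illum = x.max(dim=1, keepdim=True).values
        return 1.0 - illum  # (N, 1, H, W)

class ResidualEnhancer(nn.Module):
    """Toy residual enhancement generator: predicts a residual image,
    scales it by the attention map, and adds it back to the input."""
    def __init__(self, ch=16):
        super().__init__()
        self.attn = AttentionMap()
        self.body = nn.Sequential(
            nn.Conv2d(4, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, 3, 3, padding=1),
        )
    def forward(self, x):
        a = self.attn(x)
        # The attention map is both a network input and a residual gate.
        residual = self.body(torch.cat([x, a], dim=1))
        return torch.clamp(x + a * residual, 0.0, 1.0)
```

In a full unpaired setup, this generator would be trained adversarially against a global discriminator on whole images plus a local discriminator on random crops, in the global-local style of references [26] and [27].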

Key words: generative adversarial network, low-light image enhancement, attentional mechanism

CLC Number: TP391
[1] ZHOU Z, FENG Z, LIU J, et al. Single-image low-light enhancement via generating and fusing multiple sources [J]. Neural Computing and Applications, 2020, 32: 6455-6465.
[2] GHARBI M, CHEN J, BARRON J T, et al. Deep bilateral learning for real-time image enhancement [J]. ACM Transactions on Graphics (TOG), 2017, 36(4): 118.
[3] GUO X, LI Y, LING H. LIME: Low-light image enhancement via illumination map estimation [J]. IEEE Transactions on Image Processing, 2016, 26(2): 982-993.
[4] LORE K G, AKINTAYO A, SARKAR S. LLNet: a deep autoencoder approach to natural low-light image enhancement [J]. Pattern Recognition, 2017, 61: 650-662.
[5] SHEN L, YUE Z, FENG F, et al. MSR-net: low-light image enhancement using deep convolutional network[EB/OL]. arXiv: 1711.02488 (2017-11-07) [2022-03-25]. https://doi.org/10.48550/arXiv.1711.02488.
[6] WEI C, WANG W, YANG W, et al. Deep retinex decomposition for low-light enhancement[EB/OL]. arXiv: 1808.04560 (2018-08-14) [2022-03-25]. https://doi.org/10.48550/arXiv.1808.04560.
[7] CHEN C, CHEN Q, XU J, et al. Learning to see in the dark[EB/OL]. arXiv: 1805.01934 (2018-05-04) [2022-03-25]. https://doi.org/10.48550/arXiv.1805.01934.
[8] KALANTARI N K, RAMAMOORTHI R. Deep high dynamic range imaging of dynamic scenes[J]. ACM Transactions on Graphics, 2017, 36(4): 1-12.
[9] WU X, SHAO J, GAO L, et al. Unpaired image-to-image translation from shared deep space[C]//2018 25th IEEE International Conference on Image Processing (ICIP). Athens: IEEE, 2018: 2127-2131.
[10] LIU M Y, BREUEL T, KAUTZ J. Unsupervised image-to-image translation networks[EB/OL]. arXiv: 1703.00848 (2018-07-23) [2022-03-25]. https://doi.org/10.48550/arXiv.1703.00848.
[11] MADAM N T, KUMAR S, RAJAGOPALAN A N. Unsupervised class-specific deblurring[C]//Computer Vision–ECCV 2018: 15th European Conference. Munich: Springer International Publishing, 2018: 358-374.
[12] HUANG X, LIU M Y, BELONGIE S, et al. Multimodal unsupervised image-to-image translation[EB/OL]. arXiv: 1804.04732 (2018-08-14) [2022-03-25]. https://doi.org/10.48550/arXiv.1804.04732.
[13] CHOI Y, CHOI M, KIM M, et al. StarGAN: unified generative adversarial networks for multi-domain image-to-image translation[EB/OL]. arXiv: 1711.09020 (2018-09-21) [2022-03-25]. https://doi.org/10.48550/arXiv.1711.09020.
[14] HOFFMAN J, TZENG E, PARK T, et al. CyCADA: cycle-consistent adversarial domain adaptation[EB/OL]. arXiv: 1711.03213 (2017-12-29) [2022-03-25]. https://doi.org/10.48550/arXiv.1711.03213.
[15] ZHU J Y, PARK T, ISOLA P, et al. Unpaired image-to-image translation using cycle-consistent adversarial networks[EB/OL]. arXiv: 1703.10593 (2020-08-24) [2022-03-25]. https://doi.org/10.48550/arXiv.1703.10593.
[16] GOODFELLOW I, POUGET-ABADIE J, MIRZA M, et al. Generative adversarial networks [J]. Communications of the ACM, 2020, 63(11): 139-144.
[17] GAWADE A, PANDHARKAR R, DEOLEKAR S. Gantoon: creative cartoons using generative adversarial network[C]//Information, Communication and Computing Technology: 5th International Conference, ICICCT 2020. Singapore: Springer, 2020: 222-230.
[18] HU D. An introductory survey on attention mechanisms in NLP problems[EB/OL]. arXiv: 1811.05544 (2018-11-12) [2022-03-25]. https://doi.org/10.48550/arXiv.1811.05544.
[19] WANG F, TAX D M J. Survey on the attention based RNN model and its applications in computer vision[EB/OL]. arXiv: 1601.06823 (2016-01-25) [2022-03-25]. https://doi.org/10.48550/arXiv.1601.06823.
[20] JIANG X, ZHANG L, XU M, et al. Attention scaling for crowd counting[C]//2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Seattle, WA: IEEE. 2020: 4706-4715.
[21] HAN Y S, YOO J, YE J C. Deep residual learning for compressed sensing CT reconstruction via persistent homology analysis[EB/OL]. arXiv: 1611.06391 (2016-11-25) [2022-03-25]. https://doi.org/10.48550/arXiv.1611.06391.
[22] SHIBATA N, TANITO M, MITSUHASHI K, et al. Development of a deep residual learning algorithm to screen for glaucoma from fundus photography [J]. Scientific Reports, 2018, 8(1): 14665.
[23] KUANG D Y. On reducing negative jacobian determinant of the deformation predicted by deep registration networks[EB/OL]. arXiv: 1907.00068 (2019-06-28) [2022-03-25]. https://doi.org/10.48550/arXiv.1907.00068.
[24] XIONG W, LIU D, SHEN X, et al. Unsupervised real-world low-light image enhancement with decoupled networks[EB/OL]. arXiv: 2005.02818 (2020-05-06) [2022-03-25]. https://doi.org/10.48550/arXiv.2005.02818.
[25] RONNEBERGER O, FISCHER P, BROX T. U-net: convolutional networks for biomedical image segmentation[EB/OL]. arXiv: 1505.04597 (2015-05-18) [2022-03-25]. https://doi.org/10.48550/arXiv.1505.04597.
[26] JIANG Y, GONG X, LIU D, et al. EnlightenGAN: deep light enhancement without paired supervision [J]. IEEE Transactions on Image Processing, 2021, 30: 2340-2349.
[27] IIZUKA S, SIMO-SERRA E, ISHIKAWA H. Globally and locally consistent image completion [J]. ACM Transactions on Graphics (TOG), 2017, 36(4): 107.
[28] MAO X, LI Q, XIE H, et al. Least squares generative adversarial networks[EB/OL]. arXiv: 1611.04076 (2017-04-05) [2022-03-25]. https://doi.org/10.48550/arXiv.1611.04076.
[29] HUYNH-THU Q, GHANBARI M. Scope of validity of PSNR in image/video quality assessment [J]. Electronics Letters, 2008, 44(13): 800-801.