Abstract:
An unsupervised generative adversarial network is proposed to address the boundary blurring and information loss that existing multi-focus image fusion methods exhibit at the junction of focused and defocused regions. By constructing a generator with a complex attention feature extraction module, the network fully extracts both global and local features of the source images and strengthens the learning of image color information. The combined gradients of the source images are fed to the discriminator to enhance the extraction of texture detail. A structure-perception loss combining structural similarity (SSIM) and peak signal-to-noise ratio (PSNR) is proposed to further improve the quality of the fused image. Experiments on the Lytro dataset show that, compared with seven representative fusion algorithms, the proposed method achieves strong fusion performance in both subjective and objective evaluation: the PSNR, AG, SF, and EI metrics reach 52.38, 8.25, 22.74, and 85.96, respectively, improvements of 5.5%, 2.2%, 1.4%, and 2.1% over the second-best algorithm.
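To make the structure-perception loss concrete, the following is a minimal sketch of one plausible way to combine SSIM and PSNR into a single training objective. The weighting factor `alpha`, the PSNR normalization scale, the use of a global (non-windowed) SSIM, and all function names are illustrative assumptions, not the paper's exact formulation.

```python
import torch

def psnr(fused: torch.Tensor, reference: torch.Tensor, max_val: float = 1.0) -> torch.Tensor:
    """Peak signal-to-noise ratio in dB for images scaled to [0, max_val]."""
    mse = torch.mean((fused - reference) ** 2)
    return 10.0 * torch.log10(max_val ** 2 / (mse + 1e-12))

def global_ssim(x: torch.Tensor, y: torch.Tensor, max_val: float = 1.0) -> torch.Tensor:
    """Simplified SSIM computed over the whole image (no sliding window)."""
    c1, c2 = (0.01 * max_val) ** 2, (0.03 * max_val) ** 2
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(unbiased=False), y.var(unbiased=False)
    cov = ((x - mu_x) * (y - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)
    )

def structure_perception_loss(fused: torch.Tensor,
                              reference: torch.Tensor,
                              alpha: float = 0.5,        # assumed balance weight
                              psnr_scale: float = 50.0   # assumed dB normalizer
                              ) -> torch.Tensor:
    """Loss falls as SSIM rises and as PSNR approaches psnr_scale dB.
    Both hyperparameters are assumptions for this sketch."""
    ssim_term = 1.0 - global_ssim(fused, reference)
    psnr_term = torch.clamp(1.0 - psnr(fused, reference) / psnr_scale, min=0.0)
    return alpha * ssim_term + (1.0 - alpha) * psnr_term
```

In a design of this kind, the SSIM term drives structural fidelity while the normalized PSNR term penalizes pixel-level distortion; the clamp keeps the PSNR term non-negative so neither component can offset the other.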