Citation: DI Jing, REN Li, LIU Jizhao, GUO Wenqing, LIAN Jing. Infrared and Visible Image Fusion Based on Three-branch Adversarial Learning and Compensation Attention Mechanism[J]. Infrared Technology, 2024, 46(5): 510-521.

Infrared and Visible Image Fusion Based on Three-branch Adversarial Learning and Compensation Attention Mechanism

More Information
  • Received Date: September 06, 2023
  • Revised Date: December 10, 2023
  • Available Online: May 23, 2024
  • Abstract: Existing deep-learning image fusion methods rely on convolution to extract features and neglect the global features of the source images, so their fusion results are prone to blurred texture details and low contrast. To address these problems, this study proposes an infrared and visible image fusion method based on three-branch adversarial learning and a compensation attention mechanism. First, the generator network uses dense blocks and the compensation attention mechanism to construct three local-global branches that extract feature information. The compensation attention mechanism is built from channel features and spatial feature variations to extract global information, infrared targets, and visible-light detail representations. Then, a focusing dual-adversarial discriminator is designed to determine the similarity distribution between the fusion result and the source images. Finally, experiments are conducted on the public TNO and RoadScene datasets, and the proposed method is compared with nine representative image fusion methods. The proposed method not only produces fusion results with clearer texture details and better contrast, but also outperforms the other advanced methods on objective metrics.
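    To make the attention design described above more concrete, the sketch below shows one way a compensation-style attention block could be assembled in PyTorch: channel attention derived from globally pooled channel statistics, spatial attention derived from channel-wise mean/max variations, and a residual connection so the attended features compensate rather than replace the input. The class name, reduction ratio, kernel size, and residual formulation are illustrative assumptions for this sketch, not the authors' published implementation.

    # Hedged sketch of a channel + spatial "compensation" attention block.
    # All names and hyperparameters here are assumptions, not the paper's code.
    import torch
    import torch.nn as nn

    class CompensationAttention(nn.Module):
        def __init__(self, channels: int, reduction: int = 8):
            super().__init__()
            # Channel attention: squeeze global channel statistics, then re-weight channels.
            self.channel_fc = nn.Sequential(
                nn.Conv2d(channels, channels // reduction, kernel_size=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(channels // reduction, channels, kernel_size=1),
            )
            # Spatial attention: 2-channel (mean, max) map -> single spatial weight map.
            self.spatial_conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # Channel weights from global average- and max-pooled descriptors.
            avg = torch.mean(x, dim=(2, 3), keepdim=True)
            mx = torch.amax(x, dim=(2, 3), keepdim=True)
            ch_att = torch.sigmoid(self.channel_fc(avg) + self.channel_fc(mx))
            feat = x * ch_att
            # Spatial weights from channel-wise mean/max "feature variations".
            sp = torch.cat([feat.mean(dim=1, keepdim=True),
                            feat.amax(dim=1, keepdim=True)], dim=1)
            sp_att = torch.sigmoid(self.spatial_conv(sp))
            # Residual compensation: add the input back so attention refines, not replaces, features.
            return feat * sp_att + x

    if __name__ == "__main__":
        feats = torch.randn(1, 64, 128, 128)      # e.g. features from a dense block
        out = CompensationAttention(64)(feats)
        print(out.shape)                           # torch.Size([1, 64, 128, 128])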

  • Related Articles

    [1] LI Minglu, WANG Xiaoxia, HOU Maoxin, YANG Fengbao. An Object Detection Algorithm Based on Infrared-Visible Feature Enhancement and Fusion[J]. Infrared Technology, 2025, 47(3): 385-394.
    [2] LIAO Guangfeng, GUAN Zhiwei, CHEN Qiang. An Improved Dual Discriminator Generative Adversarial Network Algorithm for Infrared and Visible Image Fusion[J]. Infrared Technology, 2025, 47(3): 367-375.
    [3] LIU Xiaopeng, ZHANG Tao. Global-Local Attention-Guided Reconstruction Network for Infrared Image[J]. Infrared Technology, 2024, 46(7): 791-801.
    [4] CHONG Fating, DONG Zhangyu, YANG Xuezhi, ZENG Qingwang. SAR and Multispectral Image Fusion Based on Dual-channel Multi-scale Feature Extraction and Attention[J]. Infrared Technology, 2024, 46(1): 61-73.
    [5] GAO Meiling, DUAN Jin, ZHAO Weiqiang, HU Qi. Near-infrared Image Colorization Method Based on a Dilated Global Attention Mechanism[J]. Infrared Technology, 2023, 45(10): 1096-1105.
    [6] CHEN Xin. Infrared and Visible Image Fusion Using Double Attention Generative Adversarial Networks[J]. Infrared Technology, 2023, 45(6): 639-648.
    [7] WANG Tianyuan, LUO Xiaoqing, ZHANG Zhancheng. Infrared and Visible Image Fusion Based on Self-attention Learning[J]. Infrared Technology, 2023, 45(2): 171-177.
    [8] WU Yuanyuan, WANG Zhishe, WANG Junyao, SHAO Wenyu, CHEN Yanlin. Infrared and Visible Image Fusion Using Attention-Based Generative Adversarial Networks[J]. Infrared Technology, 2022, 44(2): 170-178.
    [9] HUANG Mengtao, GAO Na, LIU Bao. Image Deblurring Method Based on a Dual-Discriminator Weighted Generative Adversarial Network[J]. Infrared Technology, 2022, 44(1): 41-46.
    [10] LUO Di, WANG Congqing, ZHOU Yongjun. A Visible and Infrared Image Fusion Method based on Generative Adversarial Networks and Attention Mechanism[J]. Infrared Technology, 2021, 43(6): 566-574.
