Citation: WU Yuanyuan, WANG Zhishe, WANG Junyao, SHAO Wenyu, CHEN Yanlin. Infrared and Visible Image Fusion Using Attention-Based Generative Adversarial Networks[J]. Infrared Technology, 2022, 44(2): 170-178.
[1] MA J, MA Y, LI C. Infrared and visible image fusion methods and applications: a survey[J]. Information Fusion, 2019, 45: 153-178. DOI: 10.1016/j.inffus.2018.02.004
[2] LI S, KANG X, FANG L, et al. Pixel-level image fusion: a survey of the state of the art[J]. Information Fusion, 2017, 33: 100-112. DOI: 10.1016/j.inffus.2016.05.004
[3] LIU Y, CHEN X, WANG Z, et al. Deep learning for pixel-level image fusion: recent advances and future prospects[J]. Information Fusion, 2018, 42: 158-173. DOI: 10.1016/j.inffus.2017.10.007
[4] LI S, YANG B, HU J. Performance comparison of different multi-resolution transforms for image fusion[J]. Information Fusion, 2011, 12(2): 74-84. DOI: 10.1016/j.inffus.2010.03.002
[5] ZHANG Q, LIU Y, BLUM R S, et al. Sparse representation based multi-sensor image fusion for multi-focus and multi-modality images: a review[J]. Information Fusion, 2018, 40: 57-75. DOI: 10.1016/j.inffus.2017.05.006
[6] ZHANG Xiaoye, MA Yong, ZHANG Ying, et al. Infrared and visible image fusion via saliency analysis and local edge-preserving multi-scale decomposition[J]. Journal of the Optical Society of America A: Optics, Image Science, and Vision, 2017, 34(8): 1400-1410.
[7] LIU Y, LIU S, WANG Z. A general framework for image fusion based on multi-scale transform and sparse representation[J]. Information Fusion, 2015, 24: 147-164. DOI: 10.1016/j.inffus.2014.09.004
[8] HAN J, PAUWELS E J, DE ZEEUW P. Fast saliency-aware multimodality image fusion[J]. Neurocomputing, 2013, 111: 70-80. DOI: 10.1016/j.neucom.2012.12.015
[9] YIN Haitao. Sparse representation with learned multiscale dictionary for image fusion[J]. Neurocomputing, 2015, 148: 600-610. DOI: 10.1016/j.neucom.2014.07.003
[10] WANG Zhishe, YANG Fengbao, PENG Zhihao, et al. Multi-sensor image enhanced fusion algorithm based on NSST and top-hat transformation[J]. Optik - International Journal for Light and Electron Optics, 2015, 126(23): 4184-4190. DOI: 10.1016/j.ijleo.2015.08.118
[11] CUI G, FENG H, XU Z, et al. Detail preserved fusion of visible and infrared images using regional saliency extraction and multi-scale image decomposition[J]. Optics Communications, 2015, 341: 199-209. DOI: 10.1016/j.optcom.2014.12.032
[12] LI Q, LU L, LI Z, et al. Coupled GAN with relativistic discriminators for infrared and visible images fusion[J]. IEEE Sensors Journal, 2021, 21(6): 7458-7467. DOI: 10.1109/JSEN.2019.2921803
[13] LIU Y, CHEN X, CHENG J, et al. Infrared and visible image fusion with convolutional neural networks[J]. International Journal of Wavelets, Multiresolution and Information Processing, 2018, 16(3): 1850018. DOI: 10.1142/S0219691318500182
[14] LI H, WU X J. DenseFuse: a fusion approach to infrared and visible images[J]. IEEE Transactions on Image Processing, 2019, 28(5): 2614-2623. DOI: 10.1109/TIP.2018.2887342
[15] XU H, MA J, JIANG J, et al. U2Fusion: a unified unsupervised image fusion network[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2020, 44(1): 502-518.
[16] HOU R, ZHOU D, NIE R, et al. VIF-Net: an unsupervised framework for infrared and visible image fusion[J]. IEEE Transactions on Computational Imaging, 2020, 6: 640-651. DOI: 10.1109/TCI.2020.2965304
[17] LI H, WU X J, KITTLER J. RFN-Nest: an end-to-end residual fusion network for infrared and visible images[J]. Information Fusion, 2021, 73: 72-86. DOI: 10.1016/j.inffus.2021.02.023
[18] MA J, WEI Y, LIANG P, et al. FusionGAN: a generative adversarial network for infrared and visible image fusion[J]. Information Fusion, 2019, 48: 11-26. DOI: 10.1016/j.inffus.2018.09.004
[19] MA J, LIANG P, YU W, et al. Infrared and visible image fusion via detail preserving adversarial learning[J]. Information Fusion, 2020, 54: 85-98. DOI: 10.1016/j.inffus.2019.07.005
[20] MA J, XU H, JIANG J, et al. DDcGAN: a dual-discriminator conditional generative adversarial network for multi-resolution image fusion[J]. IEEE Transactions on Image Processing, 2020, 29: 4980-4995. DOI: 10.1109/TIP.2020.2977573
[21] GAO S, CHENG M M, ZHAO K, et al. Res2Net: a new multi-scale backbone architecture[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2021, 43(2): 652-662. DOI: 10.1109/TPAMI.2019.2938758
[22] FU J, LIU J, TIAN H, et al. Dual attention network for scene segmentation[C]//2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019. DOI: 10.1109/CVPR.2019.00326
[23] NENCINI F, GARZELLI A, BARONTI S, et al. Remote sensing image fusion using the curvelet transform[J]. Information Fusion, 2007, 8(2): 143-156. DOI: 10.1016/j.inffus.2006.02.001
[24] LIU Y, WANG Z. Simultaneous image fusion and denoising with adaptive sparse representation[J]. IET Image Processing, 2015, 9(5): 347-357.
[25] MA J, ZHOU Z, WANG B, et al. Infrared and visible image fusion based on visual saliency map and weighted least square optimization[J]. Infrared Physics & Technology, 2017, 82: 8-17.
[26] ZHANG Y, LIU Y, SUN P, et al. IFCNN: a general image fusion framework based on convolutional neural network[J]. Information Fusion, 2020, 54: 99-118. DOI: 10.1016/j.inffus.2019.07.011