Citation: WANG Tianyuan, LUO Xiaoqing, ZHANG Zhancheng. Infrared and Visible Image Fusion Based on Self-attention Learning[J]. Infrared Technology, 2023, 45(2): 171-177.
[1] MA J Y, MA Y, LI C. Infrared and visible image fusion methods and applications: a survey[J]. Information Fusion, 2019, 45: 153-178. DOI: 10.1016/j.inffus.2018.02.004
[2] YU X C, GAO G Y, XU J D, et al. Remote sensing image fusion based on sparse representation[C]//2014 IEEE Geoscience and Remote Sensing Symposium, 2014: 2858-2861.
[3] ZHAO W D, LU H C. Medical image fusion and denoising with alternating sequential filter and adaptive fractional order total variation[J]. IEEE Transactions on Instrumentation and Measurement, 2017, 66(9): 2283-2294. DOI: 10.1109/TIM.2017.2700198
[4] LI Y S, TAO C, et al. Unsupervised multilayer feature learning for satellite image scene classification[J]. IEEE Geoscience and Remote Sensing Letters, 2016, 13(2): 157-161. DOI: 10.1109/LGRS.2015.2503142
[5] JIN X, JIANG Q, et al. A survey of infrared and visual image fusion methods[J]. Infrared Physics & Technology, 2017, 85: 478-501.
[6] BAI X, ZHANG Y, ZHOU F, et al. Quadtree-based multi-focus image fusion using a weighted focus-measure[J]. Information Fusion, 2015, 22: 105-118. DOI: 10.1016/j.inffus.2014.05.003
[7] BAI X Z. Infrared and visual image fusion through feature extraction by morphological sequential toggle operator[J]. Infrared Physics & Technology, 2015, 71: 77-86.
[8] LIU Y, CHEN X, PENG H, et al. Multi-focus image fusion with a deep convolutional neural network[J]. Information Fusion, 2017, 36: 191-207. DOI: 10.1016/j.inffus.2016.12.001
[9] MA J Y, YU W, LIANG P W, et al. FusionGAN: a generative adversarial network for infrared and visible image fusion[J]. Information Fusion, 2019, 48: 11-26. DOI: 10.1016/j.inffus.2018.09.004
[10] LI H, WU X J. DenseFuse: a fusion approach to infrared and visible images[J]. IEEE Transactions on Image Processing, 2019, 28(5): 2614-2623. DOI: 10.1109/TIP.2018.2887342
[11] WANG X, GIRSHICK R, GUPTA A, et al. Non-local neural networks[C]//Computer Vision and Pattern Recognition, 2018: 7794-7803.
[12] ZHANG H, GOODFELLOW I, METAXAS D, et al. Self-attention generative adversarial networks[C]//International Conference on Machine Learning, 2019: 7354-7363.
[13] YANG X L, LIN S Z. Method for multi-band image feature-level fusion based on attention mechanism[J]. Journal of Xidian University, 2020, 47(1): 123-130. (in Chinese) https://www.cnki.com.cn/Article/CJFDTOTAL-XDKD202001018.htm
[14] JIAN L, YANG X, LIU Z, et al. A symmetric encoder-decoder with residual block for infrared and visible image fusion[J]. IEEE Transactions on Instrumentation and Measurement, 2021, 70: 1-15.
[15] IOFFE S, SZEGEDY C. Batch normalization: accelerating deep network training by reducing internal covariate shift[C]//Proceedings of the 32nd International Conference on Machine Learning, 2015, 37: 448-456.
[16] ZHANG Y, LIU Y, SUN P, et al. IFCNN: a general image fusion framework based on convolutional neural network[J]. Information Fusion, 2020, 54: 99-118. DOI: 10.1016/j.inffus.2019.07.011
[17] YAN H, YU X, et al. Single image depth estimation with normal guided scale invariant deep convolutional fields[J]. IEEE Transactions on Circuits and Systems for Video Technology, 2019, 29(1): 80-92.
[18] LIU Y, CHEN X, WARD R, et al. Image fusion with convolutional sparse representation[J]. IEEE Signal Processing Letters, 2016, 23(12): 1882-1886. DOI: 10.1109/LSP.2016.2618776
[19] MA J Y, CHEN C, LI C, et al. Infrared and visible image fusion via gradient transfer and total variation minimization[J]. Information Fusion, 2016, 31: 100-109. DOI: 10.1016/j.inffus.2016.02.001
[20] MA J Y, XU H, JIANG J, et al. DDcGAN: a dual-discriminator conditional generative adversarial network for multi-resolution image fusion[J]. IEEE Transactions on Image Processing, 2020, 29: 4980-4995. DOI: 10.1109/TIP.2020.2977573
Cited by:
1. 王敷轩, 庞珊. Infrared and visible image fusion based on multi-granularity cross-modal feature enhancement[J]. Journal of Dongguan University of Technology, 2024(3): 32-37. (in Chinese)
2. 李立, 易诗, 刘茜, 程兴豪, 王铖. Infrared image deblurring based on a dense residual generative adversarial network[J]. Infrared Technology, 2024(6): 663-671. (in Chinese)
3. 杨艳春, 雷慧云, 杨万轩. Infrared and visible image fusion based on a fast joint bilateral filter and improved PCNN[J]. Infrared Technology, 2024(8): 892-901. (in Chinese)
4. 陈广秋, 温奇璋, 尹文卿, 段锦, 黄丹丹. Attention residual dense fusion network for infrared and visible image fusion[J]. Journal of Electronic Measurement and Instrumentation, 2023(8): 182-193. (in Chinese)