Multi-Feature Adaptive Fusion Method for Infrared and Visible Images

  • Abstract: Owing to their different imaging mechanisms, infrared images characterize salient targets through pixel intensity distributions, whereas visible images describe texture details through edges and gradients. Existing fusion methods cannot adapt to the characteristics of the source images, so their results fail to preserve infrared target features and visible texture details simultaneously. To address this, a multi-feature adaptive fusion method for infrared and visible images is proposed. First, a multi-scale densely connected network is constructed that effectively aggregates intermediate features across all scales and levels, strengthening both feature extraction and feature reconstruction. Second, a multi-feature adaptive loss function is designed: the VGG-16 network extracts multi-scale features from the source images, pixel intensity and gradient serve as the measurement criteria, and feature weight coefficients are computed from the degree of feature preservation. Supervising network training with this loss balances the extraction of each source image's feature information and yields better fusion results. Experiments on public datasets demonstrate that the proposed method outperforms other representative methods in both subjective and objective evaluations.
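The weighting idea described in the abstract can be sketched numerically. The snippet below is a minimal illustration, not the paper's implementation: it replaces the VGG-16 multi-scale feature preservation degrees with simple proxies (mean pixel intensity for the infrared image, mean gradient magnitude for the visible image), normalizes them into adaptive weight coefficients, and combines an intensity term and a gradient term into a single loss. All function names and the finite-difference gradient are assumptions made for this sketch.

```python
import numpy as np

def grad_mag(img: np.ndarray) -> np.ndarray:
    """Gradient magnitude via forward finite differences (a simple
    stand-in for the gradient measurement criterion in the paper)."""
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:, :-1] = img[:, 1:] - img[:, :-1]
    gy[:-1, :] = img[1:, :] - img[:-1, :]
    return np.hypot(gx, gy)

def adaptive_weights(ir: np.ndarray, vis: np.ndarray, eps: float = 1e-8):
    """Compute normalized feature weight coefficients.
    Proxies only: the paper derives these from VGG-16 multi-scale
    features, which are not reproduced here."""
    s_ir = ir.mean()              # intensity salience of the IR source
    s_vis = grad_mag(vis).mean()  # gradient salience of the visible source
    w_ir = s_ir / (s_ir + s_vis + eps)
    return w_ir, 1.0 - w_ir       # coefficients sum to 1

def fusion_loss(fused: np.ndarray, ir: np.ndarray, vis: np.ndarray) -> float:
    """Multi-feature loss: intensity fidelity to the IR image plus
    gradient fidelity to the visible image, adaptively weighted."""
    w_ir, w_vis = adaptive_weights(ir, vis)
    intensity_term = np.mean((fused - ir) ** 2)
    grad_term = np.mean((grad_mag(fused) - grad_mag(vis)) ** 2)
    return float(w_ir * intensity_term + w_vis * grad_term)
```

Because the weights are recomputed per image pair, a source with strong thermal targets pulls the loss toward intensity preservation, while a texture-rich visible image pulls it toward gradient preservation, mirroring the adaptive behavior the abstract describes.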