FENG Rui, YUAN Hongwu, ZHOU Yuye, WANG Feng. Fusion Method for Polarization Direction Image Based on Double-branch Antagonism Network[J]. Infrared Technology, 2024, 46(3): 288-294.


Fusion Method for Polarization Direction Image Based on Double-branch Antagonism Network


Abstract: To improve the quality of fused polarization images, this study presents a double-branch antagonism network (DANet) for polarization direction images. The network comprises three main modules: feature extraction, feature fusion, and feature transformation. First, the feature extraction module consists of a low-frequency branch and a high-frequency branch: the polarization direction images at 0°, 45°, 90°, and 135° are concatenated and input to the low-frequency branch to extract energy features, while two pairs of antagonistic images (0° and 90°; 45° and 135°) are differenced and input to the high-frequency branch to extract detail features. Second, the extracted energy and detail features are fused. Finally, the fused features are transformed into the fused image. Experimental results show that the fused images obtained by DANet achieve clearly better visual quality and evaluation metrics: compared with the composite intensity image I and the polarization antagonistic images Sd, Sdd, Sh, and Sv, the average gradient, information entropy, spatial frequency, and mean gray value improve by at least 22.16%, 9.23%, 23.44%, and 38.71%, respectively.
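The branch inputs and the evaluation metrics named in the abstract can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the function names are invented here, and the metric formulas are the standard definitions of average gradient, information entropy, and spatial frequency, which the paper is assumed to follow.

```python
import numpy as np

def danet_inputs(i0, i45, i90, i135):
    """Form the two branch inputs described in the abstract.

    Low-frequency branch: the four polarization direction images
    concatenated along a channel axis (energy features).
    High-frequency branch: the two antagonistic difference images,
    0° - 90° and 45° - 135° (detail features).
    """
    low = np.stack([i0, i45, i90, i135], axis=0)     # shape (4, H, W)
    high = np.stack([i0 - i90, i45 - i135], axis=0)  # shape (2, H, W)
    return low, high

def average_gradient(img):
    """Average gradient (AG): mean magnitude of local intensity change."""
    gx = np.diff(img, axis=1)[:-1, :]  # horizontal differences, trimmed
    gy = np.diff(img, axis=0)[:, :-1]  # vertical differences, trimmed
    return np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2.0))

def spatial_frequency(img):
    """Spatial frequency (SF): sqrt(RF^2 + CF^2) of row/column differences."""
    rf = np.sqrt(np.mean(np.diff(img, axis=1) ** 2))
    cf = np.sqrt(np.mean(np.diff(img, axis=0) ** 2))
    return np.sqrt(rf ** 2 + cf ** 2)

def entropy(img, levels=256):
    """Shannon information entropy of the gray-level histogram (bits)."""
    hist, _ = np.histogram(img, bins=levels, range=(0, levels))
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))
```

The mean gray value reported in the experiments is simply `np.mean(img)`; all four metrics are higher-is-better measures of detail, information content, and brightness in the fused result.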

     
