Infrared and Visible Image Fusion Based on an Information Bottleneck Siamese Autoencoder Network

    Abstract: Existing infrared and visible image fusion methods suffer from insufficient information extraction, incomplete feature disentanglement, and limited interpretability. To fully extract and fuse the effective information of the source images, this paper proposes an infrared and visible image fusion method based on an information bottleneck Siamese autoencoder network (DIBF: Double Information Bottleneck Fusion). The method disentangles complementary and redundant features by constructing an information bottleneck module on each Siamese branch: the expression of complementary information corresponds to the feature-fitting stage in the first half of the information bottleneck, while the compression of redundant features corresponds to the feature-compression stage in the second half. In this way, information extraction and fusion are cast as an information bottleneck trade-off, and fusion is achieved by seeking the optimal representation of the information. In the information bottleneck module, the network learns an information weight map for the features during training and, guided by this map, compresses redundant features toward the mean feature, while the loss function promotes the expression of complementary information; compression and expression are optimized jointly in a balanced manner, and redundant and complementary information are decoupled in the process. In the fusion stage, the information weight map is incorporated into the fusion rule, which enriches the information content of the fused image. Subjective and objective experiments on the standard TNO dataset, with comparisons against traditional and recent fusion methods, show that the proposed method effectively fuses the useful information of infrared and visible images and achieves good results in both visual perception and quantitative metrics.
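    The following is a minimal, illustrative sketch (not the authors' released code) of the two ideas named in the abstract: an information bottleneck block that learns an information weight map and compresses low-weight responses toward a mean feature, and a fusion rule guided by the weight maps. The 1x1-convolution-plus-sigmoid weight head, the channel-wise mean as the "mean feature", and the normalized weighted-sum fusion formula are all assumptions made for illustration; the paper does not specify these details here.

```python
# Illustrative sketch only: a simplified information-bottleneck block and a
# weight-map-guided fusion rule, under the assumptions stated above.
import torch
import torch.nn as nn


class InfoBottleneckBlock(nn.Module):
    """Learns an information weight map and compresses low-weight
    (assumed redundant) responses toward the mean feature."""

    def __init__(self, channels: int = 64):
        super().__init__()
        # Hypothetical weight-map head: 1x1 conv followed by a sigmoid,
        # producing per-pixel, per-channel weights in [0, 1].
        self.weight_head = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, feat: torch.Tensor):
        w = self.weight_head(feat)                  # information weight map
        mean_feat = feat.mean(dim=1, keepdim=True)  # mean feature (compression target)
        # High-weight responses are kept (expression of complementary info);
        # low-weight responses are pushed toward the mean (compression of redundancy).
        z = w * feat + (1.0 - w) * mean_feat
        return z, w


def fuse(z_ir, w_ir, z_vis, w_vis, eps: float = 1e-6):
    """Weight-map-guided fusion rule: softly favors, at each location,
    the branch whose features carry the larger information weight."""
    return (w_ir * z_ir + w_vis * z_vis) / (w_ir + w_vis + eps)


if __name__ == "__main__":
    ib_ir, ib_vis = InfoBottleneckBlock(64), InfoBottleneckBlock(64)
    f_ir = torch.randn(1, 64, 128, 128)   # infrared branch features
    f_vis = torch.randn(1, 64, 128, 128)  # visible branch features
    z_ir, w_ir = ib_ir(f_ir)
    z_vis, w_vis = ib_vis(f_vis)
    fused = fuse(z_ir, w_ir, z_vis, w_vis)
    print(fused.shape)  # torch.Size([1, 64, 128, 128])
```

    In this sketch the expression/compression trade-off described in the abstract would be driven by the training loss (not shown), which rewards retaining complementary responses while penalizing the information kept from redundant ones; the same weight maps are then reused at fusion time.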
