Citation: MA Luyao, LUO Xiaoqing, ZHANG Zhancheng. Infrared and Visible Image Fusion Based on Information Bottleneck Siamese Autoencoder Network[J]. Infrared Technology, 2024, 46(3): 314-324.

Infrared and Visible Image Fusion Based on Information Bottleneck Siamese Autoencoder Network

Infrared and visible image fusion methods suffer from insufficient information extraction, difficulty in decoupling features, and low interpretability. To fully extract and fuse the effective information of the source images, this paper proposes an infrared and visible image fusion method based on an information bottleneck siamese autoencoder network (DIBF: Double Information Bottleneck Fusion). The method disentangles complementary and redundant features by constructing an information bottleneck module on each siamese branch: the expression of complementary information corresponds to the feature-fitting stage in the first half of the information bottleneck, while the compression of redundant features corresponds to the feature-compression stage in its second half. In this way, information extraction and fusion are formulated as an information bottleneck trade-off, and fusion is achieved by finding the optimal expression of the information. In the information bottleneck module, the network learns an information weight map for each feature through training and uses the mean feature to compress redundant features according to this map. The loss function promotes the expression of complementary information, so the compression and expression parts are balanced and optimized jointly; in this process, redundant and complementary information are also decoupled. In the fusion stage, the information weight map is applied in the fusion rule, which improves the information richness of the fused image. Subjective and objective experiments on the standard TNO dataset, compared with traditional and recent fusion methods, show that the proposed method effectively fuses the useful information of infrared and visible images and achieves good results in both visual perception and quantitative metrics.
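The mechanism summarized above, a learned information weight map per branch, mean-feature compression of redundant activations, and a weight-map-guided fusion rule, can be illustrated with a minimal PyTorch sketch. The names InfoBottleneckBlock and fuse, the 1×1-convolution weight head, and the channel-wise mean are assumptions made for illustration; the actual DIBF architecture, loss terms, and fusion rule are defined in the paper itself.

```python
import torch
import torch.nn as nn

class InfoBottleneckBlock(nn.Module):
    """Sketch of one siamese branch's information bottleneck:
    predict an information weight map, then compress low-information
    (redundant) responses toward the mean feature."""
    def __init__(self, channels: int):
        super().__init__()
        # 1x1 conv + sigmoid gives a per-pixel information weight in [0, 1]
        self.weight_head = nn.Sequential(
            nn.Conv2d(channels, 1, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, feat: torch.Tensor):
        w = self.weight_head(feat)                       # B x 1 x H x W information weight map
        mean_feat = feat.mean(dim=(2, 3), keepdim=True)  # B x C x 1 x 1 channel-wise mean feature
        # keep informative (complementary) responses, replace redundant ones with the mean
        compressed = w * feat + (1.0 - w) * mean_feat
        return compressed, w

def fuse(feat_ir, w_ir, feat_vis, w_vis, eps=1e-6):
    """Weight-map-guided fusion rule: blend the two branch features
    with their normalized information weights."""
    a_ir = w_ir / (w_ir + w_vis + eps)
    a_vis = w_vis / (w_ir + w_vis + eps)
    return a_ir * feat_ir + a_vis * feat_vis

# toy usage with random encoder features
ib = InfoBottleneckBlock(channels=64)
z_ir, w_ir = ib(torch.randn(1, 64, 128, 128))
z_vis, w_vis = ib(torch.randn(1, 64, 128, 128))
fused = fuse(z_ir, w_ir, z_vis, w_vis)   # fed to the decoder to reconstruct the fused image
```

In this sketch the bottleneck trade-off would be driven by the training loss: one term rewarding the expression of complementary information and another penalizing retained redundancy, so the learned weight map balances the two.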
