Infrared and Visible Image Fusion for Unmanned Agricultural Machinery Based on PIE and CGAN

Abstract: To enable unmanned agricultural machinery to perceive environmental information promptly during production in complex environments and to avoid safety accidents, this paper proposes an infrared and visible image fusion algorithm that combines PIE (Poisson Image Editing) and CGAN (Conditional Generative Adversarial Networks). First, the CGAN is trained on infrared images and their corresponding salient regions. An infrared image is then fed into the trained network to obtain a salient-region mask. After morphological optimization of the mask, PIE-based image fusion is performed, and the contrast of the fused result is finally enhanced. The algorithm achieves fast image fusion and meets the real-time environmental-perception requirements of unmanned agricultural machinery. It retains the detail of the visible image while highlighting important targets in the infrared image, such as pedestrians and animals, and performs well on objective metrics such as standard deviation and information entropy.
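The paper itself provides no code, but the pipeline described in the abstract can be illustrated concretely. The sketch below is a hypothetical minimal implementation of the post-CGAN stages only, assuming OpenCV: the predicted saliency mask is cleaned with morphological closing and opening, the salient infrared region is blended into the visible image with cv2.seamlessClone (OpenCV's Poisson image editing), and the fused result is contrast-enhanced with CLAHE. The function name and all parameter choices (kernel size, clip limit, NORMAL_CLONE) are illustrative assumptions, not the authors' exact settings.

```python
import cv2
import numpy as np

def fuse_ir_visible(ir_gray, vis_bgr, saliency_mask):
    """Fuse the salient infrared region into the visible image via Poisson
    Image Editing. `saliency_mask` is the binary (0/255) mask predicted by
    the CGAN; all inputs are assumed registered and of equal size."""
    # Morphological optimization of the mask: closing fills holes,
    # opening removes small spurious blobs.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
    mask = cv2.morphologyEx(saliency_mask, cv2.MORPH_CLOSE, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)

    # PIE step: seamlessClone solves the Poisson equation so the infrared
    # region blends into the visible image along the mask boundary.
    ir_bgr = cv2.cvtColor(ir_gray, cv2.COLOR_GRAY2BGR)
    ys, xs = np.nonzero(mask)
    center = (int((xs.min() + xs.max()) // 2), int((ys.min() + ys.max()) // 2))
    fused = cv2.seamlessClone(ir_bgr, vis_bgr, mask, center, cv2.NORMAL_CLONE)

    # Contrast enhancement of the fused result (CLAHE on the L channel).
    l, a, b = cv2.split(cv2.cvtColor(fused, cv2.COLOR_BGR2LAB))
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    enhanced = cv2.merge((clahe.apply(l), a, b))
    return cv2.cvtColor(enhanced, cv2.COLOR_LAB2BGR)
```

In practice the mask would come from the trained CGAN generator; here it is treated as a given input so the sketch stays independent of any particular deep-learning framework.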

     
