Abstract:
To address problems in traditional image fusion, such as dim targets, low contrast, and the loss of edge and texture details in the fused results, a new fusion approach for infrared and low-light-level visible images based on the perception unified color space (PUCS) and the dual-tree complex wavelet transform (DTCWT) is proposed. First, each source image is transformed from RGB space into PUCS to obtain a new intensity component for further processing. Then, the intensity components of the infrared and low-light-level visible images are each decomposed by DTCWT into low- and high-frequency sub-bands. At the fusion stage, a region-energy adaptive weighting method is adopted to fuse the low-frequency sub-bands, while the high-frequency sub-bands at the different scales and orientations are fused by a rule based on the sum-modified Laplacian and the gradient value vector. Finally, the fused image is obtained by applying the inverse DTCWT to the fused sub-bands and converting the result back to RGB space. The proposed algorithm was compared with three efficient fusion methods in different scenarios. The experimental results show that the proposed approach achieves prominent target characteristics, clear background texture and edge details, and suitable contrast in subjective evaluation, and it shows advantages on eight objective evaluation indicators.
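The following is a minimal sketch of the two fusion rules summarized above, assuming the DTCWT sub-bands have already been computed (for example, with the open-source `dtcwt` Python package). The function names, window sizes, and the choose-max combination of SML and gradient activity are illustrative assumptions, not the authors' exact implementation.

```python
import numpy as np
from scipy.ndimage import uniform_filter, convolve

def fuse_lowpass(low_a, low_b, win=3):
    """Region-energy adaptive weighting of two low-frequency sub-bands."""
    # Regional energy: windowed mean of squared coefficients
    # (proportional to the regional energy sum over the window).
    e_a = uniform_filter(low_a ** 2, size=win)
    e_b = uniform_filter(low_b ** 2, size=win)
    w_a = e_a / (e_a + e_b + 1e-12)  # adaptive weight from relative regional energy
    return w_a * low_a + (1.0 - w_a) * low_b

def sum_modified_laplacian(band, win=3):
    """Sum-modified Laplacian (SML) as a local activity/sharpness measure."""
    ml = np.abs(convolve(band, np.array([[0, -1, 0], [0, 2, 0], [0, -1, 0]], float))) \
       + np.abs(convolve(band, np.array([[0, 0, 0], [-1, 2, -1], [0, 0, 0]], float)))
    return uniform_filter(ml, size=win)  # accumulate the modified Laplacian locally

def gradient_energy(band, win=3):
    """Windowed gradient magnitude, standing in for the gradient value vector."""
    gy, gx = np.gradient(band)
    return uniform_filter(np.hypot(gx, gy), size=win)

def fuse_highpass(high_a, high_b):
    """Choose-max rule on combined SML and gradient activity for one
    scale/orientation sub-band pair (complex DTCWT coefficients)."""
    act_a = sum_modified_laplacian(np.abs(high_a)) + gradient_energy(np.abs(high_a))
    act_b = sum_modified_laplacian(np.abs(high_b)) + gradient_energy(np.abs(high_b))
    return np.where(act_a >= act_b, high_a, high_b)
```

A full pipeline in this spirit would apply `fuse_lowpass` to the coarsest sub-bands and `fuse_highpass` to each scale and orientation pair, then reconstruct the fused intensity component with the inverse DTCWT before returning to RGB space.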