Abstract:
To address the difficulties of spatial-domain image fusion, such as extracting features from different image sources and selecting appropriate fusion weights, a new spatial-domain image-fusion algorithm is proposed. Based on the principle of matrix similarity, the infrared image matrix is diagonalized through a similarity transform, and the visible-light image matrix is mapped onto the corresponding principal eigenvectors. The two eigenvalue-domain matrices are then combined by weighted fusion, and the fused matrix is inverse-transformed to reconstruct the fused image. Experimental results show that the algorithm fully retains the effective information of the source images and that the overall grayscale level of the fused image is significantly improved; the algorithm therefore achieves strong image quality evaluation indices and better visual effects.
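The following is a minimal sketch of the fusion pipeline described above, written in Python/NumPy under stated assumptions: the function name `fuse_eigen`, the weight parameter `w`, and the use of `np.linalg.eig` are illustrative choices, not the authors' implementation, and the images are assumed to be square, single-channel, and of equal size.

```python
import numpy as np

def fuse_eigen(ir: np.ndarray, vis: np.ndarray, w: float = 0.5) -> np.ndarray:
    """Sketch: fuse an infrared and a visible image of equal size by
    diagonalizing the infrared matrix and fusing in its eigenbasis."""
    ir = ir.astype(np.float64)
    vis = vis.astype(np.float64)

    # Similarity diagonalization of the infrared matrix: ir = P @ D_ir @ P^{-1}
    eigvals, P = np.linalg.eig(ir)
    P_inv = np.linalg.inv(P)
    D_ir = np.diag(eigvals)

    # Map the visible-light matrix onto the same eigenvector basis
    D_vis = P_inv @ vis @ P

    # Weighted fusion of the two eigenvalue-domain matrices
    D_fused = w * D_ir + (1.0 - w) * D_vis

    # Inverse similarity transform reconstructs the fused image
    fused = P @ D_fused @ P_inv

    # A general (non-symmetric) image matrix can have complex eigenvalues;
    # keep the real part and clip back to the displayable grayscale range.
    return np.clip(fused.real, 0, 255).astype(np.uint8)


if __name__ == "__main__":
    # Synthetic example with random grayscale images (placeholders only)
    rng = np.random.default_rng(0)
    ir_img = rng.integers(0, 256, size=(256, 256)).astype(np.uint8)
    vis_img = rng.integers(0, 256, size=(256, 256)).astype(np.uint8)
    print(fuse_eigen(ir_img, vis_img, w=0.6).shape)
```

In practice, the choice of `w` governs how much infrared versus visible content dominates the fused eigenvalue matrix; the abstract's weighted-fusion step corresponds to the single line forming `D_fused`.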