Abstract:
In this study, an infrared and visible image fusion method using double-attention generative adversarial networks (DAGAN) is proposed to address the problem that most GAN-based infrared and visible image fusion methods apply an attention mechanism only in the generator and lack attention perception in the discrimination stage. DAGAN introduces a multi-scale attention module that combines spatial and channel attention across different scale spaces and applies it in both the image generation and discrimination stages, so that both the generator and the discriminator can identify the most discriminative regions in an image. In addition, an attention loss function is proposed that uses the attention maps from the discrimination stage to compute the attention loss and thereby preserve more target and background information. Tests on the public TNO dataset show that, compared with seven other fusion methods, DAGAN achieves the best visual quality and the highest fusion efficiency.
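For illustration only, the following is a minimal PyTorch sketch of a block that combines channel and spatial attention, of the kind the abstract describes; it is not the authors' code, and the module name, layer sizes, and reduction ratio are assumptions that do not reproduce the multi-scale design of DAGAN.

```python
import torch
import torch.nn as nn


class DualAttentionBlock(nn.Module):
    """Applies channel attention followed by spatial attention to a feature map.

    A simplified, single-scale sketch; DAGAN combines such attention at
    multiple scales in both the generator and the discriminator.
    """

    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        # Channel attention: squeeze spatial dimensions, then weight each channel.
        self.channel_mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )
        # Spatial attention: a 7x7 convolution over pooled channel statistics.
        self.spatial_conv = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        # Channel attention weights from global average pooling.
        ca = self.channel_mlp(x.mean(dim=(2, 3))).view(b, c, 1, 1)
        x = x * ca
        # Spatial attention map from channel-wise mean and max.
        pooled = torch.cat(
            [x.mean(dim=1, keepdim=True), x.max(dim=1, keepdim=True).values],
            dim=1,
        )
        sa = self.spatial_conv(pooled)
        return x * sa


if __name__ == "__main__":
    feats = torch.randn(1, 64, 128, 128)        # e.g. intermediate fusion features
    print(DualAttentionBlock(64)(feats).shape)  # torch.Size([1, 64, 128, 128])
```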