Infrared-Visible Person Re-Identification Based on Context Information

  • Abstract: Infrared-visible person re-identification aims to match RGB and infrared images of the same identity. Because the two modalities follow different imaging principles, it is difficult to efficiently extract discriminative modality-shared features. To address this issue, this study proposes a modality-shared feature enhancement module and a global feature enhancement module, which are combined to strengthen the discriminative power of global features. First, the modality-shared feature enhancement module is inserted into the backbone network, where it exploits contextual information to suppress modality-specific information and reinforce modality-shared features. Second, the global feature enhancement module encodes the global features and, together with the joint loss function, further enhances their discriminative power while mining pattern features. Finally, a mutual mean learning scheme is adopted to narrow the modality gap and constrain the feature representation. Experiments on mainstream datasets show that the proposed method achieves higher accuracy than existing methods.
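
The abstract describes the context-based enhancement only at a high level. As a rough illustration of how such a module could be wired into a backbone, the sketch below uses a squeeze-and-excitation-style channel gate driven by a global context vector; it is not the authors' implementation, and the class and parameter names (ContextEnhanceBlock, reduction, etc.) are hypothetical.

```python
# Illustrative sketch only: a context-driven channel-gating block of the kind the
# abstract alludes to, not the paper's actual modality-shared feature enhancement
# module. All names are hypothetical.
import torch
import torch.nn as nn


class ContextEnhanceBlock(nn.Module):
    """Re-weights feature channels with a global-context descriptor, which is one
    common way to emphasise shared cues and damp modality-specific ones."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)           # global context: spatial average
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),                             # per-channel gate in [0, 1]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        ctx = self.pool(x).view(b, c)                 # (B, C) context vector
        gate = self.fc(ctx).view(b, c, 1, 1)          # channel re-weighting
        return x + x * gate                           # residual enhancement


if __name__ == "__main__":
    feat = torch.randn(4, 256, 24, 8)                 # e.g. a mid-level backbone feature map
    out = ContextEnhanceBlock(256)(feat)
    print(out.shape)                                  # torch.Size([4, 256, 24, 8])
```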

     
