Multimodal Biometric Identity-Recognition Method Based on Fused Hand Features

Abstract: In recent years, biometric recognition technology has developed rapidly, evolving from single-modality identity recognition toward identity recognition that fuses multimodal features. Identity recognition based on palmprints and palm veins is a major research focus, yet real-time, non-contact palm recognition remains challenging. In this study, we use a binocular camera to capture visible-light and near-infrared palm images simultaneously, locate the region of interest through palm key-point detection, and design a Log-Gabor-convolution Palmprint and Vein Network (LogPVNet). The network adopts a dual-branch parallel feature-extraction structure and incorporates a parameter-adaptive Log-Gabor convolution and a multi-receptive-field feature-fusion module, significantly improving the extraction of texture features from the dual-modality images. The method was evaluated on two public palmprint and palm-vein datasets, CASIA-PV and TJU-PV, as well as on a self-built dataset, SWUST-PV. Experimental results show that the proposed method achieves a recognition accuracy above 99.9% and an equal error rate of 0.0012% or lower, while reducing the number of model parameters by 76% and the floating-point operations by 81% compared with the baseline model, thus achieving a lightweight model.
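As background for the parameter-adaptive Log-Gabor convolution mentioned above, the sketch below constructs the classical 2D log-Gabor filter in the frequency domain and applies it to a palm-ROI-sized image. It is an illustrative sketch only: the function name, the centre frequency f0, the orientation theta0, the bandwidth parameters, and the 128×128 ROI size are assumptions for demonstration, whereas in LogPVNet such parameters would be adapted within the network rather than fixed by hand.

```python
# Illustrative sketch (not the paper's implementation): a classical 2D log-Gabor
# filter built in the frequency domain, the kind of texture filter a Log-Gabor
# convolution layer is expected to build on.
import numpy as np

def log_gabor_filter(size, f0=0.1, theta0=0.0, sigma_f=0.55, sigma_theta=np.pi / 8):
    """Return a size x size log-Gabor transfer function G(f, theta), DC-centred."""
    half = size // 2
    y, x = np.mgrid[-half:size - half, -half:size - half].astype(float) / size
    radius = np.sqrt(x ** 2 + y ** 2)
    radius[half, half] = 1.0            # avoid log(0) at DC; DC gain is zeroed below
    theta = np.arctan2(-y, x)

    # Radial component: Gaussian on a log-frequency axis (zero response at DC).
    radial = np.exp(-(np.log(radius / f0) ** 2) / (2 * np.log(sigma_f) ** 2))
    radial[half, half] = 0.0

    # Angular component: Gaussian in orientation, with the difference wrapped to [-pi, pi].
    dtheta = np.arctan2(np.sin(theta - theta0), np.cos(theta - theta0))
    angular = np.exp(-(dtheta ** 2) / (2 * sigma_theta ** 2))
    return radial * angular

# Apply the filter to an ROI image: multiply in the frequency domain, transform back,
# and keep the magnitude as a texture-energy feature map.
roi = np.random.rand(128, 128)                      # stand-in for a palm ROI image
G = log_gabor_filter(128, f0=0.1, theta0=np.pi / 4)
response = np.fft.ifft2(np.fft.fft2(roi) * np.fft.ifftshift(G))
texture_map = np.abs(response)
```

In this fixed-parameter form, one filter responds to texture at a single scale and orientation; a bank of such filters (or, as described in the abstract, a convolution whose log-Gabor parameters adapt during training) is what allows palmprint and palm-vein line patterns at different scales and directions to be captured.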
