Underwater Evidence Detection Method Based on Polarization Fusion Images
Abstract: Polarization detection technology can highlight targets against complex backgrounds, providing clearer and more accurate target recognition. However, in forensic science, research on using polarization imaging to detect and search for underwater evidence is still lacking. To address this gap, this study uses a polarization imaging device and fuses the target intensity image with the degree-of-polarization image. After the images are decomposed with the non-subsampled shearlet transform (NSST), a parameter-adaptive simplified pulse-coupled neural network (SPCNN) model is proposed for the high-frequency sub-bands, and an adaptive weighted fusion rule based on region energy is adopted for the low-frequency sub-bands. Comparison experiments against related algorithms were conducted on three typical targets under visible light. The results show that underwater evidence can be detected effectively by polarization imaging, and that the proposed fusion algorithm effectively highlights the detailed features of the evidence. This verifies the effectiveness of polarization detection for underwater evidence imaging and helps to fill the existing gap in underwater evidence detection technology in forensic science.
Keywords:
- polarization imaging
- image fusion
- forensic science
- underwater evidence
- SPCNN
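To make the processing chain described in the abstract more concrete, the following is a minimal sketch in Python/NumPy. It is not the authors' implementation: a single-level 2-D wavelet transform from PyWavelets stands in for the NSST decomposition, the low-frequency sub-bands are fused by region-energy-based adaptive weighting as described above, and a simple absolute-maximum rule is used as a placeholder where the paper applies the parameter-adaptive SPCNN to the high-frequency sub-bands. The function names, the 3×3 window, and the wavelet choice are illustrative assumptions.

```python
# Minimal sketch of the fusion pipeline (assumptions: PyWavelets DWT stands in
# for NSST; abs-max stands in for the paper's parameter-adaptive SPCNN rule).
import numpy as np
import pywt
from scipy.ndimage import uniform_filter

def region_energy(band, size=3):
    """Local (region) energy: windowed sum of squared coefficients."""
    return uniform_filter(band ** 2, size=size) * size * size

def fuse_low(a, b, size=3):
    """Adaptive weighted fusion of low-frequency sub-bands by region energy."""
    ea, eb = region_energy(a, size), region_energy(b, size)
    w = ea / (ea + eb + 1e-12)          # weight assigned to image A
    return w * a + (1.0 - w) * b

def fuse_high(a, b):
    """Placeholder choose-max rule; the paper instead uses SPCNN firing maps
    to decide which high-frequency coefficient is kept."""
    return np.where(np.abs(a) >= np.abs(b), a, b)

def fuse_pair(intensity, dolp, wavelet="db2"):
    """Fuse an intensity image and a degree-of-polarization image
    (both float arrays of the same shape)."""
    ca1, (ch1, cv1, cd1) = pywt.dwt2(intensity, wavelet)
    ca2, (ch2, cv2, cd2) = pywt.dwt2(dolp, wavelet)
    ca = fuse_low(ca1, ca2)
    ch, cv, cd = (fuse_high(x, y) for x, y in
                  ((ch1, ch2), (cv1, cv2), (cd1, cd2)))
    return pywt.idwt2((ca, (ch, cv, cd)), wavelet)
```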
Table 1 Sources of the comparison algorithms

Method      Fusion rule / source
Method 1    Low frequency: mean of the coefficients; high frequency: maximum absolute value
Method 2    Low frequency: maximum local region energy; high frequency: maximum local region energy
Method 3    Low frequency: maximum local spatial frequency; high frequency: maximum local spatial frequency
Method 4    Multi-resolution singular value decomposition (MSVD) method [18]
Method 5    Multi-scale wavelet method [19]
Method 6    Fusion method based on guided filtering [20]
Method 7    Visual saliency map (VSM) combined with weighted least squares optimization [21]
Method 8    Method based on latent low-rank representation (LatLRR) [22]
Method 9    Pixel- and region-based dual-tree complex wavelet transform (DTCWT) method [23]
Method 10   Optimization method based on gradient transfer fusion (GTF) [24]
Method 11   Local entropy measured with a PCNN model [25]
Method 12   Method proposed in this paper (ours)
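The rule descriptions for Methods 1-3 in Table 1 are terse, so the sketch below illustrates how a local-region-energy or local-spatial-frequency "choose-max" rule selects between two corresponding sub-band coefficient arrays. The 3×3 window and the simplified activity measures are assumptions for illustration only, not the exact formulations used by those methods.

```python
# Illustration of the baseline "choose-max" fusion rules from Table 1.
import numpy as np
from scipy.ndimage import uniform_filter

def local_energy(x, size=3):
    # Method 2 activity measure: mean of squared coefficients in a local window.
    return uniform_filter(x ** 2, size=size)

def local_spatial_frequency(x, size=3):
    # Method 3 activity measure: local spatial frequency built from row/column
    # first differences, averaged over a window.
    rf = np.zeros_like(x); rf[:, 1:] = (x[:, 1:] - x[:, :-1]) ** 2
    cf = np.zeros_like(x); cf[1:, :] = (x[1:, :] - x[:-1, :]) ** 2
    return np.sqrt(uniform_filter(rf + cf, size=size))

def choose_max(a, b, activity):
    # Keep, at every pixel, the coefficient whose local activity is larger.
    return np.where(activity(a) >= activity(b), a, b)

# Method 1, by contrast, simply averages the low-frequency bands
# (0.5 * (a + b)) and takes the larger absolute value in the high bands.
```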
Table 2 Evaluation metric values of the fused images produced by each algorithm in the first group of experiments
Evaluation index   Method 1   Method 2   Method 3   Method 4   Method 5   Method 6
SD                 56.6778    44.3967    61.1142    45.0537    44.2578    57.4242
EN                 6.7723     6.2663     6.8461     6.7132     6.3059     7.3089
Qabf               0.5202     0.3804     0.5115     0.1963     0.3671     0.4375
MI                 3.3614     3.2694     3.4411     2.1876     3.5395     3.7182

Evaluation index   Method 7   Method 8   Method 9   Method 10  Method 11  Ours
SD                 71.0913    57.8436    52.864     72.6199    60.6569    71.8362
EN                 6.8551     6.8714     6.9262     6.3219     6.4077     7.3065
Qabf               0.616      0.5863     0.3488     0.5945     0.672      0.6876
MI                 3.6226     3.1841     3.2632     2.8624     4.0556     4.1478
Table 3 Evaluation metric values of the fused images produced by each algorithm in the second group of experiments
Evaluation index   Method 1   Method 2   Method 3   Method 4   Method 5   Method 6
SD                 24.603     15.8382    27.7079    19.9809    15.7362    28.43
EN                 5.9118     5.3973     6.1907     5.7575     5.4024     6.4786
Qabf               0.7552     0.3946     0.7525     0.4961     0.3708     0.3936
MI                 1.6614     3.3193     2.1569     1.4321     3.3528     3.5814

Evaluation index   Method 7   Method 8   Method 9   Method 10  Method 11  Ours
SD                 25.682     21.8504    24.4234    28.1476    26.8165    29.0425
EN                 5.9289     5.7622     5.9289     6.0147     5.796      6.6484
Qabf               0.7664     0.7017     0.7491     0.686      0.693      0.7548
MI                 1.8397     2.3353     1.6062     2.0746     3.5282     4.1863
Table 4 Evaluation metric values of the fused images produced by each algorithm in the third group of experiments
Evaluation index   Method 1   Method 2   Method 3   Method 4   Method 5   Method 6
SD                 47.4696    37.8272    66.4415    39.187     37.7084    51.4002
EN                 7.2589     6.8949     7.6891     7.0276     6.9039     7.5925
Qabf               0.472      0.3749     0.4455     0.27       0.3707     0.3161
MI                 2.8271     3.8016     3.3755     2.846      3.8673     2.0153

Evaluation index   Method 7   Method 8   Method 9   Method 10  Method 11  Ours
SD                 51.4296    46.4801    45.9618    65.3107    63.6552    66.9828
EN                 7.4264     7.2903     7.2644     7.6795     6.9369     7.8523
Qabf               0.4537     0.4673     0.4369     0.5242     0.3981     0.5846
MI                 3.041      3.0871     2.5922     3.2463     3.98       4.2387
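The objective indices reported in Tables 2-4 are standard measures: SD is the grey-level standard deviation, EN the Shannon entropy of the histogram, MI the mutual information between the fused image and the two source images (as in [27]), and Qabf a gradient-based edge-preservation measure. The sketch below shows how SD, EN and MI can be computed for 8-bit images; Qabf is omitted because it requires edge strength and orientation modelling. The bin counts and the definition MI = MI(F, A) + MI(F, B) are common conventions and are assumptions here, not taken from the paper.

```python
# Sketch of the SD, EN and MI indices for an 8-bit fused image F and sources A, B.
import numpy as np

def sd(img):
    """Standard deviation of the grey levels (contrast)."""
    return float(np.std(img.astype(np.float64)))

def entropy(img, bins=256):
    """Shannon entropy (EN) of the grey-level histogram, in bits."""
    hist, _ = np.histogram(img, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def mutual_information(x, y, bins=256):
    """Mutual information between two images from their joint histogram."""
    joint, _, _ = np.histogram2d(x.ravel(), y.ravel(), bins=bins,
                                 range=[[0, 256], [0, 256]])
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

def mi_index(fused, src_a, src_b):
    """Fusion MI index: information transferred from both sources."""
    return mutual_information(fused, src_a) + mutual_information(fused, src_b)
```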
[1] SUN Chuandong, CHEN Liangyi, GAO Limin, et al. Optical properties of water and their effects on underwater imaging[J]. Journal of Applied Optics, 2000, 21(4): 39-46. https://www.cnki.com.cn/Article/CJFDTOTAL-YYGX200004011.htm
[2] Schettini R, Corchs S. Underwater image processing: state of the art of restoration and image enhancement methods[J]. EURASIP Journal on Advances in Signal Processing, 2010, 2010(1): 1-14.
[3] Cronin T W, Marshall J. Patterns and properties of polarized light in air and water[J]. Philosophical Transactions of the Royal Society B: Biological Sciences, 2011, 366(1565): 619-626. DOI: 10.1098/rstb.2010.0201
[4] LI Dailin, YU Yang, LI Guilei, et al. Research on underwater material recognition technology[J]. Laser & Optoelectronics Progress, 2018, 55(7): 071010. https://www.cnki.com.cn/Article/CJFDTOTAL-JGDJ201807025.htm
[5] McGlamery B L. A computer model for underwater camera systems[J]. Proceedings of SPIE, 1980, 208: 221-231.
[6] HE Kaiming, SUN Jian, TANG Xiaoou. Single image haze removal using dark channel prior[C]//IEEE Conference on Computer Vision and Pattern Recognition, 2009: 1956-1963. DOI: 10.1109/CVPR.2009.5206515.
[7] Galdran A, Vazquez-Corral J, Pardo D, et al. A variational framework for single image dehazing[C]//European Conference on Computer Vision, 2014. DOI: 10.1007/978-3-319-16199-0_18.
[8] LI C Y, GUO J C, CONG R M, et al. Underwater image enhancement by dehazing with minimum information loss and histogram distribution prior[J]. IEEE Transactions on Image Processing, 2016, 25(12): 5664-5677.
[9] SHEN L, Reda M, ZHAO Y. Image-matching enhancement using a polarized intensity-hue-saturation fusion method[J]. Applied Optics, 2021, 60(13): 3699-3715. DOI: 10.1364/AO.419726
[10] WEI Y, HAN P, LIU F, et al. Enhancement of underwater vision by fully exploiting the polarization information from the stokes vector[J]. Optics Express, 2021, 29(14): 22275-22287. DOI: 10.1364/OE.433072
[11] HAN P L, FEI L, ZHANG G, et al. Multi-scale analysis method of underwater polarization imaging[J]. Acta Physica Sinica: Chinese Edition, 2018, 67(5): 054202. DOI: 10.7498/aps.67.20172009
[12] LIU F, ZHANG S C, HAN P L, et al. Depolarization index from Mueller matrix descatters imaging in turbid water[J]. Chinese Optics Letters, 2022, 20(2): 022601. DOI: 10.3788/COL202220.022601
[13] HUANG B, LIU T, HU H, et al. Underwater image recovery considering polarization effects of objects[J]. Optics Express, 2016, 24(9): 9826-9838. DOI: 10.1364/OE.24.009826
[14] WU H, ZHAO M, LI F, et al. Underwater polarization-based single pixel imaging[J/OL]. Journal of the Society for Information Display, 2020, 28: 157-163. https://doi.org/10.1002/jsid.838
[15] CHEN Xiongfeng, RUAN Chi. Multi-parameter optimal reconstruction method for underwater polarization image restoration[J/OL]. Acta Armamentarii: 1-11 [2022-09-14]. http://kns.cnki.net/kcms/detail/11.2176.TJ.20220708.0845.004.html
[16] YU Jinqiang, DUAN Jin, CHEN Weimin, et al. Underwater polarization image fusion based on NSST and adaptive SPCNN[J]. Laser & Optoelectronics Progress, 2020, 57(6): 103-113. https://www.cnki.com.cn/Article/CJFDTOTAL-JGDJ202006010.htm
[17] JIANG Ping, ZHANG Qiang, LI Jing, et al. Image fusion algorithm based on NSST and adaptive PCNN[J]. Laser & Infrared, 2014, 44(1): 108-113. https://www.cnki.com.cn/Article/CJFDTOTAL-JGHW201401024.htm
[18] Minato S, Matsuoka T, Tsuji T. Singular-value decomposition analysis of source illumination in seismic interferometry by multidimensional deconvolution[J]. Geophysics, 2013, 78(3): Q25-Q34. DOI: 10.1190/geo2012-0245.1
[19] WANG Lijie, ZHAO Haili, ZHU Yong, et al. Research on underwater polarization image fusion based on multi-scale transform[J]. Applied Laser, 2018, 38(5): 842-846. https://www.cnki.com.cn/Article/CJFDTOTAL-YYJG201805024.htm
[20] ZHOU Z, DONG M, XIE X, et al. Fusion of infrared and visible images for night-vision context enhancement[J]. Applied Optics, 2016, 55(23): 6480-6490. DOI: 10.1364/AO.55.006480
[21] MA J L, ZHOU Z Q, WANG B, et al. Infrared and visible image fusion based on visual saliency map and weighted least square optimization[J]. Infrared Physics & Technology, 2017, 82(5): 8-17.
[22] LI H, WU X J. Infrared and visible image fusion using latent low-rank representation[J/OL]. [2018-12-18], [2019-07-03]. https://arxiv.org/abs/1804.08992.
[23] XU Hui, YUAN Yihui, CHANG Benkang, et al. Image fusion based on complex wavelets and region segmentation[C]//2010 International Conference on Computer Application and System Modeling (ICCASM), 2010, 8: 135-138. DOI: 10.1109/ICCASM.2010.5619112.
[24] MA J, CHEN C, LI C, et al. Infrared and visible image fusion via gradient transfer and total variation minimization[J]. Information Fusion, 2016, 31(9): 100-109
[25] CAI M R, YANG J Y, CAI G H. Multi-focus image fusion algorithm using LP transformation and PCNN[C]//IEEE International Conference on Software Engineering & Service Science of IEEE, 2015: 261-265.
[26] CHEN Guangqiu. Research on Multi-sensor Image Fusion Technology Based on Multi-scale Analysis[D]. Changchun: Jilin University, 2015.
[27] QU G, ZHANG D, YAN P. Information measure for performance of image fusion[J]. Electronics Letters, 2002, 38(7): 313-315.
[28] HONG R. Objective image fusion performance measure[J]. Military Technical Courier, 2000, 56(2): 181-193.