Research Review of 3D Cameras Based on Time-of-Flight Method
Abstract: The time-of-flight (TOF) 3D camera is a new type of stereo imaging device that is compact, has small measurement error and strong anti-interference capability, and outputs depth information directly. Cameras of this type have become a research focus in the field of measurement and imaging. This review first introduces the development history and measurement principle of the TOF camera; it then analyzes the sources and types of its measurement errors, compares TOF technology with other mainstream 3D imaging technologies, and finally describes the applications and development trends of TOF cameras.
Keywords:
- time-of-flight method
- 3D TOF camera
- depth information
- measurement error
0. Introduction
Infrared imaging enables non-contact, direct measurement of the condition of power transmission and transformation equipment without a power outage, and is therefore widely used in the electric power sector [1-3]. Transmission and transformation equipment operates for long periods in harsh outdoor environments, where material aging, pollution flashover, mechanical damage, and similar factors are often accompanied by partial discharge and elevated temperature. Insulators, key components of transmission lines, provide electrical isolation and mechanical support. A high temperature in the infrared image indicates an abnormal defect: non-uniform field strength gives rise to partial discharge, which in severe cases can cause line faults or even outages. References [4-5] investigated infrared diagnosis of transformer high-voltage bushings and analyzed fault causes through thermal-signature spectra. Infrared images have also been applied to fault analysis of AC filters, extracting the typical fault signatures of their trips [6-8].
Feature extraction from infrared images of transmission and transformation equipment has mainly relied on image-processing methods that use texture, color, edge, and similar image features [9-10]. With the development of UAV aerial photography, the volume of such infrared imagery has grown explosively, and deep learning offers an effective means of handling it [11-13]: convolutional neural networks (CNNs) are trained on massive image sets, and the learned features are used for testing and validation. However, because a CNN computes through stacked convolutional layers, small targets such as insulators and bushings carry relatively little weight in the deeper feature maps, so they cannot be extracted effectively [14-15]. To address this shortcoming, this paper improves the Faster R-CNN method to raise the accuracy of insulator infrared-image diagnosis.
1. Improved Faster R-CNN Method
1.1 Principle of Faster R-CNN
Unlike an ordinary CNN, Faster R-CNN adds a Region Proposal Network (RPN), the candidate-region stage in Fig. 1. It abandons the traditional sliding window and runs directly on the GPU, which greatly accelerates computation. For every pixel, the RPN performs a binary classification that judges whether each of several anchor boxes of different scales and aspect ratios contains a foreground object, and the positive anchors form the candidate regions.
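To make the anchor mechanism concrete, the sketch below enumerates corner-format anchor boxes for a single feature-map cell in NumPy; the base size, scales, and aspect ratios are illustrative defaults, not the configuration used in this paper.

```python
import numpy as np

def make_anchors(base_size=16, scales=(8, 16, 32), ratios=(0.5, 1.0, 2.0)):
    """Enumerate (x1, y1, x2, y2) anchor boxes centered on one feature-map cell."""
    anchors = []
    for scale in scales:
        for ratio in ratios:
            area = (base_size * scale) ** 2   # fixed area per scale
            w = np.sqrt(area / ratio)         # ratio = h / w
            h = w * ratio
            anchors.append([-w / 2, -h / 2, w / 2, h / 2])
    return np.array(anchors)

print(make_anchors().shape)  # (9, 4): 3 scales x 3 aspect ratios per cell
```

The RPN then scores each of these anchors as foreground or background, which is the binary classification described above.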
Faster R-CNN is generally trained with stochastic gradient descent (SGD); see Eq. (1):

$$ h(x) = \sum_{i = 0}^{n} w_i x_i = \boldsymbol{W}^{\mathrm{T}}\boldsymbol{X} \quad (1) $$

where X is the input, W is the weight vector, w_i and x_i denote the i-th weight and input, respectively, and h(x) is the corresponding output.

The loss function S(W) is based on the squared error; see Eq. (2):

$$ S(\boldsymbol{W}) = \frac{1}{2}\sum_{i = 0}^{m} \left(h_{\boldsymbol{W}}(x_i) - y_i\right)^2 \quad (2) $$

where y_i is the true output.

The update rule for W is given in Eq. (3):

$$ W_j = W_j - \alpha \frac{\partial}{\partial W_j} S(\boldsymbol{W}) \quad (3) $$

where α is the learning rate, which sets the step size. W is solved by gradient descent: the outputs for the samples are first computed in a forward pass, and the weights are then updated iteratively from the back-propagated error. This procedure is standard in CNN training.
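As a concrete reading of Eqs. (1)-(3), the following NumPy sketch fits the linear model h(x) = W^T x with one gradient step per sample; the learning rate, epoch count, and synthetic data are illustrative only.

```python
import numpy as np

def sgd_linear(X, y, lr=0.01, epochs=100):
    """Minimize S(W) = 1/2 * sum_i (h_W(x_i) - y_i)^2 by stochastic gradient descent."""
    W = np.zeros(X.shape[1])
    for _ in range(epochs):
        for xi, yi in zip(X, y):          # one sample per update: "stochastic"
            grad = (W @ xi - yi) * xi     # gradient of 1/2 * (h_W(x_i) - y_i)^2
            W -= lr * grad                # Eq. (3): W_j <- W_j - alpha * dS/dW_j
    return W

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
true_W = np.array([1.5, -2.0, 0.5])
print(sgd_linear(X, X @ true_W))  # converges toward true_W
```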
In essence, proposal refinement in the RPN translates and scales a candidate region R into $\hat{C}$ so that it approximates the ground-truth box C:

$$ f(R_x, R_y, R_w, R_h) = \left(\hat{C}_x, \hat{C}_y, \hat{C}_w, \hat{C}_h\right) \approx \left(C_x, C_y, C_w, C_h\right) \quad (4) $$

where (x, y) and (w, h) denote the center coordinates and the width and height of a rectangular region, respectively.

Let $t_*$ denote the translation and scaling of the rectangular region; then:

$$ \begin{cases} t_x = (C_x - R_x)/R_w \\ t_y = (C_y - R_y)/R_h \\ t_w = \lg(C_w/R_w) \\ t_h = \lg(C_h/R_h) \end{cases} \quad (5) $$

where t_x and t_y are the translations of the region center, and t_w and t_h are the scaling amounts of the region width and height, respectively.

The predicted values are computed as:

$$ d_*(R) = w_*^{\mathrm{T}}\,\phi(R) \quad (6) $$

where φ(·) is the feature produced by the final convolutional layer and w_* is the learnable weight vector.

The target value of the loss function is computed by Eq. (7); the final candidate regions are determined by adjusting the translation and scaling [16]:

$$ L_{\mathrm{oss}} = \sum_{i = 1}^{N} \left(t_*^i - w_*^{\mathrm{T}}\,\phi(C^i)\right)^2 \quad (7) $$
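The regression targets of Eq. (5) can be computed directly from a proposal box and its ground-truth box. The sketch below follows the base-10 logarithm (lg) as written in Eq. (5); note that the standard Faster R-CNN formulation uses the natural logarithm instead. The example boxes are illustrative.

```python
import numpy as np

def regression_targets(R, C):
    """Eq. (5): translation/scale offsets from proposal R to ground-truth box C.

    Both boxes are given as (cx, cy, w, h).
    """
    tx = (C[0] - R[0]) / R[2]
    ty = (C[1] - R[1]) / R[3]
    tw = np.log10(C[2] / R[2])
    th = np.log10(C[3] / R[3])
    return np.array([tx, ty, tw, th])

print(regression_targets(R=np.array([50.0, 50.0, 20.0, 40.0]),
                         C=np.array([54.0, 48.0, 25.0, 36.0])))
```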
1.2 Squeeze-and-excitation structure
To strengthen the small-target feature-extraction capability of Faster R-CNN, a squeeze-and-excitation (SE) structure, consisting of a squeeze operation and an excitation operation, is introduced. Let the feature-map parameters be (H, W, K), denoting height, width, and number of channels, respectively.

The squeeze operation F_sq(·) compresses the spatial information of the feature map channel by channel; see Eq. (8):

$$ h_c = F_{\mathrm{sq}}(k_c) = \frac{1}{H \times W}\sum_{i = 1}^{H}\sum_{j = 1}^{W} k_c(i, j) \quad (8) $$

where k_c denotes the c-th channel and h_c is the c-th element of the squeezed output vector h.

The excitation stage consists of an excitation step F_ex(·) and a recalibration step F_scale(·); see Eqs. (9) and (10):

$$ s = F_{\mathrm{ex}}(h, w) = \sigma(g(h, w)) = \sigma\left(w_2\,\delta(w_1 h)\right) \quad (9) $$

where σ is the sigmoid activation function; w_1 is a real matrix of size (K/r) × K that reduces the number of channels, with r the reduction factor; δ is the ReLU activation function; and w_2 is a real matrix of size K × (K/r) that restores the number of channels.

$$ \tilde{h}_c = F_{\mathrm{scale}}(h_c, s_c) = s_c \cdot h_c \quad (10) $$

where s_c is the c-th element of the activation vector s and $\tilde{h}_c$ is the corresponding recalibrated element.
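Eqs. (8)-(10) map directly onto a small module. The following PyTorch sketch shows a generic squeeze-and-excitation block with an assumed reduction factor r = 16; it is a minimal illustration, not the exact module wired into the SE-DenseNet-169 backbone used later in this paper.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Channel-wise recalibration per Eqs. (8)-(10)."""
    def __init__(self, channels: int, r: int = 16):
        super().__init__()
        self.squeeze = nn.AdaptiveAvgPool2d(1)      # Eq. (8): global average pooling
        self.excite = nn.Sequential(
            nn.Linear(channels, channels // r),     # w1: (K/r) x K, channel reduction
            nn.ReLU(inplace=True),                  # delta
            nn.Linear(channels // r, channels),     # w2: K x (K/r), channel restoration
            nn.Sigmoid(),                           # sigma
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        s = self.excite(self.squeeze(x).view(b, c))  # Eq. (9): activation vector s
        return x * s.view(b, c, 1, 1)                # Eq. (10): rescale each channel

x = torch.randn(2, 64, 56, 56)
print(SEBlock(64)(x).shape)  # torch.Size([2, 64, 56, 56])
```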
2. Deep Learning on Insulator Infrared Images
2.1 Environment setup
The improved Faster R-CNN method places high demands on the platform; the specific configuration is listed in Table 1. The operating system is open-source Linux and the database is MySQL. The hardware configuration is generous: the CPU is a high-end Intel part, and the memory and disk capacities are large, guaranteeing efficient computation on large data volumes. The framework is Detectron, the object-detection platform released in early 2018, which includes the most representative object-detection, image-segmentation, and keypoint-detection algorithms.
Table 1. Hardware and software configuration

| Name | Model |
|---|---|
| Operating system | Ubuntu 16.04.1 |
| Database | MySQL 5.5.20 |
| CPU | Intel Xeon Silver 4114T 12C |
| GPU | NVIDIA GTX 1080Ti |
| Memory | 32 GB |
| Hard disk | 1 TB |
| Framework | Detectron |
2.2 Data preparation
The image data come from a large number of insulator photographs taken by UAVs along several transmission lines. During network training, samples are labeled positive or negative as shown in Fig. 2, based on the intersection over union (IoU) between the mapped anchor boxes and the ground-truth boxes. The anchor boxes produced by the RPN are first ranked and filtered into an anchor sequence; the anchor positions are then corrected with the bounding-box regression parameter vector to form the set of candidate regions; finally, the IoU between every region of interest (RoI) and the ground-truth boxes is computed, and the maximum is compared with 0.5: if it exceeds 0.5, the sample is positive; otherwise it is negative.
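The IoU test that separates positive from negative samples is compact to express; the sketch below assumes corner-format boxes (x1, y1, x2, y2) and the 0.5 threshold described above.

```python
def iou(a, b):
    """Intersection over union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def is_positive(roi, gt_boxes, threshold=0.5):
    """A RoI is positive when its best IoU with any ground-truth box exceeds 0.5."""
    return max(iou(roi, gt) for gt in gt_boxes) > threshold

print(is_positive((10, 10, 50, 50), [(12, 8, 48, 52), (100, 100, 120, 120)]))  # True
```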
After the positive and negative samples are labeled, to keep the sampling as balanced as possible and the training- and validation-set sizes of the two classes equal, transfer learning together with appropriate correction and compensation is used to expand the total sample count to 2375. The sample configuration is given in Table 2.

Table 2. Sample configuration

| Sample type | Training set | Validation set | Test set | Total |
|---|---|---|---|---|
| Positive | 500 | 250 | 750 | 1500 |
| Negative | 500 | 250 | 125 | 875 |
| Total | 1000 | 500 | 875 | 2375 |

2.3 Construction of the improved model
In an ordinary CNN, the original image passes through convolutional and pooling layers and the fully connected layer outputs the result. The structure of the proposed method is shown in Fig. 3. The squeeze-and-excitation process is introduced: the spatial information of the feature maps is squeezed, and the excitation operation learns the dependencies between channels, so each channel's weight is assigned adaptively and the feature channels most useful to the task are emphasized, further strengthening the feature-extraction capability of the network. The resulting model is a Faster R-CNN built on an SE-DenseNet-169 backbone.

The improved model targets accurate recognition of abnormal insulator states. The raw infrared images are first corrected and compensated to expand the sample set; the network is then trained with the proposed method, and once training converges the final improved Faster R-CNN model is obtained.
3. Experimental Analysis
3.1 Accuracy metrics
In CNN-based learning, accuracy is generally measured by precision and recall, computed by Eqs. (11) and (12), respectively:

$$ P_{\mathrm{re}} = \frac{\mathrm{TP}}{\mathrm{TP} + \mathrm{FP}} \quad (11) $$

$$ P_{\mathrm{ca}} = \frac{\mathrm{TP}}{\mathrm{TP} + \mathrm{FN}} \quad (12) $$

where TP is the number of insulators that are abnormal in both the ground truth and the prediction, FP is the number predicted to be abnormal that actually are not, and FN is the number actually abnormal that are not predicted to be.

To assess the improved model further, the mean average precision (mAP) is used: AP equals the area enclosed by the precision-recall curve and the horizontal axis, and averaging the AP over all classes yields the mAP.
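A minimal sketch of these metrics follows; the detection counts and curve points are illustrative, and the mAP would be the mean of such AP values over all classes.

```python
def precision_recall(tp, fp, fn):
    """Eqs. (11)-(12): precision P_re and recall P_ca from detection counts."""
    return tp / (tp + fp), tp / (tp + fn)

def average_precision(recalls, precisions):
    """AP: trapezoidal area under the precision-recall curve."""
    pts = sorted(zip(recalls, precisions))
    return sum((r1 - r0) * (p0 + p1) / 2.0
               for (r0, p0), (r1, p1) in zip(pts, pts[1:]))

print(precision_recall(tp=90, fp=5, fn=10))                 # (0.947..., 0.9)
print(average_precision([0.0, 0.5, 1.0], [1.0, 0.9, 0.7]))  # 0.875
```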
3.2 Comparison of different methods
Using the sample data, the accuracy and efficiency of BP, Faster R-CNN, and the proposed method were compared; the experimental results are summarized in Table 3. Both Faster R-CNN and the proposed method clearly outperform the BP method in precision, and the proposed method achieves the highest recall and the highest mAP, nearly 10% above BP, showing that the improved Faster R-CNN has a clear advantage in extracting features of small targets. Because the squeeze-and-excitation structure reduces the amount of computation, the proposed method is also the most efficient.

Table 3. Experimental results of the different methods

| Method | Precision | Recall | mAP | Time/s |
|---|---|---|---|---|
| BP | 93.5% | 90.4% | 80.3% | 2.3 |
| Faster R-CNN | 98.7% | 95.3% | 88.7% | 1.2 |
| BFEM (proposed) | 99.2% | 97.6% | 90.2% | 0.9 |

The precision-recall curves in Fig. 4 show the proposed method's advantage in extracting abnormal insulator features more intuitively: the curves of the other two methods are completely enclosed by that of the proposed method, confirming that the improvement is substantial.
3.3 Comparison across insulator types
Common insulator arrangements are the single-Ⅰ, double-Ⅰ, and Ⅴ types. Infrared images of these three types were studied and the diagnostic accuracy of the different arrangements compared, as shown in Fig. 5. According to the power-industry standard DL/T 664-2008 [19], the insulator ends in Figs. 5(b) and 5(c) show obvious heating and are therefore abnormal.

The anomaly-diagnosis accuracy for the different insulator types is given in Table 4. Accuracy is above 90% in all cases, but the single-Ⅰ and Ⅴ types are diagnosed noticeably more accurately than the double-Ⅰ type, because the two strings of a double-Ⅰ insulator can overlap and interfere with the infrared image analysis; lines with this insulator type should therefore be photographed by the UAV from multiple angles.

Table 4. Accuracy of insulator anomaly diagnosis

| Insulator type | Abnormal total | Detected number | Accuracy |
|---|---|---|---|
| Single Ⅰ | 62 | 61 | 98.4% |
| Double Ⅰ | 47 | 44 | 93.6% |
| Ⅴ | 31 | 31 | 100.0% |

4. Conclusion
This paper proposes an improved Faster R-CNN method that introduces a squeeze-and-excitation stage, builds a training model, and performs anomaly diagnosis on insulator infrared images; the method has been applied successfully in power-system field operation and maintenance. It identifies abnormal insulator defects efficiently and accurately, reaching an mAP of 90.2%. The results provide a useful reference for research on defect recognition of transmission-line insulators.
Table 1. Performance comparison of three mainstream 3D imaging technologies

| Camera type | Binocular vision | Structured light | TOF camera |
|---|---|---|---|
| Ranging method | Passive | Active | Active |
| Precision | Medium | Medium-high | Medium |
| Resolution | Medium-high | Medium | Low |
| Frame rate | Low | Medium | High |
| Real-time performance | Medium | Low | High |
| Software complexity | High | Medium | Low |
| Power consumption | Low | Medium | Adjustable |
| Hardware cost | Low | High | Medium |
[1] LI Zhanli, ZHOU Kang, MOU Qi, et al. Real-time high-precision depth error compensation method for TOF cameras[J/OL]. Infrared and Laser Engineering, 2019(12): 253-262. https://www.cnki.com.cn/Article/CJFDTOTAL-HWYJ201912035.htm
[2] Rice K, Le Moigne J, Jain P. Analyzing range maps data for future space robotics applications[C]//Proceedings of the 2nd IEEE International Conference on Space Mission Challenges for Information Technology, 2006: 357.
[3] HOU Fei. Research on extraction and reconstruction of three-dimensional target point cloud based on time of flight[D]. Beijing: University of Chinese Academy of Sciences (National Space Science Center, Chinese Academy of Sciences), 2019.
[4] Kohoutek T. Analysis and processing the 3D range image data for robot monitoring[J]. Geodesy and Cartography, 2008, 34(3): 92-96. DOI: 10.3846/1392-1541.2008.34.92-96
[5] Kuehnle J U, Xue Z, Zoellner J M, et al. Grasping in depth maps of time-of-flight cameras[C]//International Workshop on Robotic and Sensors Environments. IEEE, 2008: 132-137.
[6] Oggier T, Lehmann M, Kaufmann R, et al. An all-solid-state optical range camera for 3D real-time imaging with sub-centimeter depth resolution (SwissRanger)[C]//Optical Design and Engineering. International Society for Optics and Photonics, 2004, 5249: 534-545.
[7] GUO Ningbo, CHEN Xiangning, XUE Junshi. Review of research on infrared cameras based on time of flight[J]. Journal of Ordnance and Equipment Engineering, 2017, 38(3): 152-159. DOI: 10.11809/scbgxb2017.03.035
[8] DING Jinjin. A review of 3D camera applications based on TOF technology[C]//Proceedings of the 12th Youth Academic Conference of the China Instrumentation Society, 2010: 76-79.
[9] HOU Fei, HAN Fengze, LI Guodong, et al. Research progress and development trend of 3D imaging based on time of flight[J]. Navigation and Control, 2018, 17(5): 1-7, 48. DOI: 10.3969/j.issn.1674-5558.2018.05.001
[10] Kaufmann R, Lehmann M, Schweizer M, et al. A time-of-flight line sensor: development and application[J]. Optical Sensing, 2004, 5459: 192-199. DOI: 10.1117/12.545571
[11] Gupta M, Agrawal A, Veeraraghavan A. A practical approach to 3D scanning in the presence of interreflections, subsurface scattering and defocus[J]. International Journal of Computer Vision, 2013, 102(1-3): 33-55. DOI: 10.1007/s11263-012-0554-3
[12] DUAN Zhijian. Design and implementation of a 3D-TOF image sensor acquisition system[D]. Xiangtan: Xiangtan University, 2015.
[13] LU Chunqing, SONG Yuzhi, WU Yanpeng, et al. 3D information acquisition and error analysis based on TOF computational imaging[J]. Infrared and Laser Engineering, 2018, 47(10): 160-166. https://www.cnki.com.cn/Article/CJFDTOTAL-HWYJ201810021.htm
[14] WANG Yin. Modeling of the time-of-flight method for 3D imaging and its error analysis[D]. Xiangtan: Xiangtan University, 2017.
[15] YANG Jingjing, FENG Wengang. Error sources and noise reduction for continuous-modulation TOF images[J]. Journal of Hefei University of Technology (Natural Science), 2012, 35(4): 485-488. https://www.cnki.com.cn/Article/CJFDTOTAL-HEFE201204013.htm
[16] Payne A D, Dorrington A A, Cree M J. Illumination waveform optimization for time-of-flight range imaging cameras[C]//Videometrics, Range Imaging, and Applications XI. International Society for Optics and Photonics, 2011, 8085: 136-148.
[17] Corti A, Giancola S, Mainetti G, et al. A metrological characterization of the Kinect V2 time-of-flight camera[J]. Robotics and Autonomous Systems, 2016, 75: 584-594.
[18] Rapp H. Faculty for physics and astronomy[D]. Heidelberg: University of Heidelberg, 2007.
[19] Ruocco R, White T, Jarrett D. Systematic errors in active 3D vision sensors[J]. IEE Proceedings - Vision, Image and Signal Processing, 2003, 150(6): 341-345. DOI: 10.1049/ip-vis:20030794
[20] Jung J, Lee J, Jeong Y, et al. Time-of-flight sensor calibration for a color and depth camera pair[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2015, 37(7): 1501-1513. DOI: 10.1109/TPAMI.2014.2363827
[21] Fuchs S, Hirzinger G. Extrinsic and depth calibration of ToF-cameras[C]//IEEE Conference on Computer Vision and Pattern Recognition, Anchorage, AK, 2008: 1-6.
[22] PAN Huadong, XIE Bin, LIU Jilin. Modeling and simulation of a scanless laser 3D imaging system[J]. Journal of Zhejiang University (Engineering Science), 2010(4): 21-26, 33. https://www.cnki.com.cn/Article/CJFDTOTAL-ZDZC201004002.htm
[23] LIANG Bin, HE Ying, ZOU Yu, et al. Application of ToF cameras in close-range measurement of non-cooperative space objects[J]. Journal of Astronautics, 2016, 37(9): 1080-1088. DOI: 10.3873/j.issn.1000-1328.2016.09.007
[24] Opower H. Multiple view geometry in computer vision[J]. Optics and Lasers in Engineering, 2002, 37(1): 85-86. DOI: 10.1016/S0143-8166(01)00145-2
[25] ZHANG Xin, XI Juntong. Binocular three-dimensional measurement system based on randomly encoded structured light[J]. Mechatronics, 2013, 19(3): 61-65. DOI: 10.3969/j.issn.1007-080x.2013.03.013
[26] Moreno-Noguer F, Belhumeur P N, Nayar S K. Active refocusing of images and videos[J]. ACM Transactions on Graphics, 2007, 26(3): 67. DOI: 10.1145/1276377.1276461
[27] Prusak A, Melnychuk O, Roth H, et al. Pose estimation and map building with a time-of-flight camera for robot navigation[J]. International Journal of Intelligent Systems Technologies and Applications, 2008, 5(3/4): 355-364. DOI: 10.1504/IJISTA.2008.021298
[28] Alenyà G, Foix S, Torras C. Using ToF and RGBD cameras for 3D robot perception and manipulation in human environments[J]. Intelligent Service Robotics, 2014, 7(4): 211-220. DOI: 10.1007/s11370-014-0159-5
[29] SI Xiujuan. Application research of depth-image processing in precision agriculture[D]. Hefei: University of Science and Technology of China, 2017.
[30] Vázquez-Arellano M, Reiser D, Paraforos D S, et al. 3-D reconstruction of maize plants using a time-of-flight camera[J]. Computers and Electronics in Agriculture, 2018, 145: 235-247. DOI: 10.1016/j.compag.2018.01.002
[31] Penne J, Schaller C, Hornegger J, et al. Robust real-time 3D respiratory motion detection using time-of-flight cameras[J]. International Journal of Computer Assisted Radiology and Surgery, 2008, 3(5): 427-431. DOI: 10.1007/s11548-008-0245-2
[32] Soutschek S, Penne J, Hornegger J, et al. 3-D gesture-based scene navigation in medical imaging applications using time-of-flight cameras[C]//IEEE Conference on Computer Vision and Pattern Recognition Workshops, 2008: 1-6.
[33] LEI Xisheng, XIAO Changyan, JIANG Shilong. Online 3D reconstruction of sprayed workpieces based on a TOF camera[J]. Journal of Electronic Measurement and Instrumentation, 2017, 31(12): 1991-1998. https://www.cnki.com.cn/Article/CJFDTOTAL-DZIY201712018.htm
[34] Ahmad R, Plapper P. Generation of safe tool-path for 2.5D milling/drilling machine-tool using 3D ToF sensor[J]. CIRP Journal of Manufacturing Science and Technology, 2015, 10: 84-91. DOI: 10.1016/j.cirpj.2015.04.003
[35] DING Jinjin. Error analysis and compensation methods for TOF 3D cameras[D]. Hefei: Hefei University of Technology, 2011.
[36] WANG Yazhou. Design of a 3D depth camera fusing a TOF area-array sensor with binocular vision[D]. Shenzhen: Shenzhen University, 2017.
[37] HUANG Shulan. Research on 3D reconstruction methods combining ToF and stereo vision[D]. Shenzhen: University of Chinese Academy of Sciences (Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences), 2019.
[38] Hullin M B. Computational imaging of light in flight[C]//Proceedings of Optoelectronic Imaging and Multimedia Technology III, 2014: 927314.
[39] Xie J, Feris R S, Sun M T. Edge-guided single depth image super resolution[J]. IEEE Transactions on Image Processing, 2016, 25(1): 428-438. DOI: 10.1109/TIP.2015.2501749
[40] Song X B, Dai Y C, Qin X Y. Deep depth super-resolution: learning depth super-resolution using deep convolutional neural network[C]//Asian Conference on Computer Vision, 2017, 10114: 360-376.
[41] Kahlmann T, Oggier T, Lustenberger F, et al. 3D-TOF sensors in the automobile[C]//Proceedings of SPIE, 2005, 5663: 216-224.
[42] SONG Yuzhi, LU Chunqing, WANG Li. Research on the application of 3D-TOF cameras in close-range target detection in space[J]. Aerospace Control and Application, 2019, 45(1): 53-59. https://www.cnki.com.cn/Article/CJFDTOTAL-KJKZ201901021.htm
[43] Nguyen T N, Huynh H H, Meunier J. Human gait symmetry assessment using a depth camera and mirrors[J]. Computers in Biology and Medicine, 2018, 101: 174-183. DOI: 10.1016/j.compbiomed.2018.08.021