Research Center of Satellite Technology, Harbin Institute of Technology, Harbin 150001, China
Received: 2017-08-29; Revised: 2017-09-06; Published in print: 2017-12-31
NING Ming-feng, ZHANG Shi-jie, GU Qiang-wei. Recovery of sparse scene flow from spatial two-dimensional images and depth images[J]. Optics and Precision Engineering, 2017, 25(12z): 145-151. DOI: 10.3788/OPE.20172514.0145.
Traditional scene flow methods compute a dense optical flow field by minimizing a global energy function, which is computationally expensive and impractical for applications such as space target tracking and 3D reconstruction. To address this, a sparse scene flow estimation method that combines two-dimensional images with depth images under the Lucas-Kanade framework is proposed. First, assuming that the planar optical flow of the target is affected only by translational motion, a planar motion model for sparse optical flow is established on the 2D image plane. Then, using the depth information provided by a depth camera together with the planar motion equation, an optimization function is formulated in 3D space and solved by the least-squares method to obtain the sparse scene flow. Finally, the proposed method is verified by numerical simulation. The results show that the algorithm effectively recovers the scene flow of sparse features on the target surface. When the number of surface features is below 400, the average runtime is less than 0.2 s; below 200, it is less than 0.1 s; and below 50, it is about 0.05 s. The algorithm is therefore fast enough to meet real-time requirements.
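The core lifting step described above — sparse 2D optical flow plus per-feature depth, raised into 3D scene flow — can be sketched as follows. This is a minimal illustration under the pinhole camera model, not the paper's least-squares formulation; the function name, the intrinsic parameters, and the assumption that depth is available for each feature in both frames are hypothetical.

```python
import numpy as np

def scene_flow_from_depth(pts, flow, z0, z1, fx, fy, cx, cy):
    """Back-project sparse 2D optical flow into 3D scene flow
    using the pinhole camera model.

    pts  : (N, 2) pixel coordinates of features at frame t
    flow : (N, 2) 2D optical flow (e.g. from Lucas-Kanade tracking)
    z0   : (N,) depth of each feature at frame t
    z1   : (N,) depth of each feature at frame t+1
    fx, fy, cx, cy : pinhole camera intrinsics
    Returns (N, 3) scene flow vectors in camera coordinates.
    """
    x0, y0 = pts[:, 0], pts[:, 1]
    x1, y1 = x0 + flow[:, 0], y0 + flow[:, 1]
    # Pinhole back-projection: X = (x - cx) * Z / fx, Y = (y - cy) * Z / fy
    P0 = np.stack([(x0 - cx) * z0 / fx, (y0 - cy) * z0 / fy, z0], axis=1)
    P1 = np.stack([(x1 - cx) * z1 / fx, (y1 - cy) * z1 / fy, z1], axis=1)
    # Scene flow is the 3D displacement of each feature between frames
    return P1 - P0
```

For a rigid target under pure translation, every recovered scene flow vector equals the translation itself, which matches the abstract's assumption that the planar optical flow is driven only by translational motion.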