1. School of Mechanical Engineering, Dalian University of Technology, Dalian 116023, Liaoning, China
WANG Xiaodong, CUI Shipeng, XU Zheng, et al. Visual composite positioning for precision microassembly[J]. Optics and Precision Engineering, 2023, 31(19): 2857-2866. DOI: 10.37188/OPE.20233119.2857.
Microfeature positioning based on machine vision is a key step in precision automated assembly. External disturbances and part-to-part variation can easily cause visual guidance errors and reduce the assembly success rate. A composite positioning method consisting of a coarse step and a fine step is therefore proposed. First, a region of interest is extracted by a convolutional-neural-network-based bounding-box detection algorithm to achieve coarse positioning. On this basis, fine positioning of the part is achieved by registering geometric contour features. The algorithm also employs a dynamic learning mechanism assisted by automatic labeling to address the high positioning failure rate caused by differences between part batches. The method was tested on assembly equipment developed by the research group: the effects of brightness, defocus, and pose changes on the robustness of the visual positioning algorithm were analyzed, and positioning-accuracy and small-batch assembly experiments were conducted. The results show that the proposed method is robust under various forms of interference, with an assembly success rate of 97%. Both the absolute accuracy and the repeatability of visual positioning are better than 2 μm, and the assembly accuracy is better than 10 μm, meeting the accuracy and stability requirements of precision microassembly.
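The coarse-to-fine pipeline described in the abstract can be sketched in simplified form. The function names here are illustrative assumptions: the coarse stage stands in for the CNN bounding-box detector (only ROI clamping is shown), and the fine stage approximates contour geometric-feature registration with a centroid-plus-principal-axis rigid alignment, which is a simplification of the paper's actual method.

```python
import math

def coarse_roi(image_w, image_h, det_box):
    """Coarse stage: clamp a detector's predicted bounding box to the
    image bounds, yielding the region of interest for fine positioning.
    (Stands in for the CNN-based bounding-box detection in the paper.)"""
    x0, y0, x1, y1 = det_box
    return (max(0, x0), max(0, y0), min(image_w, x1), min(image_h, y1))

def centroid(pts):
    """Mean point of a contour given as a list of (x, y) tuples."""
    n = len(pts)
    return (sum(p[0] for p in pts) / n, sum(p[1] for p in pts) / n)

def principal_angle(pts, c):
    """Orientation of the contour's principal axis via second moments."""
    sxx = sum((p[0] - c[0]) ** 2 for p in pts)
    sxy = sum((p[0] - c[0]) * (p[1] - c[1]) for p in pts)
    syy = sum((p[1] - c[1]) ** 2 for p in pts)
    return 0.5 * math.atan2(2.0 * sxy, sxx - syy)

def register_contours(template_pts, observed_pts):
    """Fine stage: estimate the translation (dx, dy) and rotation dtheta
    that map the template contour onto the observed contour, using
    centroid offset and principal-axis difference as a simplified
    stand-in for geometric contour-feature registration."""
    ct, co = centroid(template_pts), centroid(observed_pts)
    dx, dy = co[0] - ct[0], co[1] - ct[1]
    dtheta = principal_angle(observed_pts, co) - principal_angle(template_pts, ct)
    return dx, dy, dtheta
```

For example, a rectangular contour observed 3 px right and 1 px below its template position yields `(dx, dy, dtheta) = (3.0, 1.0, 0.0)`, which the stage controller would convert to a motion command.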
Keywords: precision microassembly; machine vision; feature positioning; target detection