Jun FENG, Yu-yu YAN, Yan ZHAO, et al. A terracotta image partition matching method based on learned invariant feature transform[J]. Optics and precision engineering, 2018, 26(7): 1774-1783. DOI: 10.3788/OPE.20182607.1774.
A terracotta image partition matching method based on learned invariant feature transform
A novel feature partition matching scheme for two-view Terracotta Warrior images was presented to address the high false-matching rate and low feature-matching efficiency encountered during 3D reconstruction. The scheme was as follows. First, the features of the complete Terracotta Warrior image were extracted using the learned invariant feature transform (LIFT) method. Second, the position of the dividing line at the head of the warrior image was determined by applying the proposed prior-knowledge-based feature point distribution curve, and the extracted features were then divided into head and torso features according to this line. Third, the Euclidean distance was used to perform region-wise feature matching, and the random sample consensus (RANSAC) algorithm was subsequently applied to filter the mismatched point set out of the matched result set. Experimental results show that, in Terracotta Warrior image feature extraction and matching, the correct matching rate of the new scheme reaches 98%, an improvement of approximately 20% over the SIFT and SURF methods; the repeat rate of the feature points increases by 10%, while the RANSAC iteration time decreases by 50%. The new scheme is also more robust to changes in scale, illumination, and viewing angle. Therefore, the proposed scheme achieves correct matching of feature points with sufficient accuracy and is applicable to the robust 3D reconstruction of Terracotta Warrior images.
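The matching pipeline summarized in the abstract (partition of features along a dividing line, Euclidean-distance matching within each region, and RANSAC-based removal of mismatches) can be sketched as follows. This is a minimal illustration only, not the authors' implementation: it assumes LIFT keypoints and descriptors have already been extracted for both views and that the dividing-line position is known; OpenCV's brute-force matcher and homography-based RANSAC are used as stand-ins, and the ratio test is an added assumption not specified in the abstract.

import numpy as np
import cv2

def partition_features(points, descriptors, y_split):
    # Split keypoints/descriptors into head (above the dividing line) and torso (below).
    head = points[:, 1] < y_split
    return (points[head], descriptors[head]), (points[~head], descriptors[~head])

def match_region(desc_a, desc_b, ratio=0.8):
    # Nearest-neighbour matching on Euclidean distance, with a ratio test (assumed) to reject ambiguous matches.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    pairs = matcher.knnMatch(desc_a.astype(np.float32), desc_b.astype(np.float32), k=2)
    return [p[0] for p in pairs if len(p) == 2 and p[0].distance < ratio * p[1].distance]

def ransac_filter(pts_a, pts_b, matches, reproj_thresh=3.0):
    # Discard mismatches by fitting a homography with RANSAC (a stand-in geometric model).
    if len(matches) < 4:
        return []
    src = np.float32([pts_a[m.queryIdx] for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([pts_b[m.trainIdx] for m in matches]).reshape(-1, 1, 2)
    _, mask = cv2.findHomography(src, dst, cv2.RANSAC, reproj_thresh)
    if mask is None:
        return []
    return [m for m, keep in zip(matches, mask.ravel()) if keep]

# Usage sketch: pts_* are (N, 2) arrays of keypoint coordinates and desc_* the corresponding
# LIFT descriptors for each view. Partition both views with partition_features, match the head
# and torso regions separately with match_region, then clean each region's matches with ransac_filter.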
ZHANG J. Study and application of three-dimensional reconstruction technology of site scene based on laser scanning data [D]. Northwest University, 2012. (in Chinese)
PENG Y, YAO X W, HU Y J. The application research of 3D laser scanning technology on rocky historical relics conservation[J]. Urban Geotechnical Investigation & Surveying, 2016(3):97-100. (in Chinese)
JIA Q, GAO X K, LUO ZH X, et al. Feature points matching based on geometric constraints[J]. Journal of Computer-Aided Design & Computer Graphics, 2015, 27(8):1388-1397. (in Chinese)
LOWE D G. Distinctive image features from scale-invariant keypoints[J]. International Journal of Computer Vision, 2004, 60(2):91-110.
BAY H, ESS A, TUYTELAARS T, et al. Speeded-up robust features (SURF)[J]. Computer Vision & Image Understanding, 2008, 110(3):404-417.
TOLA E, LEPETIT V, FUA P. A fast local descriptor for dense matching[C]. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2008:1-8.
RUBLEE E, RABAUD V, KONOLIGE K, et al. ORB: An efficient alternative to SIFT or SURF[C]. Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2011:2564-2571.
MAINALI P, LAFRUIT G, TACK K, et al. Derivative-based scale invariant image feature detector with error resilience[J]. IEEE Transactions on Image Processing, 2014, 23(5):2380.
ZHANG Y, GENG G H. Feature points matching of terracotta warrior images based on condition number constraint[J]. Journal of Yunnan University (Natural Sciences Edition), 2017, 39(4):547-553. (in Chinese)
FENG H W, ZHOU Y P, FENG J, et al. Estimation of fundamental matrix from multi-perspective views for 3D reconstruction[J]. Opt. Precision Eng., 2016, 24(10s):567-574.
ZHOU M. Study on Application of Qin's Terra Cotta Warriors Image in Modern Animation Design [D]. Xi'an University of Architecture and Technology, 2013. (in Chinese)
YI K M, TRULLS E, LEPETIT V, et al. LIFT: learned invariant feature transform[C]. Proceedings of the European Conference on Computer Vision (ECCV), 2016:467-483.
HE K, ZHANG X, REN S, et al. Deep residual learning for image recognition[C]. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016:770-778.
QU Y F, LIU Z Y, JIANG Y Q, et al. Self-adaptive variable-metric feature point extraction method[J]. Opt. Precision Eng., 2017, 25(1):188-197. (in Chinese)
LI D, ZHU L L, HU Y S. Fast matching algorithm of multi-view feature points based on minimal spanning tree[J]. Journal of Huazhong University of Science and Technology (Natural Science Edition), 2017, 45(1):41-45. (in Chinese)
GUO H Z, GUO L H, LV Y. Target local feature extraction combined MSER and HSOG[J]. Chinese Journal of Liquid Crystals and Displays, 2016, 31(11):1070-1078. (in Chinese)
COSTANZO A, AMERINI I, CALDELLI R, et al. Forensic analysis of SIFT keypoint removal and injection[J]. IEEE Transactions on Information Forensics & Security, 2014, 9(9):1450-1464.
HOSSEIN-NEJAD Z, NASRI M. An adaptive image registration method based on SIFT features and RANSAC transform[J]. Computers & Electrical Engineering, 2016.