Shandong Provincial Key Laboratory of Network-based Intelligent Computing, School of Information Science and Engineering, University of Jinan, Jinan 250022, Shandong, China
Received: 2015-06-05; Revised: 2015-06-21; Published in print: 2015-11-14
LIU Zhi-qiang, YIN Jian-qin, ZHANG Ling, et al. Human action recognition based on Kinect data principal component analysis[J]. Optics and Precision Engineering, 2015, 23(10z): 702-711. DOI: 10.3788/OPE.20152313.0703.
To improve the efficiency and accuracy of human action recognition in home environments, an action recognition method based on Principal Component Analysis (PCA) of Kinect data was proposed and implemented. First, time-series data describing human action features were collected with a Kinect sensor, and a human pose description vector was constructed. Then, PCA was applied to analyze the differences between feature values at different time points and to obtain reconstructed eigenvalues, which make the distinction between different types of actions more obvious. Moreover, redundant components of the action description features were filtered out, reducing redundancy and noise and facilitating the judgment and recognition of human actions. Finally, actions were recognized and classified based on the reconstructed features and the nearest-neighbor principle. Experimental results show that the recognition accuracy of this method exceeds 80% for simple human actions, and the identification times for a single sample are 1.67 ms and 3.93 ms, respectively. The method essentially satisfies the precision, anti-interference, and real-time requirements of human action recognition.
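The pipeline described above — pose vectors built from Kinect joint data, PCA for reduced "reconstructed" features, and nearest-neighbor classification — can be sketched as follows. This is a minimal illustration with synthetic data, not the paper's implementation: the joint count, the number of retained components, and all function names are assumptions.

```python
import numpy as np

def pca_fit(X, k):
    """Fit PCA: return the mean and the top-k principal axes of row-vector data X."""
    mu = X.mean(axis=0)
    Xc = X - mu
    cov = Xc.T @ Xc / (len(X) - 1)          # sample covariance matrix
    vals, vecs = np.linalg.eigh(cov)        # ascending eigenvalues
    order = np.argsort(vals)[::-1][:k]      # pick the k largest
    return mu, vecs[:, order]

def pca_project(X, mu, W):
    """Project pose vectors onto the retained principal axes."""
    return (X - mu) @ W

def nearest_neighbor(train_feats, train_labels, query):
    """Classify a projected pose by its nearest training sample (Euclidean)."""
    d = np.linalg.norm(train_feats - query, axis=1)
    return train_labels[int(np.argmin(d))]

# Synthetic example: 20 joints x 3 coordinates = 60-dim pose vectors,
# two well-separated action classes.
rng = np.random.default_rng(0)
wave = rng.normal(0.0, 0.1, (30, 60)) + 1.0   # class "wave"
squat = rng.normal(0.0, 0.1, (30, 60)) - 1.0  # class "squat"
X = np.vstack([wave, squat])
y = np.array(["wave"] * 30 + ["squat"] * 30)

mu, W = pca_fit(X, k=5)                 # reduced ("reconstructed") features
feats = pca_project(X, mu, W)
query = pca_project(rng.normal(0.0, 0.1, (1, 60)) + 1.0, mu, W)
print(nearest_neighbor(feats, y, query[0]))   # -> "wave"
```

Projecting onto a handful of principal axes discards low-variance directions, which is where the filtering of redundancy and noise mentioned in the abstract comes from; the nearest-neighbor step then operates in the reduced space.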