{"defaultlang":"zh","titlegroup":{"articletitle":[{"lang":"zh","data":[{"name":"text","data":"构建多尺度深度卷积神经网络行为识别模型"}]},{"lang":"en","data":[{"name":"text","data":"Action recognition model construction based on multi-scale deep convolution neural network"}]}]},"contribgroup":{"author":[{"name":[{"lang":"zh","surname":"刘","givenname":"智","namestyle":"eastern","prefix":""},{"lang":"en","surname":"LIU","givenname":"Zhi","namestyle":"western","prefix":""}],"stringName":[],"aff":[{"rid":"aff1","text":"1"}],"role":["first-author"],"bio":[{"lang":"zh","text":["刘智 (1977-), 男, 江西高安人, 博士, 副教授, 2011年于四川大学计算机科学与技术专业获得博士学位, 主要从事深度学习、人体行为识别、图像处理、目标跟踪、信息融合研究。E-mail:liuzhi@cqut.edu.cn"],"graphic":[],"data":[[{"name":"text","data":"刘智 (1977-), 男, 江西高安人, 博士, 副教授, 2011年于四川大学计算机科学与技术专业获得博士学位, 主要从事深度学习、人体行为识别、图像处理、目标跟踪、信息融合研究。E-mail:"},{"name":"text","data":"liuzhi@cqut.edu.cn"}]]}],"email":"liuzhi@cqut.edu.cn","deceased":false},{"name":[{"lang":"zh","surname":"黄","givenname":"江涛","namestyle":"eastern","prefix":""},{"lang":"en","surname":"HUANG","givenname":"Jiang-tao","namestyle":"western","prefix":""}],"stringName":[],"aff":[{"rid":"aff2","text":"2"}],"role":["corresp"],"corresp":[{"rid":"cor1","lang":"en","text":"HUANG Jiang-tao, E-mail: hjt@gxtc.edu.cn","data":[{"name":"text","data":"HUANG Jiang-tao, E-mail: hjt@gxtc.edu.cn"}]}],"email":"hjt@gxtc.edu.cn","deceased":false},{"name":[{"lang":"zh","surname":"冯","givenname":"欣","namestyle":"eastern","prefix":""},{"lang":"en","surname":"FENG","givenname":"Xin","namestyle":"western","prefix":""}],"stringName":[],"aff":[{"rid":"aff1","text":"1"}],"role":[],"deceased":false}],"aff":[{"id":"aff1","intro":[{"lang":"zh","label":"1","text":"重庆理工大学 计算机学院, 重庆 400054","data":[{"name":"text","data":"重庆理工大学 计算机学院, 重庆 400054"}]},{"lang":"en","label":"1","text":"College of Computer Science and Engineering, Chongqing University of Technology, Chongqing 400054, China","data":[{"name":"text","data":"College of Computer Science and Engineering, Chongqing University of Technology, Chongqing 400054, China"}]}]},{"id":"aff2","intro":[{"lang":"zh","label":"2","text":"广西师范学院 计算机与信息工程学院, 广西 南宁 530001","data":[{"name":"text","data":"广西师范学院 计算机与信息工程学院, 广西 南宁 530001"}]},{"lang":"en","label":"2","text":"College of Computer and Information Engineering, Guangxi Teachers Education University, Nanning 530001, China","data":[{"name":"text","data":"College of Computer and Information Engineering, Guangxi Teachers Education University, Nanning 530001, China"}]}]}]},"abstracts":[{"lang":"zh","data":[{"name":"p","data":[{"name":"text","data":"为了减化传统人体行为识别方法中的特征提取过程,提高所提取特征的泛化性能,本文提出了一种基于深度卷积神经网络和多尺度信息的人体行为识别方法。该方法以深度视频为研究对象,通过构建基于卷积神经网络的深度结构,并融合粗粒度的全局行为模式与细粒度的局部手部动作等多尺度信息来研究人体行为的识别。MSRDailyActivity3D数据集上的实验得出该数据集上第11~16种行为的平均识别准确率为98%,所有行为的平均识别准确率为60.625%。结果表明,本方法能对人体行为进行有效识别,基本能准确识别运动较为明显的人体行为,对仅有手部局部运动的行为的识别准确率有所下降。"}]}]},{"lang":"en","data":[{"name":"p","data":[{"name":"text","data":"In order to simplify the feature extracting process of Human Activity Recognition (HAR) and improve the generalization of extracted feature, an algorithm based on multi-scale deep convolution neural network was proposed. In this algorithm, the depth video was selected as research object and a parallel CNN (Convolution Neural Network) based deep network was constructed to process coarse global information of the action and fine-grained local information of hand part simultaneously. Experiments were executed on MSRDailyActivity3D dataset. 
The average recognition accuracy on actions ranging from No.11 to No.16 was 98%, while that on all actions was 60.625%. The experimental results showed that the proposed algorithm could recognize human activities effectively. Actions with obvious body movements were almost all recognized correctly, whereas the recognition accuracy decreased for actions involving only local hand movements."}]}]}],"keyword":[{"lang":"zh","data":[[{"name":"text","data":"卷积神经网络"}],[{"name":"text","data":"深度学习"}],[{"name":"text","data":"人体行为识别"}],[{"name":"text","data":"计算机视觉"}],[{"name":"text","data":"多尺度"}]]},{"lang":"en","data":[[{"name":"text","data":"convolution neural network"}],[{"name":"text","data":"deep learning"}],[{"name":"text","data":"human activity recognition"}],[{"name":"text","data":"computer vision"}],[{"name":"text","data":"multi-scale"}]]}],"highlights":[],"body":[{"name":"sec","data":[{"name":"sectitle","data":{"label":[{"name":"text","data":"1"}],"title":[{"name":"text","data":"引言"}],"level":"1","id":"s1"}},{"name":"p","data":[{"name":"text","data":"目前,有关人体行为识别的研究越来越引起计算机视觉研究工作者的重视,并已广泛应用于自动监控,事件检测,人机接口,视频获取等各个领域。传统的人体行为识别方法主要基于人工设计特征,如方向梯度直方图 (Histograms of Oriented Gradient,HOG)"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"1","type":"bibr","rid":"b1","data":[{"name":"text","data":"1"}]}},{"name":"text","data":"]"}]},{"name":"text","data":",运动历史图像 (Motion History Image,MHI)"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"2","type":"bibr","rid":"b2","data":[{"name":"text","data":"2"}]}},{"name":"text","data":"]"}]},{"name":"text","data":"等,然后采用支持向量机"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"3","type":"bibr","rid":"b3","data":[{"name":"text","data":"3"}]}},{"name":"text","data":"]"}]},{"name":"text","data":"等分类器对提取的特征进行分类识别。Wanqing Li等人"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"4","type":"bibr","rid":"b4","data":[{"name":"text","data":"4"}]}},{"name":"text","data":"]"}]},{"name":"text","data":"通过提取视频中有代表性的3D词袋 (Bag of 3D Points,BOPs) 来表示人体的一系列姿势,然后以BOPs为点构建人体行为图,通过计算行为图上每一条路径的概率进行人体行为识别。文献["},{"name":"xref","data":{"text":"2","type":"bibr","rid":"b2","data":[{"name":"text","data":"2"}]}},{"name":"text","data":"]研究了运动背景下的行为识别,首先提取人体的MHI特征,然后用HOG进行特征描述,最后使用高斯混合模型 (Gaussian Mixture Model,GMM) 进行行为的分类识别。Jiang Wang"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"5","type":"bibr","rid":"b5","data":[{"name":"text","data":"5"}]}},{"name":"text","data":"]"}]},{"name":"text","data":"等人则利用深度视频中的骨架信息,通过逐帧计算每个关节相对其他关节的位置和每个关节的局部占位模式 (Local Occupancy Patterns,LOP),提出了actionlet组合模型来描述人体行为。Lu Xia和J.K.Aggarwal"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"6","type":"bibr","rid":"b6","data":[{"name":"text","data":"6"}]}},{"name":"text","data":"]"}]},{"name":"text","data":"先抽取深度视频的时空兴趣点 (Spatio-Temporal Interest Points,STIPs),然后以各STIP为中心,构造出表示人体行为的深度立方相似特征 (Depth Cuboid Similarity Feature,DCSF)。受HOG思想的启发,Omar Oreifej和Zicheng Liu"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"7","type":"bibr","rid":"b7","data":[{"name":"text","data":"7"}]}},{"name":"text","data":"]"}]},{"name":"text","data":"针对深度视频设计了方向四维法线直方图 (Histogram of Oriented 4D Normals,HON4D) 特征。为了同时强调人体轮廓和运动的作用,Chenyang Zhang和Yingli 
Tian"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"8","type":"bibr","rid":"b8","data":[{"name":"text","data":"8"}]}},{"name":"text","data":"]"}]},{"name":"text","data":"则对深度运动图 (Depth Motion Map,DMM) 特征进行扩展,提出了边加强DMM (Edge Enhanced DMM,E"},{"name":"sup","data":[{"name":"text","data":"2"}]},{"name":"text","data":"DMM) 特征。"}]},{"name":"p","data":[{"name":"text","data":"基于人工特征提取的人体行为识别的研究取得了很多优秀成果"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"9","type":"bibr","rid":"b9","data":[{"name":"text","data":"9"}]}},{"name":"text","data":"]"}]},{"name":"text","data":",然而也存在一些难以解决的问题:提取的特征对训练数据具有依赖性,不易泛化到其他数据;计算开销太大,很难做到实时性。深度学习能自动提取隐藏在数据间的多层特征表示,已经成功应用于语音识别,图像识别与分类,分割等领域。鉴于深度学习的上述优点,Quoc V.Le等人"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"10","type":"bibr","rid":"b10","data":[{"name":"text","data":"10"}]}},{"name":"text","data":"]"}]},{"name":"text","data":"运用独立子空间分析 (Independent Subspace Analysis,ISA) 算法自动学习视频数据中稳定的时空特征,然后使用深度结构学习ISA的多层表示。文献["},{"name":"xref","data":{"text":"11","type":"bibr","rid":"b11","data":[{"name":"text","data":"11"}]}},{"name":"text","data":"]利用CNN构造多层深度结构,提出了PANDA算法,用于识别人的属性 (如性别、发型、表情等)。DeepPose"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"12","type":"bibr","rid":"b12","data":[{"name":"text","data":"12"}]}},{"name":"text","data":"]"}]},{"name":"text","data":"方法也是基于CNN构建深度神经网络,该方法不但用于图像中人体姿势的识别,也对图像中的目标定位进行了探索。文献["},{"name":"xref","data":{"text":"13","type":"bibr","rid":"b13","data":[{"name":"text","data":"13"}]}},{"name":"text","data":"]则基于限制波尔茨曼机,构造出自举深度信念网络,用于人脸的识别。Kaiming He等人"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"14","type":"bibr","rid":"b14","data":[{"name":"text","data":"14"}]}},{"name":"text","data":"]"}]},{"name":"text","data":"在其最新的研究中同样使用了基于CNN的深度神经网络,其贡献在于使用空间池化技术对输入进行处理,从而使得该算法能对任何大小的图像进行分类,而传统基于CNN的深度学习方法需要将输入规范化到统一尺寸。为了提高深度学习算法的泛化性能,Min Lin等人"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"15","type":"bibr","rid":"b15","data":[{"name":"text","data":"15"}]}},{"name":"text","data":"]"}]},{"name":"text","data":"提出了网络嵌套的思想,即网络中的某一个节点可以嵌套一个网络进行学习。文献["},{"name":"xref","data":{"text":"16","type":"bibr","rid":"b16","data":[{"name":"text","data":"16"}]}},{"name":"text","data":"]不但深刻剖析了基于CNN的深度神经网络的思想,而且还借鉴了Min Lin等人"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"15","type":"bibr","rid":"b15","data":[{"name":"text","data":"15"}]}},{"name":"text","data":"]"}]},{"name":"text","data":"的思想,提出了一个更深层次的网络,取得了较好的效果。"}]},{"name":"p","data":[{"name":"text","data":"综上,基于特征提取的算法时间开销太大,难以实现实时处理。近些年来,基于CNN的深度神经网络在人工智能领域的应用较为广泛,然而关于它的研究主要集中在图像识别、分割、定位等方面,对基于视频的人体行为识别的研究仍比较少。同时相较于传统RGB视频,深度视频能提供人体的三维几何信息,而且对光线变化不敏感"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"17","type":"bibr","rid":"b17","data":[{"name":"text","data":"17"}]}},{"name":"text","data":"]"}]},{"name":"text","data":"。基于此,本文以深度视频数据为研究对象,通过构建基于CNN的深度神经网络结构,并融合全局的人体行为信息和局部的手部动作等多尺度信息,使用传统的二维CNN来研究三维的人体行为识别。本文的创新在于:"}]},{"name":"p","data":[{"name":"text","data":"(1) 使用图像处理中的二维CNN构建深度卷积神经网络并用于人体行为识别;"}]},{"name":"p","data":[{"name":"text","data":"(2) 
所提出的方法不依赖于人工设计特征,不需要对数据进行复杂预处理,流程简单。"}]}]},{"name":"sec","data":[{"name":"sectitle","data":{"label":[{"name":"text","data":"2"}],"title":[{"name":"text","data":"基于多尺度信息融合和深度学习的人体行为识别"}],"level":"1","id":"s2"}},{"name":"p","data":[{"name":"text","data":"传统的基于CNN和深度学习的网络结构适合于二维图像的处理,不能直接应用于三维的视频数据集。如果将深度视频的每一帧看做图像的一个特征图 (Feature Map,FM),则一个具有"},{"name":"italic","data":[{"name":"text","data":"N"}]},{"name":"text","data":"帧的深度视频可以看做具有"},{"name":"italic","data":[{"name":"text","data":"N"}]},{"name":"text","data":"个FM的图像。然而由于描述人体行为的视频帧数"},{"name":"italic","data":[{"name":"text","data":"N"}]},{"name":"text","data":"一般都比较大,因而直接使用传统网络将带来巨大的时间开销。本文通过构建多个深度网络,组成并行结构来研究深度视频的人体行为识别。首先将深度视频先拆分成多个视频段,然后分别使用各并行分支网络进行学习,再对各网络分支学习到的高层表示进行融合连接,最后将融合后的高层表示送入全连接层和分类层进行分类识别。与此同时,针对MSRDailyActivity3D数据集中大部分行为的细微差别主要集中于左手这一特点,如读书、写字、用笔记本电脑、玩游戏等行为。本文除了提取粗粒度的全局行为信息之外,还提取了每个视频左手处的细粒度信息,通过融合粗粒度和细粒度等多尺度信息来完成人体行为识别。对其他数据集则根据具体情况提取不同部位或多个部位的细粒度信息。"},{"name":"xref","data":{"text":"图 1","type":"fig","rid":"Figure1","data":[{"name":"text","data":"图 1"}]}},{"name":"text","data":"给出了具有3个CNN层和2个全连接层的深度网络结构图。根据实验目的的不同,本文使用了不同层数的网络,如"},{"name":"xref","data":{"text":"表 1","type":"table","rid":"Table1","data":[{"name":"text","data":"表 1"}]}},{"name":"text","data":"所示。"}]},{"name":"fig","data":{"id":"Figure1","caption":[{"lang":"zh","label":[{"name":"text","data":"图1"}],"title":[{"name":"text","data":"基于CNN和深度学习的人体行为识别框架"}]},{"lang":"en","label":[{"name":"text","data":"Fig 1"}],"title":[{"name":"text","data":"Framework for HAR based on CNN and deep learning"}]}],"subcaption":[],"note":[],"graphics":[{"print":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=1754149&type=","small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=1754149&type=small","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=1754149&type=middle"}]}},{"name":"table","data":{"id":"Table1","caption":[{"lang":"zh","label":[{"name":"text","data":"表1"}],"title":[{"name":"text","data":"本文使用到的深度网络及其参数"}]},{"lang":"en","label":[{"name":"text","data":"Table 1"}],"title":[{"name":"text","data":"Deep networks and their parameters in this paper"}]}],"note":[],"table":[{"head":[[{"style":"class:table_top_border","data":[{"name":"text","data":"网络"}]},{"style":"class:table_top_border","data":[{"name":"text","data":"层数"}]},{"style":"class:table_top_border","data":[{"name":"text","data":"卷积核"}]},{"style":"class:table_top_border","data":[{"name":"italic","data":[{"name":"text","data":"N"}]},{"name":"sub","data":[{"name":"text","data":"f1"}]},{"name":"text","data":","},{"name":"italic","data":[{"name":"text","data":"N"}]},{"name":"sub","data":[{"name":"text","data":"f2"}]},{"name":"text","data":",…"}]},{"style":"class:table_top_border","data":[{"name":"text","data":"全连接层"}]}]],"body":[[{"style":"class:table_top_border2","data":[{"name":"text","data":"2CNN2F"}]},{"style":"class:table_top_border2","data":[{"name":"text","data":"2"}]},{"style":"class:table_top_border2","data":[{"name":"text","data":"5×5,5×5"}]},{"style":"class:table_top_border2","data":[{"name":"text","data":"32,128"}]},{"style":"class:table_top_border2","data":[{"name":"text","data":"1 024,512"}]}],[{"data":[{"name":"text","data":"3CNN2F"}]},{"data":[{"name":"text","data":"3"}]},{"data":[{"name":"text","data":"5×5,7×7,5×5"}]},{"data":[{"name":"text","data":"32,64,128"}]},{"data":[{"name":"text","data":"1 
024,512"}]}],[{"style":"class:table_bottom_border","data":[{"name":"text","data":"4CNN2F"}]},{"style":"class:table_bottom_border","data":[{"name":"text","data":"4"}]},{"style":"class:table_bottom_border","data":[{"name":"text","data":"5×5,5×5,6×6,5×5"}]},{"style":"class:table_bottom_border","data":[{"name":"text","data":"16,32,64,128"}]},{"style":"class:table_bottom_border","data":[{"name":"text","data":"1 024,512"}]}]],"foot":[]}]}},{"name":"p","data":[{"name":"text","data":"算法步骤描述:假设规范化后表示一个行为的视频大小为"},{"name":"italic","data":[{"name":"text","data":"N"}]},{"name":"sub","data":[{"name":"text","data":"F"}]},{"name":"text","data":"×"},{"name":"italic","data":[{"name":"text","data":"W"}]},{"name":"text","data":"×"},{"name":"italic","data":[{"name":"text","data":"H"}]},{"name":"text","data":"(本文中为192×128×128),其中"},{"name":"italic","data":[{"name":"text","data":"W"}]},{"name":"text","data":","},{"name":"italic","data":[{"name":"text","data":"H"}]},{"name":"text","data":"分别为视频帧的宽和高。"}]},{"name":"p","data":[{"name":"text","data":"(1) 将帧数为"},{"name":"italic","data":[{"name":"text","data":"N"}]},{"name":"sub","data":[{"name":"text","data":"F"}]},{"name":"text","data":"的行为视频以"},{"name":"italic","data":[{"name":"text","data":"L"}]},{"name":"sub","data":[{"name":"text","data":"Stride"}]},{"name":"text","data":"为步长进行分段,其中每段包含"},{"name":"italic","data":[{"name":"text","data":"L"}]},{"name":"sub","data":[{"name":"text","data":"Seg"}]},{"name":"text","data":"帧,则分段数为"},{"name":"italic","data":[{"name":"text","data":"N"}]},{"name":"sub","data":[{"name":"text","data":"Seg"}]},{"name":"text","data":"=1+("},{"name":"italic","data":[{"name":"text","data":"N"}]},{"name":"sub","data":[{"name":"text","data":"F"}]},{"name":"text","data":"-"},{"name":"italic","data":[{"name":"text","data":"L"}]},{"name":"sub","data":[{"name":"text","data":"Seg"}]},{"name":"text","data":")/"},{"name":"italic","data":[{"name":"text","data":"L"}]},{"name":"sub","data":[{"name":"text","data":"Stride"}]},{"name":"text","data":",然后将视频帧1/4下采样,则分段后形成了"},{"name":"italic","data":[{"name":"text","data":"N"}]},{"name":"sub","data":[{"name":"text","data":"Seg"}]},{"name":"text","data":"×"},{"name":"italic","data":[{"name":"text","data":"L"}]},{"name":"sub","data":[{"name":"text","data":"Seg"}]},{"name":"text","data":"×"},{"name":"italic","data":[{"name":"text","data":"W"}]},{"name":"text","data":"/4×"},{"name":"italic","data":[{"name":"text","data":"H"}]},{"name":"text","data":"/4的视频段矩阵;"}]},{"name":"p","data":[{"name":"text","data":"(2) 以深度视频每一帧的左手关节为中心,截取"},{"name":"italic","data":[{"name":"text","data":"W"}]},{"name":"text","data":"/4×"},{"name":"italic","data":[{"name":"text","data":"H"}]},{"name":"text","data":"/4大小的帧组成"},{"name":"italic","data":[{"name":"text","data":"N"}]},{"name":"sub","data":[{"name":"text","data":"F"}]},{"name":"text","data":"×"},{"name":"italic","data":[{"name":"text","data":"W"}]},{"name":"text","data":"/4×"},{"name":"italic","data":[{"name":"text","data":"H"}]},{"name":"text","data":"/4的新视频,对新视频采取步骤 (1) 方法得到"},{"name":"italic","data":[{"name":"text","data":"N"}]},{"name":"sub","data":[{"name":"text","data":"Seg"}]},{"name":"text","data":"×"},{"name":"italic","data":[{"name":"text","data":"L"}]},{"name":"sub","data":[{"name":"text","data":"Seg"}]},{"name":"text","data":"×"},{"name":"italic","data":[{"name":"text","data":"W"}]},{"name":"text","data":"/4×"},{"name":"italic","data":[{"name":"text","data":"H"}]},{"name":"text","data":"/4的视频段矩阵。"}]},{"name":"p","data":[{"name":"text","data":"(3) 将步骤 (1) 和步骤 (2) 
得到的视频段矩阵进行融合得到2"},{"name":"italic","data":[{"name":"text","data":"N"}]},{"name":"sub","data":[{"name":"text","data":"Seg"}]},{"name":"text","data":"×"},{"name":"italic","data":[{"name":"text","data":"L"}]},{"name":"sub","data":[{"name":"text","data":"Seg"}]},{"name":"text","data":"×"},{"name":"italic","data":[{"name":"text","data":"W"}]},{"name":"text","data":"/4×"},{"name":"italic","data":[{"name":"text","data":"H"}]},{"name":"text","data":"/4的视频段矩阵;该视频段矩阵即为深度网络的输入,即该网络具有2"},{"name":"italic","data":[{"name":"text","data":"N"}]},{"name":"sub","data":[{"name":"text","data":"Seg"}]},{"name":"text","data":"个并行深度神经网络,每个深度神经网络的输入为"},{"name":"italic","data":[{"name":"text","data":"L"}]},{"name":"sub","data":[{"name":"text","data":"Seg"}]},{"name":"text","data":"×"},{"name":"italic","data":[{"name":"text","data":"W"}]},{"name":"text","data":"/4×"},{"name":"italic","data":[{"name":"text","data":"H"}]},{"name":"text","data":"/4的视频;"}]},{"name":"p","data":[{"name":"text","data":"(4) 使用训练数据集对并行深度神经网络进行训练,然后使用测试数据集进行人体行为识别的测试,训练数据集和测试数据集完全不相交。本文中选择被试{1,3,5,7,9}表演的行为视频用于训练,而将被试{2,4,6,8,10}表演的行为视频用于测试。"}]},{"name":"p","data":[{"name":"text","data":"假设"},{"name":"italic","data":[{"name":"text","data":"N"}]},{"name":"sub","data":[{"name":"text","data":"F"}]},{"name":"text","data":"=192,"},{"name":"italic","data":[{"name":"text","data":"L"}]},{"name":"sub","data":[{"name":"text","data":"Seg"}]},{"name":"text","data":"=16,"},{"name":"italic","data":[{"name":"text","data":"L"}]},{"name":"sub","data":[{"name":"text","data":"Stride"}]},{"name":"text","data":"=16,则深度神经网络框架需要采用24个并行网络,每个网络的输入为16×32×32的视频段序列,即每个视频段含有16帧视频,视频图像大小为32×32。"}]}]},{"name":"sec","data":[{"name":"sectitle","data":{"label":[{"name":"text","data":"3"}],"title":[{"name":"text","data":"实验及讨论"}],"level":"1","id":"s3"}},{"name":"sec","data":[{"name":"sectitle","data":{"label":[{"name":"text","data":"3.1"}],"title":[{"name":"text","data":"数据集及预处理"}],"level":"2","id":"s3-1"}},{"name":"p","data":[{"name":"text","data":"本文使用Kinect设备采集的MSRDailyActivity3D数据集进行实验"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"5","type":"bibr","rid":"b5","data":[{"name":"text","data":"5"}]}},{"name":"text","data":"]"}]},{"name":"text","data":",该数据集收集了日常生活中常见的16种行为:喝水、吃零食、读书、打电话、写字、用笔记本电脑、用吸尘器、欢呼、静止站立、撕纸、玩游戏、躺下沙发、行走、弹吉他、站起和坐下。每个行为动作由每位被试以两种不同的方式完成:坐在沙发上或站着。整个数据集共有320个行为视频。"},{"name":"xref","data":{"text":"图 2","type":"fig","rid":"Figure2","data":[{"name":"text","data":"图 2"}]}},{"name":"text","data":"给出了该数据集中的一些行为样例。该数据集记录了人体行为和周围环境的交互,提取出的深度信息含有大量的噪声,而且数据集中的大部分行为只在局部存在细微差异,如"},{"name":"xref","data":{"text":"图 2","type":"fig","rid":"Figure2","data":[{"name":"text","data":"图 2"}]}},{"name":"text","data":","},{"name":"xref","data":{"text":"图 3","type":"fig","rid":"Figure3","data":[{"name":"text","data":"图 3"}]}},{"name":"text","data":"所示,因而极具挑战性。"}]},{"name":"fig","data":{"id":"Figure2","caption":[{"lang":"zh","label":[{"name":"text","data":"图2"}],"title":[{"name":"text","data":"MSRDailyActivity3D中的行为视频 (处理前,上:喝水,下:写字)"}]},{"lang":"en","label":[{"name":"text","data":"Fig 2"}],"title":[{"name":"text","data":"Activity videos before processing (top:drinking, 
bottom:writing)"}]}],"subcaption":[],"note":[],"graphics":[{"print":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=1754155&type=","small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=1754155&type=small","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=1754155&type=middle"}]}},{"name":"fig","data":{"id":"Figure3","caption":[{"lang":"zh","label":[{"name":"text","data":"图3"}],"title":[{"name":"text","data":"MSRDailyActivity3D中的行为视频 (处理后,上:喝水,下:写字)"}]},{"lang":"en","label":[{"name":"text","data":"Fig 3"}],"title":[{"name":"text","data":"Activity videos after processing (top:drinking, bottom:writing)"}]}],"subcaption":[],"note":[],"graphics":[{"print":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=1754160&type=","small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=1754160&type=small","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=1754160&type=middle"}]}},{"name":"p","data":[{"name":"text","data":"在实验前,对每个视频进行简单的预处理,如"},{"name":"xref","data":{"text":"图 4","type":"fig","rid":"Figure4","data":[{"name":"text","data":"图 4"}]}},{"name":"text","data":"所示。(1) 背景去除:深度摄像机记录的是每一点的位置信息,相对于运动目标,深度视频中背景的位置信息是固定不变的,根据该特点可去除背景信息;(2) 边界框确定:针对每一个视频,分别根据其每一帧,得出能并且仅能框住人体行为的边界框,取所有帧的最大边界框作为本视频的边界框,如"},{"name":"xref","data":{"text":"图 4","type":"fig","rid":"Figure4","data":[{"name":"text","data":"图 4"}]}},{"name":"text","data":"所示;(3) 规范化,包括空间、时间和深度信息规范化:空间规范化直接使用matlab中的imresize函数将图像缩放到指定大小;时间规范化使用插值技术 (公式1) 将所有视频规范化到统一长度,规范化后的视频帧数等于所有视频帧数的中间值;深度信息规范化使用MinMax算法将所有视频的像素值规范化到[0, 1]范围;(4) 将所有样本进行水平翻转形成新的样本,从而使数据集中的训练样本成倍扩大。本文算法采用Torch平台"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"18","type":"bibr","rid":"b18","data":[{"name":"text","data":"18"}]}},{"name":"text","data":"]"}]},{"name":"text","data":"进行编写,其中的学习速率为1×10"},{"name":"sup","data":[{"name":"text","data":"-4"}]},{"name":"text","data":",损失函数为平台自带的Softmax回归,激活函数为双曲正切 (tanh) 函数。"}]},{"name":"fig","data":{"id":"Figure4","caption":[{"lang":"zh","label":[{"name":"text","data":"图4"}],"title":[{"name":"text","data":"行为视频预处理简要步骤"}]},{"lang":"en","label":[{"name":"text","data":"Fig 4"}],"title":[{"name":"text","data":"Brief steps of preprocessing for activity videos"}]}],"subcaption":[],"note":[],"graphics":[{"print":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=1754163&type=","small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=1754163&type=small","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=1754163&type=middle"}]}},{"name":"p","data":[{"name":"dispformula","data":{"label":[{"name":"text","data":"1"}],"data":[{"name":"text","data":" "},{"name":"text","data":" 
"},{"name":"math","data":{"graphicsData":{"print":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=1754169&type=","small":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=1754169&type=small","big":"http://html.publish.founderss.cn/rc-pub/api/common/picture?pictureId=1754169&type=middle"}}}],"id":"gxjmgc-25-3-799-E1"}}]},{"name":"p","data":[{"name":"text","data":"其中"},{"name":"italic","data":[{"name":"text","data":"N"}]},{"name":"text","data":"和"},{"name":"italic","data":[{"name":"text","data":"N"}]},{"name":"sub","data":[{"name":"text","data":"F"}]},{"name":"text","data":"分别为规范化前后视频含有的帧数,则规范化后第"},{"name":"italic","data":[{"name":"text","data":"j"}]},{"name":"text","data":"帧来自于规范化前视频中的第"},{"name":"italic","data":[{"name":"text","data":"i"}]},{"name":"text","data":"帧。其中的括号为上取整。"}]}]},{"name":"sec","data":[{"name":"sectitle","data":{"label":[{"name":"text","data":"3.2"}],"title":[{"name":"text","data":"基于多尺度信息融合和深度学习的HAR识别"}],"level":"2","id":"s3-2"}},{"name":"p","data":[{"name":"text","data":"根据第2节的描述,本文使用"},{"name":"xref","data":{"text":"表 1","type":"table","rid":"Table1","data":[{"name":"text","data":"表 1"}]}},{"name":"text","data":"中的2CNN2F网络,将粗粒度的全局行为识别视频和细粒度的手部动作序列等多尺度信息作为深度网络的输入。本节实验中的"},{"name":"italic","data":[{"name":"text","data":"L"}]},{"name":"sub","data":[{"name":"text","data":"Stride"}]},{"name":"text","data":"和"},{"name":"italic","data":[{"name":"text","data":"L"}]},{"name":"sub","data":[{"name":"text","data":"Seg"}]},{"name":"text","data":"均设置为16,即将抽取整个视频的12×16×32×32的全局行为序列和12×16×32×32局部手部动作序列合并,形成24×16×32×32输入视频矩阵。"},{"name":"xref","data":{"text":"表 2","type":"table","rid":"Table2","data":[{"name":"text","data":"表 2"}]}},{"name":"text","data":"给出了本文方法与其他方法在MSRDailyActivity3D数据集上识别性能的对比结果。其中2CNN2F是指仅使用粗粒度的全局行为信息,而2CNN2F_J则表示多尺度信息融合方法,它融合了粗粒度的行为模式信息和细粒度的手部动作。从"},{"name":"xref","data":{"text":"表 2","type":"table","rid":"Table2","data":[{"name":"text","data":"表 2"}]}},{"name":"text","data":"可看出,本文方法的行为识别准确度为60.625%,如果仅使用粗粒度的全局行为信息,其识别率稍有降低,为56.875%,其识别性能和传统人工特征提取方法具有可比性。从实验数据也可看出,粗细粒度信息的融合能有效提高识别准确度。然而,手部细粒度信息的添加对识别准确度的贡献并不大,可能是因为左手关节点处于变化之中,以左手关节点为中心截取的视频只能反映手部细节信息,丢失了重要的运动轨迹信息。值得注意的是,如果仅对第11~16个行为 (即玩游戏、躺到沙发上、行走、弹吉他、站起和坐下) 进行识别,则识别准确率达98%,这是因为第11~16个行为间具有较大的差异,而数据集中的其他行为之间的差异则非常细微,如读书、写字、用笔记本电脑几个行为仅是手部动作有细微差别。实验结果说明,使用深度学习方法能够有效进行行为识别,尤其是当各行为动作差别较大时,识别率会得到显著提高。"}]},{"name":"table","data":{"id":"Table2","caption":[{"lang":"zh","label":[{"name":"text","data":"表2"}],"title":[{"name":"text","data":"本文方法与人工特征提取方法识别性能比较"}]},{"lang":"en","label":[{"name":"text","data":"Table 2"}],"title":[{"name":"text","data":"Performance comparison between artificial feature extraction method and proposed method"}]}],"note":[],"table":[{"head":[[{"style":"class:table_top_border","data":[{"name":"text","data":"算法"}]},{"style":"class:table_top_border","data":[{"name":"text","data":"识别率/%"}]}]],"body":[[{"style":"class:table_top_border2","data":[{"name":"text","data":"LOP features"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"5","type":"bibr","rid":"b5","data":[{"name":"text","data":"5"}]}},{"name":"text","data":"]"}]}]},{"style":"class:table_top_border2","data":[{"name":"text","data":"42.5"}]}],[{"data":[{"name":"text","data":"Joint Position features"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"5","type":"bibr","rid":"b5","data":[{"name":"text","data":"5"}]}},{"name":"text","data":"]"}]}]},{"data":[{"name":"text","data":"68"}]}],[{"data":[{"name":"text","data":"Dynamic Temporal 
Warping"},{"name":"sup","data":[{"name":"text","data":"["},{"name":"xref","data":{"text":"19","type":"bibr","rid":"b19","data":[{"name":"text","data":"19"}]}},{"name":"text","data":"]"}]}]},{"data":[{"name":"text","data":"54"}]}],[{"data":[{"name":"text","data":"2CNN2F"}]},{"data":[{"name":"text","data":"56.875"}]}],[{"style":"class:table_bottom_border","data":[{"name":"text","data":"2CNN2F_J"}]},{"style":"class:table_bottom_border","data":[{"name":"text","data":"60.625"}]}]],"foot":[]}]}}]},{"name":"sec","data":[{"name":"sectitle","data":{"label":[{"name":"text","data":"3.3"}],"title":[{"name":"text","data":"网络深度对识别的影响"}],"level":"2","id":"s3-3"}},{"name":"p","data":[{"name":"text","data":"关于如何构造深度神经网络到目前为止仍没有规律可循,现有的网络均是基于研究者的经验和实验探索。本文通过构建含3层CNN和4层CNN的神经网络,即3CNN2F_8和4CNN2F (如"},{"name":"xref","data":{"text":"表 3","type":"table","rid":"Table3","data":[{"name":"text","data":"表 3"}]}},{"name":"text","data":"所示),探讨了网络深度对识别效果的影响。网络参数如"},{"name":"xref","data":{"text":"表 1","type":"table","rid":"Table1","data":[{"name":"text","data":"表 1"}]}},{"name":"text","data":"所示。为了保证网络不过渡拟合,本实验使用24×8×128×128的视频序列作为神经网络的输入,即将规范化后的192×128×128视频,以8为步长,拆分成24个8×128×128的视频段,同时输入到具有24个并行结构的神经网络,此处只考虑了粗粒度信息,没有融合细粒度信息。由"},{"name":"xref","data":{"text":"表 2","type":"table","rid":"Table2","data":[{"name":"text","data":"表 2"}]}},{"name":"text","data":"可知,使用3CNN2F_8网络时的识别率为52.5%,而使用4CNN2F的识别率为58.75%。由于实验数据限制,本文难以提供更深或更浅深度网络的实验结果,现有结果可能意味着网络深度的增加对提高行为识别率有一定的影响,但若要增加网络深度,必须提供更多的训练样本以防止过度拟合。"}]},{"name":"table","data":{"id":"Table3","caption":[{"lang":"zh","label":[{"name":"text","data":"表3"}],"title":[{"name":"text","data":"不同网络中的参数配置及识别率"}]},{"lang":"en","label":[{"name":"text","data":"Table 3"}],"title":[{"name":"text","data":"Recognition accuracies in different networks with different 
parameters"}]}],"note":[],"table":[{"head":[[{"style":"class:table_top_border","data":[{"name":"text","data":"实验网络"}]},{"style":"class:table_top_border","data":[{"name":"text","data":"网络输入"}]},{"style":"class:table_top_border","data":[{"name":"italic","data":[{"name":"text","data":"L"}]},{"name":"sub","data":[{"name":"text","data":"Seg"}]}]},{"style":"class:table_top_border","data":[{"name":"italic","data":[{"name":"text","data":"L"}]},{"name":"sub","data":[{"name":"text","data":"Stride"}]}]},{"style":"class:table_top_border","data":[{"name":"text","data":"识别率/%"}]}]],"body":[[{"style":"class:table_top_border2","data":[{"name":"text","data":"2CNN2F"}]},{"style":"class:table_top_border2","data":[{"name":"text","data":"12×16×32×32"}]},{"style":"class:table_top_border2","data":[{"name":"text","data":"16"}]},{"style":"class:table_top_border2","data":[{"name":"text","data":"16"}]},{"style":"class:table_top_border2","data":[{"name":"text","data":"56.875"}]}],[{"data":[{"name":"text","data":"2CNN2F_J"}]},{"data":[{"name":"text","data":"24×16×32×32"}]},{"data":[{"name":"text","data":"16"}]},{"data":[{"name":"text","data":"16"}]},{"data":[{"name":"text","data":"60.625"}]}],[{"data":[{"name":"text","data":"3CNN2F_8"}]},{"data":[{"name":"text","data":"24×8×128×128"}]},{"data":[{"name":"text","data":"8"}]},{"data":[{"name":"text","data":"8"}]},{"data":[{"name":"text","data":"52.5"}]}],[{"data":[{"name":"text","data":"3CNN2F_4"}]},{"data":[{"name":"text","data":"47×8×128×128"}]},{"data":[{"name":"text","data":"8"}]},{"data":[{"name":"text","data":"4"}]},{"data":[{"name":"text","data":"56.875"}]}],[{"style":"class:table_bottom_border","data":[{"name":"text","data":"4CNN2F"}]},{"style":"class:table_bottom_border","data":[{"name":"text","data":"24×8×128×128"}]},{"style":"class:table_bottom_border","data":[{"name":"text","data":"8"}]},{"style":"class:table_bottom_border","data":[{"name":"text","data":"8"}]},{"style":"class:table_bottom_border","data":[{"name":"text","data":"58.75"}]}]],"foot":[]}]}}]},{"name":"sec","data":[{"name":"sectitle","data":{"label":[{"name":"text","data":"3.4"}],"title":[{"name":"text","data":"拆分步长对识别的影响"}],"level":"2","id":"s3-4"}},{"name":"p","data":[{"name":"text","data":"为了检验拆分步长对识别效果的影响,本文针对3CNN2F构建了两个不同输入的网络:3CNN2F_8和3CNN2F_4(网络参数如"},{"name":"xref","data":{"text":"表 1","type":"table","rid":"Table1","data":[{"name":"text","data":"表 1"}]}},{"name":"text","data":","},{"name":"xref","data":{"text":"表 3","type":"table","rid":"Table3","data":[{"name":"text","data":"表 3"}]}},{"name":"text","data":"所示)。为简化处理,本次实验也只考虑粗粒度信息作为输入,因此3CNN2F_8的输入为24×8×128×128的视频序列,而3CNN2F_4的输入的大小为47×8×128×128,即将规范化后的192×128×128视频,以步长为4,拆分成47个8×128×128的视频段,拆分后,相邻两个视频段间有4帧的重复。实验结果如"},{"name":"xref","data":{"text":"表 3","type":"table","rid":"Table3","data":[{"name":"text","data":"表 3"}]}},{"name":"text","data":"所示。由"},{"name":"xref","data":{"text":"表 3","type":"table","rid":"Table3","data":[{"name":"text","data":"表 
3"}]}},{"name":"text","data":"可知,步长为8时,识别准确率为52.5%,而步长为4时,识别准确率为56.875%。识别率得到有效提高,主要是因为一方面步长越小,拆分的视频段越多,深度网络需要的并行分支也越多,在横向上变的更宽,网络参数越多,网络的泛化能力越好;另一方面,步长的减小和拆分视频段的增加,同时也增加了训练数据,使网络训练效果更好。"}]}]}]},{"name":"sec","data":[{"name":"sectitle","data":{"label":[{"name":"text","data":"4"}],"title":[{"name":"text","data":"结论"}],"level":"1","id":"s4"}},{"name":"p","data":[{"name":"text","data":"鉴于深度视频可以描述物体的几何结构,而且对光线、颜色不敏感,本文以深度视频为研究对象,采用传统的二维CNN方法构建深度神经网络,对MSRDailyActivity3D数据集中的行为进行分类识别。实验及结果表明,本文提出的基于CNN的深度学习方法能够对以深度视频表示的人体行为进行有效识别,对MSRDailyActivity3D数据集中行为差异较大的躺下、行走、弹吉他、站起和坐下5个行为的平均识别准确率为98%,对整个数据集上所有行为的识别准确率为60.625%。接下来,本文还对如何提高深度学习的识别率进行了一定的探索。研究发现减小拆分视频段的步长,融合粗粒度和细粒度的视频信息,适当增加网络深度均能有效提高深度网络的识别率。未来的研究方向将主要集中在从不同粒度,不同信息源等方面进行信息融合以提高人体行为识别率。"}]}]}],"footnote":[],"reflist":{"title":[{"name":"text","data":"参考文献"}],"data":[{"id":"b1","label":"1","citation":[{"lang":"en","text":[{"name":"text","data":"DALAL N, TRIGGS B. Histograms of oriented gradients for human detection [C]."},{"name":"italic","data":[{"name":"text","data":"IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Piscataway, NJ: IEEE,"}]},{"name":"text","data":"2005: 886-893."}]}]},{"id":"b2","label":"2","citation":[{"lang":"en","text":[{"name":"text","data":"TIAN Y L, CAO L L, LIU Z C, "},{"name":"italic","data":[{"name":"text","data":"et al"}]},{"name":"text","data":".. Hierarchical filtered motion for action recognition in crowded videos [J]."},{"name":"italic","data":[{"name":"text","data":"IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews,"}]},{"name":"text","data":"2012, 42(3): 313-323."}]}]},{"id":"b3","label":"3","citation":[{"lang":"zh","text":[{"name":"text","data":"张迪飞, 张金锁, 姚克明, 等.基于SVM分类的红外舰船目标识别[J].红外与激光工程, 2016, 45(1):167-172."}]},{"lang":"en","text":[{"name":"text","data":"ZHANG D F, ZHANG J S, YAO K M, "},{"name":"italic","data":[{"name":"text","data":"et al"}]},{"name":"text","data":".. Infrared ship-target recognition based on SVM classification [J]."},{"name":"italic","data":[{"name":"text","data":"Infrared and Laser Engineering,"}]},{"name":"text","data":"2016, 45(1):167-172. (inchinese)"}]}]},{"id":"b4","label":"4","citation":[{"lang":"en","text":[{"name":"text","data":"LI W, ZHANG Z, LIU Z. Action recognition based on a bag of 3D points [C]."},{"name":"italic","data":[{"name":"text","data":"2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Piscataway, NJ: IEEE,"}]},{"name":"text","data":"2010:9-14."}]}]},{"id":"b5","label":"5","citation":[{"lang":"en","text":[{"name":"text","data":"WANG J, LIU Z C, WU Y, "},{"name":"italic","data":[{"name":"text","data":"et al"}]},{"name":"text","data":".. Mining actionlet ensemble for action recognition with depth cameras [C]"},{"name":"text","data":"."},{"name":"italic","data":[{"name":"text","data":"2012 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Piscataway, NJ: IEEE.,"}]},{"name":"text","data":"2012:1290-1297."}]}]},{"id":"b6","label":"6","citation":[{"lang":"en","text":[{"name":"text","data":"XIA L, AGGARWAL J K. Spatio-temporal depth cuboid similarity feature for activity recognition using depth camera [C]."},{"name":"italic","data":[{"name":"text","data":"2013 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Piscataway, NJ: IEEE,"}]},{"name":"text","data":"2013:2834-2841."}]}]},{"id":"b7","label":"7","citation":[{"lang":"en","text":[{"name":"text","data":"OREIFEJ O, LIU Z. 
Hon4d: histogram of oriented 4D normals for activity recognition from depth sequences [C]."},{"name":"italic","data":[{"name":"text","data":"2013 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Piscataway, NJ: IEEE,"}]},{"name":"text","data":"2013:716-723."}]}]},{"id":"b8","label":"8","citation":[{"lang":"en","text":[{"name":"text","data":"ZHANG C Y, TIAN Y L. Edge enhanced depth motion map for dynamic hand gesture recognition [C]."},{"name":"italic","data":[{"name":"text","data":"2013 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). Piscataway, NJ: IEEE,"}]},{"name":"text","data":"2013:500-505."}]}]},{"id":"b9","label":"9","citation":[{"lang":"en","text":[{"name":"text","data":"YE M, ZHANG Q, WANG L, "},{"name":"italic","data":[{"name":"text","data":"et al"}]},{"name":"text","data":".. A survey on human motion analysis from depth data [J]."},{"name":"italic","data":[{"name":"text","data":"Time-of-Flight and Depth Imaging, Sensors, Algorithms, and Applications, Springer,"}]},{"name":"text","data":"2013:149-187."}]}]},{"id":"b10","label":"10","citation":[{"lang":"en","text":[{"name":"text","data":"LE Q V, ZOU W Y, YEUNG S Y, "},{"name":"italic","data":[{"name":"text","data":"et al"}]},{"name":"text","data":".. Learning hierarchical invariant spatio-temporal features for action recognition with independent subspace analysis [C]."},{"name":"italic","data":[{"name":"text","data":"2011 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Piscataway, NJ: IEEE,"}]},{"name":"text","data":"2011:3361-3368."}]}]},{"id":"b11","label":"11","citation":[{"lang":"en","text":[{"name":"text","data":"ZHANG N, PALURI M, RANZATO M, "},{"name":"italic","data":[{"name":"text","data":"et al"}]},{"name":"text","data":".. Panda: pose aligned networks for deep attribute modeling [C]."},{"name":"italic","data":[{"name":"text","data":"2014 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Piscataway, NJ:IEEE,"}]},{"name":"text","data":"2014:1637-1644."}]}]},{"id":"b12","label":"12","citation":[{"lang":"en","text":[{"name":"text","data":"TOSHEV A, SZEGEDY C. Deeppose: human pose estimation via deep neural networks [C]."},{"name":"italic","data":[{"name":"text","data":"2014 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Piscataway, NJ:IEEE,"}]},{"name":"text","data":"2014:1653-1660."}]}]},{"id":"b13","label":"13","citation":[{"lang":"en","text":[{"name":"text","data":"LIU P, HAN S, MENG Z, "},{"name":"italic","data":[{"name":"text","data":"et al"}]},{"name":"text","data":".. Facial expression recognition via a boosted deep belief network [C]."},{"name":"italic","data":[{"name":"text","data":"2014 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Piscataway, NJ:IEEE,"}]},{"name":"text","data":"2014:1805-1812."}]}]},{"id":"b14","label":"14","citation":[{"lang":"en","text":[{"name":"text","data":"HE K, ZHANG X, REN S, "},{"name":"italic","data":[{"name":"text","data":"et al"}]},{"name":"text","data":".. Spatial pyramid pooling in deep convolutional networks for visual recognition [C]."},{"name":"italic","data":[{"name":"text","data":"Computer Vision-ECCV 2014, Springer,"}]},{"name":"text","data":"2014:346-361."}]}]},{"id":"b15","label":"15","citation":[{"lang":"en","text":[{"name":"text","data":"LIN M, CHEN Q, YAN S. 
Network in network [J]."},{"name":"italic","data":[{"name":"text","data":"Computer Science,"}]},{"name":"text","data":"2014."}]}]},{"id":"b16","label":"16","citation":[{"lang":"en","text":[{"name":"text","data":"SZEGEDY C, LIU W, JIA Y Q, "},{"name":"italic","data":[{"name":"text","data":"et al"}]},{"name":"text","data":".. Going deeper with convolutions [C]."},{"name":"italic","data":[{"name":"text","data":"2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR),"}]},{"name":"text","data":"2015:1-9."}]}]},{"id":"b17","label":"17","citation":[{"lang":"zh","text":[{"name":"text","data":"陈芬, 郑迪, 彭宗举, 等.基于模式复杂度的深度视频快速宏块模式选择算法[J].光学 精密工程, 2014, 22(8):2196-2204."}]},{"lang":"en","text":[{"name":"text","data":"CHEN F, ZHENG D, PENG Z J, "},{"name":"italic","data":[{"name":"text","data":"et al"}]},{"name":"text","data":".. Depth video fast macroblock mode selection algorithm based on mode complexity [J]."},{"name":"italic","data":[{"name":"text","data":"Opt. Precision Eng.,"}]},{"name":"text","data":" 2014, 22(8):2196-2204.(inchinese)"}]}]},{"id":"b18","label":"18","citation":[{"lang":"en","text":[{"name":"text","data":"COLLOBERT R, KAVUKCUOGLU K, FARABET C. Torch7: A matlab-like environment for machine learning [R]."},{"name":"italic","data":[{"name":"text","data":"BigLearn, NIPS Workshop,"}]},{"name":"text","data":" 2011."}]}]},{"id":"b19","label":"19","citation":[{"lang":"en","text":[{"name":"text","data":"MÜLLER M, RÖDER T. Motion templates for automatic classification and retrieval of motion capture data [C]."},{"name":"italic","data":[{"name":"text","data":"Proceedings of the 2006 ACM SIGGRAPH, Eurographics Association,"}]},{"name":"text","data":"2006: 137-146."}]}]}]},"response":[],"contributions":[],"acknowledgements":[],"conflict":[],"supportedby":[],"articlemeta":{"doi":"10.3788/OPE.20172503.0799","clc":[[{"name":"text","data":"TP394.1;TH691.9"}]],"dc":[],"publisherid":"gxjmgc-25-3-799","citeme":[],"fundinggroup":[{"lang":"zh","text":[{"name":"text","data":"重庆市教委科学技术研究基金资助项目(No.KJ1400926);广西自然科学基金重点项目(No.2014GXNSFDA118037)"}]}],"history":{"received":"2016-12-21","accepted":"2017-01-15","ppub":"2017-03-25","opub":"2020-06-16"},"copyright":{"data":[{"lang":"zh","data":[{"name":"text","data":"版权所有©《光学 精密工程》编辑部2017"}],"type":"copyright"},{"lang":"en","data":[{"name":"text","data":"Copyright ©2017 Optics and Precision Engineering. All rights reserved."}],"type":"copyright"}],"year":"2017"}},"appendix":[],"type":"research-article","ethics":[],"backSec":[],"supplementary":[],"journalTitle":"光学 精密工程","issue":"3","volume":"25","originalSource":[]}