1. School of Artificial Intelligence, Wuchang University of Technology, Wuhan 430223, Hubei, China
2. School of Safety Science and Emergency Management, Wuhan University of Technology, Wuhan 430070, Hubei, China
3. Key Laboratory of Mine Environmental Monitoring and Improving around Poyang Lake, Ministry of Natural Resources, East China University of Technology, Nanchang 330013, Jiangxi, China
4. Changjiang River Scientific Research Institute, Changjiang Water Resources Commission, Wuhan 430019, Hubei, China
Received: 28 November 2023
Revised: 30 January 2024
Published: 25 June 2024
CHEN Xijiang, SUN Xi, ZHAO Bufan, et al. Part segmentation method of point cloud considering optimal allocation and optimal mask[J]. Optics and Precision Engineering, 2024, 32(12): 1941-1953. DOI: 10.37188/OPE.20243212.1941.
To enhance the generalization ability of the network and improve the accuracy of part segmentation, this paper proposed a point cloud part segmentation method that considers the optimal allocation and the optimal mask. First, the optimal allocation between two point clouds was defined according to the Earth Mover's Distance. Then, the point cloud was grouped by farthest point sampling, the saliency of each point within a group was computed, and the optimal mask of the point cloud was determined by a ball query so as to preserve the semantic information of the original point cloud. Finally, the neighborhood of a high-saliency point in one point cloud replaced the neighborhood of a low-saliency point in the other, achieving mixing-based augmentation between point clouds. The method was validated on the ShapeNet dataset: feeding the augmented data into the PointNet, PointNet++ and DGCNN models raised their mIoU from 83.7%, 85.1% and 85.1% to 85.1%, 86.3% and 86.0%, respectively, effectively improving part segmentation.
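The augmentation pipeline described above can be sketched as follows. This is a minimal NumPy/SciPy illustration of the idea, not the authors' implementation: the per-point saliency proxy (distance to the cloud centroid), the number of FPS groups, the ball-query radius and the median saliency threshold are placeholder assumptions, and scipy.optimize.linear_sum_assignment stands in for the Earth Mover's Distance assignment between two equal-size point clouds.

```python
# Minimal sketch of saliency-guided point cloud mixing (illustrative only).
# Placeholder choices: saliency = distance to centroid, n_groups, radius,
# and the median-saliency gate are assumptions, not the paper's settings.
import numpy as np
from scipy.optimize import linear_sum_assignment


def farthest_point_sampling(points: np.ndarray, n_samples: int) -> np.ndarray:
    """Greedy FPS: pick n_samples indices that are mutually far apart."""
    n = points.shape[0]
    selected = np.zeros(n_samples, dtype=int)
    dist = np.full(n, np.inf)
    selected[0] = np.random.randint(n)
    for i in range(1, n_samples):
        d = np.linalg.norm(points - points[selected[i - 1]], axis=1)
        dist = np.minimum(dist, d)
        selected[i] = int(np.argmax(dist))
    return selected


def ball_query(points: np.ndarray, center: np.ndarray, radius: float) -> np.ndarray:
    """Indices of all points within `radius` of `center` (the neighborhood mask)."""
    return np.where(np.linalg.norm(points - center, axis=1) <= radius)[0]


def point_saliency(points: np.ndarray) -> np.ndarray:
    """Placeholder per-point saliency: distance from the cloud centroid."""
    return np.linalg.norm(points - points.mean(axis=0), axis=1)


def optimal_allocation(src: np.ndarray, dst: np.ndarray) -> np.ndarray:
    """EMD-style one-to-one assignment between two equal-size clouds."""
    cost = np.linalg.norm(src[:, None, :] - dst[None, :, :], axis=-1)
    row, col = linear_sum_assignment(cost)  # minimizes total transport cost
    perm = np.empty_like(col)
    perm[row] = col
    return perm  # src[i] is matched with dst[perm[i]]


def mix_clouds(cloud_a: np.ndarray, cloud_b: np.ndarray,
               n_groups: int = 8, radius: float = 0.1) -> np.ndarray:
    """Swap high-saliency neighborhoods of cloud_a into low-saliency regions of cloud_b."""
    mixed = cloud_b.copy()
    perm = optimal_allocation(cloud_a, cloud_b)       # align the two clouds
    centers_a = farthest_point_sampling(cloud_a, n_groups)
    sal_a, sal_b = point_saliency(cloud_a), point_saliency(cloud_b)
    # keep only the most salient FPS anchors of cloud_a as donors
    anchors = centers_a[np.argsort(-sal_a[centers_a])][: n_groups // 2]
    for idx_a in anchors:
        nbr_a = ball_query(cloud_a, cloud_a[idx_a], radius)   # donor neighborhood
        idx_b = perm[idx_a]                                   # assigned point in cloud_b
        if sal_b[idx_b] < np.median(sal_b):                   # only overwrite low-saliency regions
            nbr_b = ball_query(cloud_b, cloud_b[idx_b], radius)
            k = min(len(nbr_a), len(nbr_b))
            mixed[nbr_b[:k]] = cloud_a[nbr_a[:k]]
    return mixed


if __name__ == "__main__":
    a = np.random.rand(1024, 3).astype(np.float32)
    b = np.random.rand(1024, 3).astype(np.float32)
    print(mix_clouds(a, b).shape)  # (1024, 3)
```

In a part segmentation setting, the per-point labels of the donor neighborhoods would be carried along in the same swap so that the mixed sample remains consistently annotated.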
data augmentation; point cloud; part segmentation; saliency
CHARLES R Q, HAO S, MO K C, et al. PointNet: deep learning on point sets for 3D classification and segmentation[C]. 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Honolulu, HI, USA. IEEE, 2017: 77-85. doi: 10.1109/cvpr.2017.16
QI C R, YI L, SU H, et al. PointNet++: deep hierarchical feature learning on point sets in a metric space[EB/OL]. 2017: arXiv: 1706.02413. http://arxiv.org/abs/1706.02413
WANG Y, SUN Y B, LIU Z W, et al. Dynamic graph CNN for learning on point clouds[J]. ACM Transactions on Graphics, 2019, 38(5): 1-12. doi: 10.1145/3326362
LI Y, BU R, SUN M, et al. PointCNN: convolution on Χ-transformed points[C]. Advances in Neural Information Processing Systems, 2018, 31.
SHEN Y R, FENG C, YANG Y Q, et al. Mining point cloud local structures by kernel correlation and graph pooling[C]. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Salt Lake City, UT, USA. IEEE, 2018: 4548-4557. doi: 10.1109/cvpr.2018.00478
WU W X, QI Z A, LI F X. PointConv: deep convolutional networks on 3D point clouds[C]. 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Long Beach, CA, USA. IEEE, 2019: 9613-9622. doi: 10.1109/cvpr.2019.00985
THOMAS H, QI C R, DESCHAUD J E, et al. KPConv: flexible and deformable convolution for point clouds[C]. 2019 IEEE/CVF International Conference on Computer Vision (ICCV). Seoul, Korea (South). IEEE, 2019: 6410-6419. doi: 10.1109/iccv.2019.00651
XU M T, DING R Y, ZHAO H S, et al. PAConv: position adaptive convolution with dynamic kernel assembling on point clouds[C]. 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Nashville, TN, USA. IEEE, 2021: 3172-3181. doi: 10.1109/cvpr46437.2021.00319
HALEVY A, NORVIG P, PEREIRA F. The unreasonable effectiveness of data[J]. IEEE Intelligent Systems, 2009, 24(2): 8-12. doi: 10.1109/mis.2009.36
SUN C, SHRIVASTAVA A, SINGH S, et al. Revisiting unreasonable effectiveness of data in deep learning era[C]. 2017 IEEE International Conference on Computer Vision (ICCV). Venice, Italy. IEEE, 2017: 843-852. doi: 10.1109/iccv.2017.97
CHEN Y, HU V T, GAVVES E, et al. PointMixup: augmentation for point clouds[C]. Computer Vision - ECCV 2020: 16th European Conference, Glasgow, UK, August 23-28, 2020, Proceedings, Part III. Springer International Publishing, 2020: 330-345. doi: 10.1007/978-3-030-58580-8_20
KRIZHEVSKY A, SUTSKEVER I, HINTON G E. ImageNet classification with deep convolutional neural networks[J]. Communications of the ACM, 2017, 60(6): 84-90. doi: 10.1145/3065386
LECUN Y, BOTTOU L, BENGIO Y, et al. Gradient-based learning applied to document recognition[J]. Proceedings of the IEEE, 1998, 86(11): 2278-2324. doi: 10.1109/5.726791
MORENO-BAREA F J, STRAZZERA F, JEREZ J M, et al. Forward noise adjustment scheme for data augmentation[C]. 2018 IEEE Symposium Series on Computational Intelligence (SSCI). Bangalore, India. IEEE, 2018: 728-734. doi: 10.1109/ssci.2018.8628917
WU R, YAN S G, SHAN Y, et al. Deep Image: scaling up image recognition[EB/OL]. 2015: arXiv: 1501.02876. http://arxiv.org/abs/1501.02876
INOUE H. Data augmentation by pairing samples for images classification[J]. arXiv preprint, arXiv: 1801.02929, 2018. doi: 10.48550/arXiv.1801.02929
ZHONG Z, ZHENG L, KANG G L, et al. Random erasing data augmentation[J]. Proceedings of the AAAI Conference on Artificial Intelligence, 2020, 34(7): 13001-13008. doi: 10.1609/aaai.v34i07.7000
DEVRIES T, TAYLOR G W. Improved regularization of convolutional neural networks with Cutout[EB/OL]. 2017: arXiv: 1708.04552. http://arxiv.org/abs/1708.04552
CHEN P G, LIU S, ZHAO H S, et al. GridMask data augmentation[EB/OL]. 2020: arXiv: 2001.04086. http://arxiv.org/abs/2001.04086
ZHANG H Y, CISSE M, DAUPHIN Y N, et al. Mixup: beyond empirical risk minimization[EB/OL]. 2017: arXiv: 1710.09412. http://arxiv.org/abs/1710.09412
YUN S, HAN D, CHUN S, et al. CutMix: regularization strategy to train strong classifiers with localizable features[C]. 2019 IEEE/CVF International Conference on Computer Vision (ICCV). Seoul, Korea (South). IEEE, 2019: 6022-6031. doi: 10.1109/iccv.2019.00612
CRESWELL A, WHITE T, DUMOULIN V, et al. Generative adversarial networks: an overview[J]. IEEE Signal Processing Magazine, 2018, 35(1): 53-65. doi: 10.1109/msp.2017.2765202
CUBUK E D, ZOPH B, MANE D, et al. AutoAugment: learning augmentation policies from data[EB/OL]. 2018: arXiv: 1805.09501. http://arxiv.org/abs/1805.09501. doi: 10.1109/cvpr.2019.00020
SHESHAPPANAVAR S V, SINGH V V, KAMBHAMETTU C. PatchAugment: local neighborhood augmentation in point cloud classification[C]. 2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW). Montreal, BC, Canada. IEEE, 2021: 2118-2127. doi: 10.1109/iccvw54120.2021.00240
KIM S, LEE S, HWANG D, et al. Point cloud augmentation with weighted local transformations[C]. 2021 IEEE/CVF International Conference on Computer Vision (ICCV). Montreal, QC, Canada. IEEE, 2021: 528-537. doi: 10.1109/iccv48922.2021.00059
LI R H, LI X Z, HENG P A, et al. PointAugment: an auto-augmentation framework for point cloud classification[C]. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Seattle, WA, USA. IEEE, 2020: 6377-6386. doi: 10.1109/cvpr42600.2020.00641
ZHANG J, CHEN L, OUYANG B, et al. PointCutMix: regularization strategy for point cloud classification[J]. Neurocomputing, 2022, 505: 58-67. doi: 10.1016/j.neucom.2022.07.049
LEE D, LEE J, LEE J, et al. Regularization strategy for point cloud via rigidly mixed sample[C]. 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Nashville, TN, USA. IEEE, 2021: 15895-15904. doi: 10.1109/cvpr46437.2021.01564
UDDIN A F M S, MONIRA M S, SHIN W, et al. SaliencyMix: a saliency guided data augmentation strategy for better regularization[EB/OL]. 2020: arXiv: 2006.01791. http://arxiv.org/abs/2006.01791
KIM J H, CHOO W, SONG H O. Puzzle Mix: exploiting saliency and local statistics for optimal mixup[C]. International Conference on Machine Learning. PMLR, 2020: 5275-5285. doi: 10.48550/arXiv.2009.06962
HUANG S L, WANG X C, TAO D C. SnapMix: semantically proportional mixing for augmenting fine-grained data[C]. Proceedings of the AAAI Conference on Artificial Intelligence, 2021, 35(2): 1628-1636. doi: 10.1609/aaai.v35i2.16255
LEE S, JEON M, KIM I, et al. SageMix: saliency-guided mixup for point clouds[C]. Advances in Neural Information Processing Systems, 2022, 35: 23580-23592.
LEE C H, VARSHNEY A, JACOBS D W. Mesh saliency[C]. ACM SIGGRAPH 2005 Papers. Los Angeles, California. ACM, 2005: 659-666. doi: 10.1145/1186822.1073244
CHANG A X, FUNKHOUSER T, GUIBAS L, et al. ShapeNet: an information-rich 3D model repository[J]. arXiv preprint, arXiv: 1512.03012, 2015.