Shape adaptive feature aggregation network for point cloud classification and segmentation
doi:10.37188/OPE.20253305.0777  CSTR:32169.14.OPE.20253305.0777
JIANG Zhihao1, ZHANG Meixiang1, XUE Weitao2, FU Lina1, WEN Jing1, LI Yongqiang2*, HUANG Hong1*
(1. Key Laboratory of Optoelectronic Technology and System, Ministry of Education, Chongqing University, Chongqing 400044, China; 2. Product Testing Center, Beijing Institute of Space Machinery and Electronics, Beijing 100094, China)
* Corresponding author, E-mail: hhuang@cqu.edu.cn; 99yongqiang@163.com
Abstract: The classification and segmentation of point clouds are widely applicable in robotic navigation, virtual reality, and autonomous driving. Most current deep learning approaches for point cloud processing employ multilayer perceptrons (MLPs) with shared weights and single pooling operations to aggregate local features. This methodology often hinders the accurate representation of structural information within point clouds exhibiting complex arrangements. To address these challenges, a novel shape-adaptive local feature encoding method for point clouds was proposed, aimed at effectively capturing the structural information of point clouds with diverse geometric configurations while enhancing classification and segmentation performance. First, an adaptive feature enhancement module was introduced; this module utilized difference operations and learnable adjustment factors to strengthen the feature representation, compensating for the descriptive limitations inherent in shared-weight MLPs. Building on this foundation, a feature aggregation module was designed to assign variable weights to distinct points based on their absolute spatial distances. This approach facilitates adaptation to the variable shapes of point cloud structures, accentuates representative point sets, and enables a more precise depiction of local structural information. Experimental evaluations conducted on three extensive public point cloud datasets reveal that the proposed method achieves excellent performance in both classification and segmentation tasks, attaining an overall instance-average classification accuracy of 93.9% on the ModelNet40 dataset, along with mean intersection over union (mIoU) scores of 85.9% and 59.7% on the ShapeNet and S3DIS datasets, respectively.
Key words: deep learning; point cloud classification; point cloud segmentation; local feature aggregation
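The abstract describes two components: an adaptive feature enhancement step that combines feature differences with learnable adjustment factors, and a local aggregation step that weights neighboring points by their absolute spatial distance rather than relying on a single pooling operation. The following is a minimal sketch of these two ideas, assuming a PyTorch implementation; the module and parameter names (AdaptiveFeatureEnhancement, DistanceWeightedAggregation, alpha) are hypothetical illustrations and do not reproduce the authors' actual network.

```python
# Minimal sketch of the two ideas summarized in the abstract (assumed PyTorch
# implementation; names are hypothetical, not the authors' code).
import torch
import torch.nn as nn


class AdaptiveFeatureEnhancement(nn.Module):
    """Enhance grouped neighbor features with feature differences scaled by
    a learnable per-channel adjustment factor."""

    def __init__(self, channels: int):
        super().__init__()
        # Learnable adjustment factor, one value per feature channel.
        self.alpha = nn.Parameter(torch.ones(channels))
        self.mlp = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, center_feat, neighbor_feat):
        # center_feat:   (B, C, N)     features of each center point
        # neighbor_feat: (B, C, N, K)  features of the K neighbors per center
        diff = neighbor_feat - center_feat.unsqueeze(-1)            # (B, C, N, K)
        enhanced = neighbor_feat + self.alpha.view(1, -1, 1, 1) * diff
        fused = torch.cat([enhanced, diff], dim=1)                  # (B, 2C, N, K)
        return self.mlp(fused)                                      # (B, C, N, K)


class DistanceWeightedAggregation(nn.Module):
    """Aggregate neighbor features with weights derived from absolute
    spatial distances, instead of a single max/avg pooling."""

    def forward(self, feat, center_xyz, neighbor_xyz):
        # feat:         (B, C, N, K)  enhanced neighbor features
        # center_xyz:   (B, N, 3)     coordinates of center points
        # neighbor_xyz: (B, N, K, 3)  coordinates of their K neighbors
        dist = torch.norm(neighbor_xyz - center_xyz.unsqueeze(2), dim=-1)  # (B, N, K)
        # Closer neighbors receive larger weights; softmax over the K neighbors.
        weights = torch.softmax(-dist, dim=-1).unsqueeze(1)                # (B, 1, N, K)
        return (feat * weights).sum(dim=-1)                                # (B, C, N)


if __name__ == "__main__":
    B, C, N, K = 2, 64, 1024, 16
    center_feat = torch.randn(B, C, N)
    neighbor_feat = torch.randn(B, C, N, K)
    center_xyz = torch.randn(B, N, 3)
    neighbor_xyz = torch.randn(B, N, K, 3)

    enhance = AdaptiveFeatureEnhancement(C)
    aggregate = DistanceWeightedAggregation()
    local_feat = aggregate(enhance(center_feat, neighbor_feat), center_xyz, neighbor_xyz)
    print(local_feat.shape)  # torch.Size([2, 64, 1024])
```

Replacing a single max pooling with a distance-based weighted sum keeps the aggregation permutation invariant while letting the effective receptive field adapt to the local shape of each neighborhood, which is the behavior the abstract attributes to the feature aggregation module.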
1 Introduction
3D point clouds contain rich structural and scale information and thus describe the real 3D world more faithfully; they have been widely applied in fields such as robot navigation[1], virtual reality[2], medical imaging[3], and autonomous driving[4].