
Skeleton Behavior Recognition Combining Adaptive Local Graph Convolution with Multi-Scale Temporal Modeling


Keywords: local graph convolution; adaptive graph; multi-scale temporal modeling; behavior recognition

CLC number: TP391.41  Document code: A  Article ID: 1001-3695(2025)07-037-2199-07

doi:10.19734/j.issn.1001-3695.2024.08.0370

Abstract: Given the inherent topological structure of the human skeleton, researchers effectively model skeleton data using graph convolutional networks for behavior recognition. However, challenges arise in skeleton behavior recognition methods because the graph and temporal convolutions rely on a fixed topological graph structure and a fixed kernel size, which makes it difficult to adapt to variable action types, postures, and behavior durations. This reliance leads to modeling errors and affects recognition accuracy. To tackle this issue, this paper proposed a skeleton behavior recognition method that combined adaptive local graph convolution with multi-scale temporal modeling. The method allowed independent dynamic characterization of the human skeletal structure through the adaptive local graph convolution module. It designed a multi-scale temporal modeling module to accommodate behaviors of varying durations while reducing the number of parameters and the computational complexity. Furthermore, it introduced a spatio-temporal DropGraph structure to dynamically adjust the graph topology, which improved the model's generalization ability and prevented overfitting. The experiments show that the method achieves accuracy rates of 93.39% and 97.18% under the cross-subject (C-Sub) and cross-view (C-View) benchmarks of the NTU RGB+D 60 dataset, respectively, and 90.48% and 91.95% under the cross-subject (C-Sub) and cross-set (C-Set) benchmarks of the NTU RGB+D 120 dataset, respectively. These results outperform those of existing behavior recognition methods, demonstrating the superiority of the approach.

Key words: local graph convolution; adaptive graph; multi-scale temporal modeling; behavior recognition
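To make the idea of an adaptive graph convolution concrete, the following is a minimal sketch, not the paper's implementation: it applies one graph convolution step over skeleton joints where a fixed skeletal adjacency `A` is augmented by a learnable offset `B`, so the effective topology can adapt during training. All names, shapes, and the toy 3-joint skeleton are illustrative assumptions.

```python
import numpy as np

def normalize_adjacency(A):
    """Symmetrically normalize A + I, as in standard graph convolutions."""
    A_hat = A + np.eye(A.shape[0])          # add self-loops
    d = A_hat.sum(axis=1)                   # node degrees
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))  # D^{-1/2}
    return D_inv_sqrt @ A_hat @ D_inv_sqrt

def adaptive_graph_conv(X, A, B, W):
    """One adaptive graph convolution step: (A_norm + B) X W.

    X: (V joints, C_in channels) joint features for one frame
    A: (V, V) fixed skeletal adjacency
    B: (V, V) learnable topology offset (would be trained; here fixed)
    W: (C_in, C_out) feature projection
    """
    A_adapt = normalize_adjacency(A) + B    # learned offset adapts the topology
    return A_adapt @ X @ W

# Toy 3-joint chain skeleton: joint 0 - joint 1 - joint 2
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
B = np.zeros((3, 3))                        # learnable in practice; zero here
X = np.random.randn(3, 4)                   # 4 input channels per joint
W = np.random.randn(4, 8)                   # project to 8 output channels
out = adaptive_graph_conv(X, A, B, W)
print(out.shape)                            # (3, 8): 3 joints, 8 channels
```

In the full method this step would run per layer over a spatio-temporal feature tensor, with `B` updated by backpropagation so that joints not physically connected (e.g. the two hands during a clapping action) can still exchange information.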

0 Introduction

As one of the core topics in computer vision, behavior recognition has demonstrated its importance and broad application potential in fields such as virtual reality, smart homes, and autonomous driving.
