Study on the Interpretability of One-Dimensional Convolutional Neural Networks in Mechanical Fault Feature Extraction

WANG Fangzhen1, ZHANG Xiaoli1, ZHAO Qiwu1, WANG Baojian2
(1. The Ministry of Education Key Laboratory of Road Construction Technology and Equipment, Chang'an University, Xi'an 710064, China; 2. The National Demonstration Center for Mechanical Foundation Experimental Teaching, Xi'an Jiaotong University, Xi'an 710049, China)

CLC number: TH133.33    Document code: A    DOI: 10.7652/xjtuxb202507003    Article number: 0253-987X(2025)07-0024-12
Abstract: To address the limited interpretability and reliability caused by the unknown internal decision-making and inference processes of one-dimensional convolutional neural networks (1D CNNs) in mechanical fault diagnosis, a similarity connection between signal analysis and neural networks is established from the perspective of feature extraction. By extracting the weights of the convolutional layers and observing how the time- and frequency-domain features vary from layer to layer, this study reveals the intrinsic feature extraction behavior of the network. Experimental test data and the publicly available bearing data from Case Western Reserve University are used for validation. The results indicate that the convolutional kernel is equivalent to a finite impulse response (FIR) filter, and that for simple binary classification tasks the max pooling layer alone satisfies the network's non-linearity requirement, so no activation function is needed in the convolutional layer. The network increases its frequency resolution incrementally, layer by layer, to identify frequency components close to the theoretical fault characteristic frequencies, exhibiting a similarity to the Fourier transform. The identification task is better accomplished when the spectral range is ultimately decomposed to 1 to 3 times the fault characteristic frequency. This study provides new ideas and methods for revealing the "black box" mechanism and interpretability of convolutional neural networks.
Keywords: interpretability; one-dimensional convolutional neural networks; Fourier transform; fault diagnosis; frequency domain
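The abstract's central claim, that a learned convolutional kernel behaves as a finite impulse response (FIR) filter, can be illustrated with a minimal sketch (not the authors' code): the weights of a 1D convolutional layer are read out and their magnitude spectrum computed with the FFT, following the weight-extraction view described above. The layer, the sampling frequency fs, and all names below are hypothetical placeholders; in the paper's setting the layer would come from a CNN trained on vibration signals, so the random weights used here yield meaningless spectra and serve only to show the procedure.

    import numpy as np
    import torch
    import torch.nn as nn

    fs = 12_000  # assumed sampling frequency of the vibration signal, Hz (placeholder)

    # Untrained stand-in for the first convolutional layer of a fault-diagnosis CNN;
    # no activation follows it, since (per the abstract) max pooling supplies the
    # non-linearity for simple binary classification tasks.
    conv = nn.Conv1d(in_channels=1, out_channels=8, kernel_size=64, bias=False)

    # Each row of the weight matrix is treated as the tap vector of an FIR filter.
    kernels = conv.weight.detach().squeeze(1).numpy()   # shape: (8, 64)

    freqs = np.fft.rfftfreq(kernels.shape[1], d=1.0 / fs)
    for i, taps in enumerate(kernels):
        response = np.abs(np.fft.rfft(taps))             # magnitude response of kernel i
        peak = freqs[np.argmax(response)]
        print(f"kernel {i}: dominant pass-band frequency ≈ {peak:.1f} Hz")

With a trained network, repeating this read-out for each convolutional layer would show how the effective pass bands change from layer to layer, which is how the abstract relates the network's layer-by-layer gain in frequency resolution to a Fourier-type decomposition.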
Convolutional neural networks (CNNs) have gradually become an important approach to mechanical health monitoring and intelligent maintenance, owing to their strong feature extraction capability, low computational cost, and end-to-end diagnosis that automatically performs feature extraction and fault-mode classification [1-2].