Decomposition of weights into interpretable components and its relation to normalization layer statistics
Authors: Shokorov V.A., Samosiuk A.V.
Journal: Proceedings of the Moscow Institute of Physics and Technology (Trudy MIPT) @trudy-mipt
Section: Computer Science and Control
In issue: No. 4 (68), Vol. 17, 2025.
Free access
In linear spaces, correlations below a characteristic threshold are statistically indistinguishable from noise. Linear layers of neural networks, whose feature vectors are designed to interact with specific data features, are also subject to this effect: Out-of-Distribution (OOD) data can produce activations that fall within the training-domain variance recorded by normalization layers (BatchNorm). To estimate the level of random activation, we decompose the linear layer's weight matrix into interpretable components: a signal component W_Δ and a noise component W_rand, based on the bounds of the Marchenko–Pastur distribution. Experiments on ResNet-50 (trained with ArcFace on MS1Mv3) with OOD data (COCO) demonstrate that: (1) training data interact with W_Δ significantly more strongly than with W_rand; (2) the distributions of OOD activations produced by W_rand and by W_Δ are statistically indistinguishable, corresponding to the level of random activation. We conclude that W_rand serves as a reliable indicator of the random-activation threshold.
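The decomposition described in the abstract can be sketched as follows: take the SVD of the weight matrix, compute the Marchenko–Pastur upper edge for a pure-noise matrix of the same shape, and assign singular components above that edge to W_Δ and the rest to W_rand. This is a minimal illustrative sketch, not the authors' implementation; the function name `mp_split`, the entrywise estimate of the noise scale σ, and the exact thresholding rule are assumptions.

```python
import numpy as np

def mp_split(W, sigma=None):
    """Split W (shape (p, n), p <= n) into a signal part W_delta and a
    noise part W_rand, thresholding singular values at the
    Marchenko-Pastur upper edge. Sketch under stated assumptions."""
    p, n = W.shape
    q = p / n                          # aspect ratio of the matrix
    if sigma is None:
        sigma = W.std()                # crude noise-scale estimate (assumption)
    # MP upper edge for eigenvalues of (1/n) W W^T, converted into a
    # threshold on the singular values of W itself.
    lam_plus = sigma ** 2 * (1 + np.sqrt(q)) ** 2
    s_thresh = np.sqrt(n * lam_plus)
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    keep = s > s_thresh                # components above the noise bulk
    W_delta = (U[:, keep] * s[keep]) @ Vt[keep]
    W_rand = (U[:, ~keep] * s[~keep]) @ Vt[~keep]
    return W_delta, W_rand
```

On a synthetic matrix built as Gaussian noise plus a strong rank-1 spike, this split recovers the spike in W_delta while W_rand absorbs the noise bulk, and the two parts sum back to the original matrix.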
Keywords: BatchNorm statistics, spectral weights decomposition, explainable artificial intelligence, Marchenko–Pastur distribution
Short address: https://sciup.org/142247122
IDR: 142247122 | UDC: 004.93’1, 004.825