Exploring the feasibility of neural network obfuscation and the operation of obfuscated neural networks in classification problem

Free access

There are cases when it may be necessary to protect the knowledge embedded in a network without actually removing it from the network. The critical challenge is an attacker's ability to efficiently reconstruct inputs that produce a desired output, using methods far more effective than brute-force search. To address this, we propose an obfuscation approach that increases the complexity of model inversion and evasion attacks. This paper presents a defense mechanism based on low-parameter non-monotonic activation functions (modifications of ReLU), designed to increase adversarial complexity and protect model internals in transparent-box (white-box) environments. The paper introduces heuristics (stochastic transfer and equivalent exchange) and components (Immortal ReLU variants) that improve the convergence of a network with a non-monotonic activation function. Each neuron with such a function can emulate up to two standard neurons, which can in some cases halve the network size without significant loss of quality.
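To illustrate the idea of a non-monotonic ReLU modification, here is a minimal sketch of one possible construction. This is a hypothetical example, not the paper's actual nmReLU/iReLU definition (the abstract does not give the formulas); the function name `nm_relu` and the offset parameter `c` are assumptions. The point it demonstrates is that a neuron with such an activation responds on both sides of a dead band, which is the property that lets one neuron emulate up to two standard ReLU neurons.

```python
import numpy as np

def nm_relu(x, c=1.0):
    """Hypothetical non-monotonic ReLU-style activation (illustration only).

    Zero on the band [-c, 0], rising linearly for x > 0 (like standard ReLU)
    and also rising as x moves left of -c. The left branch makes the function
    non-monotonic, so a single neuron can activate for two disjoint input
    regions that would normally require two standard ReLU neurons.
    """
    return np.maximum(0.0, x) + np.maximum(0.0, -x - c)

# The function decreases, flattens, then increases: non-monotonic.
xs = np.array([-2.0, -1.0, 0.0, 1.0])
print(nm_relu(xs))
```

Because the response is symmetric-looking but offset by `c`, an attacker inverting the network in a white-box setting can no longer assume that a given activation value corresponds to a unique pre-activation region, which is one intuition for why such activations raise the cost of model inversion.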


Activation function, non-monotonic activation function, Non-monotonic ReLU, Immortal ReLU, nmReLU, nmiReLU, iReLU, ilReLU, white-box neural network security, transparent-box neural network security, defense against adversarial attacks

Short address: https://sciup.org/142247119

IDR: 142247119   |   UDC: 004.032.26