Week Three: Saturating Networks
Hello everyone! I spent this last week reading about saturating networks and studying more advanced topics in linear algebra. Saturating a network means pushing its weight values toward the extremes, so they are either very large or very small in magnitude, with little in between. This practice is generally avoided in machine learning because it can make training harder and hurt classification accuracy; however, for my purpose of countering adversarial attacks, it is useful.
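To make "little in between" concrete, here is a tiny NumPy sketch. The helper `weight_saturation_fraction` is my own hypothetical illustration, not something from a paper: it just measures how much of a layer's weight mass sits near the extremes.

```python
import numpy as np

def weight_saturation_fraction(weights, threshold=0.9):
    """Fraction of weights whose magnitude is within `threshold` of the
    layer's largest-magnitude weight, i.e. sitting at the extremes."""
    w = np.abs(np.ravel(weights))
    return np.mean(w >= threshold * w.max())

rng = np.random.default_rng(0)
typical_layer = rng.normal(0.0, 0.05, size=(256, 256))      # most weights near zero
saturated_layer = rng.choice([-1.0, 1.0], size=(256, 256))  # all weights at the extremes
print(weight_saturation_fraction(typical_layer))    # small: few extreme weights
print(weight_saturation_fraction(saturated_layer))  # 1.0: fully saturated
```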
Though saturating a network can hurt regular classification, it helps with resistance to adversarial attacks: extreme weights push neurons into the flat, saturated regions of their activation functions, where the output barely changes in response to small input perturbations. Since adversarial attacks work precisely by exploiting that sensitivity, this makes saturation a good fit for my project.
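A quick back-of-the-envelope example of that insensitivity, using a single tanh unit (the numbers here are purely illustrative):

```python
import numpy as np

x, eps = 1.0, 0.05     # input and a small adversarial-style perturbation
for w in (0.5, 10.0):  # modest weight vs. extreme ("saturated") weight
    delta = abs(np.tanh(w * (x + eps)) - np.tanh(w * x))
    print(f"w = {w:>4}: output change = {delta:.2e}")
# w = 0.5 changes the output by roughly 2e-02, while w = 10.0 changes it
# by roughly 3e-09: the saturated unit sits deep in tanh's flat region,
# so the same perturbation barely moves the output.
```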
To learn more, I read Aran Nayebi et al.’s 2017 paper “Biologically inspired protection of deep networks from adversarial attacks”. In it, the authors describe saturating neural networks using the Jacobian of the loss with respect to the input, and report that networks saturated this way were far more resistant to adversarial attacks. My plan is to apply the same technique to an autoencoder instead, giving a fast defense against adversarial attacks that doesn’t require retraining the large network being defended.
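To close, here is a minimal sketch of how I imagine wiring up such an input-gradient penalty on an autoencoder in PyTorch. The architecture, penalty weight, and data below are placeholders I chose for illustration, not the paper's exact setup.

```python
import torch
import torch.nn as nn

# Toy autoencoder; layer sizes are placeholders, not the paper's architecture.
autoencoder = nn.Sequential(
    nn.Linear(784, 64), nn.Tanh(),
    nn.Linear(64, 784), nn.Tanh(),
)
opt = torch.optim.Adam(autoencoder.parameters(), lr=1e-3)
lam = 0.1  # penalty strength (hypothetical value)

def training_step(x):
    x = x.requires_grad_(True)  # track gradients with respect to the input
    recon_loss = ((autoencoder(x) - x) ** 2).mean()
    # Penalize the input-gradient of the loss, so small input perturbations
    # cannot move the output much (the saturating effect described above).
    (input_grad,) = torch.autograd.grad(recon_loss, x, create_graph=True)
    loss = recon_loss + lam * input_grad.abs().sum()
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

batch = torch.rand(32, 784)  # stand-in data
print(training_step(batch))
```

The `create_graph=True` flag is the key design choice here: it makes the input gradient itself part of the differentiable loss, so training actively pushes that gradient toward zero rather than merely measuring it.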