Week Four: Saturating Autoencoders
Hello everyone! This past week I worked on saturating the autoencoder I built in week 1. To do this, I used the Jacobian-matrix-based approach I discussed in week 3.
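To recap the idea from week 3: saturating the encoder means penalizing the Frobenius norm of the Jacobian of the latent code with respect to the input, so small input perturbations produce small latent changes. Here is a minimal NumPy sketch of that penalty for a single sigmoid encoder layer; the layer shapes, weights, and function names are hypothetical stand-ins, not my actual network.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def contractive_penalty(x, W, b):
    """Squared Frobenius norm of the Jacobian dh/dx for a single
    sigmoid encoder layer h = sigmoid(W @ x + b).

    For sigmoid units, dh_j/dx_i = h_j * (1 - h_j) * W[j, i], so
    ||J||_F^2 = sum_j (h_j * (1 - h_j))^2 * sum_i W[j, i]^2,
    which avoids materializing the full Jacobian.
    """
    h = sigmoid(W @ x + b)
    gate = (h * (1.0 - h)) ** 2          # per-unit sigmoid derivative, squared
    row_norms = np.sum(W ** 2, axis=1)   # squared L2 norm of each weight row
    return float(np.dot(gate, row_norms))

# Hypothetical tiny example: 8-dim input, 4-dim latent code.
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 8))
b = np.zeros(4)
x = rng.normal(size=8)
penalty = contractive_penalty(x, W, b)
```

In training, this penalty would be added to the reconstruction loss with a weighting coefficient, pushing the sigmoid units toward their flat (saturated) regions.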
While planning how to saturate my network, I set two objectives: make the encoder resistant to small perturbations, and avoid hurting the ability of an image to be classified after it passes through the autoencoder. To balance these, I ran the original autoencoder over the whole train set and test set and recorded the latent outputs, using them as the intended outputs in a new train set and test set. On this new data set, I trained a new encoder to produce the same latent outputs while also being optimized to become saturated. Because its outputs stay close to the old encoder's, the decoder can remain unsaturated, so the ability to reconstruct images is less affected.

Once training was complete, I attached the new encoder to the old decoder and trained the combined network a bit more so the decoder could adjust to the encoder's slightly different outputs. During this fine-tuning, I froze the encoder's weights so it could not become unsaturated.
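The pipeline above can be sketched end to end. In this toy NumPy version, the week-1 encoder and decoder are stood in for by fixed linear maps, and least-squares fits stand in for gradient training (my real training also includes the saturation penalty, which is omitted here); all names and shapes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-ins for the week-1 network: a fixed "old" encoder
# (8-dim input -> 4-dim latent) and decoder (4-dim latent -> 8-dim output).
W_enc_old = rng.normal(size=(4, 8)) * 0.1
W_dec_old = rng.normal(size=(8, 4)) * 0.1

X_train = rng.normal(size=(100, 8))  # toy "train set"

# Step 1: run the old encoder over the data set and record its latent
# outputs as the regression targets for the new encoder.
Z_targets = X_train @ W_enc_old.T

# Step 2: train a new encoder to reproduce those latent targets.
# (A least-squares fit stands in for gradient training here.)
W_enc_new, *_ = np.linalg.lstsq(X_train, Z_targets, rcond=None)
W_enc_new = W_enc_new.T

# Step 3: freeze the new encoder and fine-tune only the decoder so it
# adapts to the slightly different latent codes. (In an autograd
# framework this is the step where the encoder's parameters would be
# excluded from the optimizer / have gradients disabled.)
Z_new = X_train @ W_enc_new.T  # encoder weights stay fixed from here on
W_dec_tuned, *_ = np.linalg.lstsq(Z_new, X_train, rcond=None)
W_dec_tuned = W_dec_tuned.T

recon = Z_new @ W_dec_tuned.T  # reconstructions from the combined network
```

The key design point is step 3: only the decoder's parameters are updated, which is what keeps the retrained encoder from drifting back out of saturation.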
Hope to see y’all again next week, when I’ll share the test results against adversarial attacks.