
Week Nine: Analysis and More Experimentation

May 27, 2022

Hello everyone! This past week I spent some time thinking about the results of last week's experiment and continued running various small tests.

In week eight I created a graph showing how neural networks end up assigning high confidence to far-out regions of the input space. Interestingly, I did not see any of the adversarial subspaces I had intended to visualize. This makes me wonder whether adversarial subspaces are really just the classification regions extending outward, as we saw in the week-eight graph. Next week I want to test how methods of countering adversarial attacks that use alternative neural network structures classify space differently. If they produce lower confidence farther out from the data, that would support the idea that adversarial subspaces are the result of confidence failing to drop off as you move away from the data.
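As a rough illustration of the kind of measurement I have in mind, here is a minimal sketch in PyTorch. It assumes a trained classifier `model` and a data point `x` already exist; the function name, the `direction` argument, and the scale values are my own inventions for illustration, not the exact code from week eight.

```python
import torch
import torch.nn.functional as F

def confidence_along_ray(model, x, direction, scales):
    """Max softmax confidence as x is pushed along a direction, away from the data."""
    model.eval()
    direction = direction / direction.norm()  # unit direction in input space
    confidences = []
    with torch.no_grad():
        for s in scales:
            logits = model((x + s * direction).unsqueeze(0))  # add batch dimension
            confidences.append(F.softmax(logits, dim=1).max().item())
    return confidences

# e.g. confidence_along_ray(model, x, torch.randn_like(x), [0, 1, 2, 5, 10, 100])
# Confidence staying near 1.0 at large scales would match the week-eight graph.
```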

This week I also experimented with the distances between inputs, and I wrote code to see how the classification changes as you move from one point in input space to another. The second of these experiments did not show much of interest: I was looking for adversarial subspaces between data points, but on the few points I tested there were none. This led me to measure the distances between points, and I found that adversarial examples of an image were always very close to the attacked image and relatively far from any point of the same class as the adversarial example. This is somewhat expected but also interesting, as it suggests that adversarial examples are not created because the classification regions around groups of points are too large.
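For reference, here is a simplified sketch of the two checks described above, again assuming a trained PyTorch classifier `model`; the helper names are hypothetical stand-ins for my actual experiment code.

```python
import torch

def classes_along_path(model, x_start, x_end, steps=50):
    """Predicted class at evenly spaced points on the line from x_start to x_end.

    A class other than the endpoints' classes appearing partway along the
    path would indicate an adversarial subspace between the two points.
    """
    model.eval()
    labels = []
    with torch.no_grad():
        for t in torch.linspace(0.0, 1.0, steps):
            x = (1 - t) * x_start + t * x_end  # linear interpolation in input space
            labels.append(model(x.unsqueeze(0)).argmax(dim=1).item())
    return labels

def nearest_distance(x_adv, points):
    """Smallest L2 distance from an adversarial example to a set of points."""
    return torch.cdist(x_adv.flatten().unsqueeze(0),
                       points.flatten(start_dim=1)).min().item()
```

The interesting comparison is `nearest_distance` from the adversarial example to the attacked image versus to real examples of the class the adversarial example is assigned to.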
