Incrementing Adversarial Robustness with Autoencoding for Machine Learning Model Attacks
DOI: 10.1109/SIU.2019.8806432

Abstract
Machine learning is now in widespread use, and attacks targeting the machine learning process have emerged alongside it. Attacks on machine learning models can cause misclassification, disrupt decision mechanisms, and evade filters. In this study, robustness against such attacks is demonstrated through autoencoding, using non-targeted attacks against a model trained on the MNIST dataset. The results and improvements are presented for the non-targeted attack, one of the most common and important attack methods.
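As a concrete illustration of the approach the abstract describes, below is a minimal sketch of an autoencoder used as a preprocessing defence in front of an MNIST classifier. The network architectures, hyperparameters, and the choice of FGSM as the non-targeted attack are illustrative assumptions for this sketch, not the paper's exact setup.

```python
# Sketch: autoencoder as a preprocessing defence for an MNIST classifier
# against a non-targeted adversarial attack. Architectures, epsilon, and the
# use of FGSM are assumptions for illustration, not the paper's exact method.
import tensorflow as tf
from tensorflow.keras import layers, models

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train[..., None].astype("float32") / 255.0
x_test = x_test[..., None].astype("float32") / 255.0

# Classifier to be attacked.
clf = models.Sequential([
    layers.Flatten(input_shape=(28, 28, 1)),
    layers.Dense(128, activation="relu"),
    layers.Dense(10, activation="softmax"),
])
clf.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
            metrics=["accuracy"])
clf.fit(x_train, y_train, epochs=3, batch_size=128, verbose=0)

# Autoencoder trained to reconstruct clean digits; at inference it
# projects (possibly adversarial) inputs back toward the clean manifold.
ae = models.Sequential([
    layers.Flatten(input_shape=(28, 28, 1)),
    layers.Dense(64, activation="relu"),          # bottleneck
    layers.Dense(28 * 28, activation="sigmoid"),
    layers.Reshape((28, 28, 1)),
])
ae.compile(optimizer="adam", loss="mse")
ae.fit(x_train, x_train, epochs=3, batch_size=128, verbose=0)

# Non-targeted FGSM: perturb inputs in the direction that increases the
# loss of the true class, without aiming at any specific wrong class.
def fgsm(model, x, y, eps=0.2):
    x = tf.convert_to_tensor(x)
    with tf.GradientTape() as tape:
        tape.watch(x)
        loss = tf.keras.losses.sparse_categorical_crossentropy(y, model(x))
    grad = tape.gradient(loss, x)
    return tf.clip_by_value(x + eps * tf.sign(grad), 0.0, 1.0)

x_adv = fgsm(clf, x_test[:1000], y_test[:1000])

# Compare classifier accuracy on adversarial inputs with and without
# the autoencoder placed in front of the classifier.
_, acc_adv = clf.evaluate(x_adv, y_test[:1000], verbose=0)
_, acc_def = clf.evaluate(ae.predict(x_adv, verbose=0),
                          y_test[:1000], verbose=0)
print(f"adversarial accuracy: {acc_adv:.3f}, with autoencoder: {acc_def:.3f}")
```

The defence requires no retraining of the classifier itself: the autoencoder is trained only on clean data and inserted as a filter, which is why this style of approach is attractive when the attacked model cannot easily be modified.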