Incrementing Adversarial Robustness with Autoencoding for Machine Learning Model Attacks
Chapter — Accepted version
Permanent link: https://hdl.handle.net/11250/2650281
Publication date: 2019
Original version (DOI): 10.1109/SIU.2019.8806432

Abstract
Machine learning is now in widespread use, and attacks on the machine-learning process have emerged alongside it. Such model attacks can cause misclassification, disrupt decision mechanisms, and evade filters. In this study, robustness against these attacks is demonstrated using autoencoding, with non-targeted attacks applied to a model trained on the MNIST dataset. Results and improvements are presented for the non-targeted attack, one of the most common and important attack methods.
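The defense idea the abstract describes — passing inputs through an autoencoder so that adversarial perturbations are projected away before classification — can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the toy data, dimensions, and perturbation scale below are assumptions chosen for illustration, and a linear projection onto a known subspace stands in for a trained linear autoencoder (to which such a model converges on data lying in a low-dimensional subspace).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for MNIST: clean samples lying exactly in a 5-dim
# subspace of R^50 (hypothetical data, for illustration only).
basis, _ = np.linalg.qr(rng.normal(size=(50, 5)))  # orthonormal columns
clean = rng.normal(size=(200, 5)) @ basis.T

# A linear autoencoder with a 5-unit bottleneck trained on such data
# converges to projection onto the data subspace; we use that
# projection directly as the "trained" autoencoder.
def autoencode(x):
    return (x @ basis) @ basis.T

# Stand-in for a non-targeted attack: a small additive perturbation
# with no preferred target class.
perturbed = clean + 0.3 * rng.normal(size=clean.shape)

# Autoencoding before classification removes the off-manifold
# component of the perturbation, moving inputs back toward clean data.
denoised = autoencode(perturbed)

err_before = np.linalg.norm(perturbed - clean, axis=1).mean()
err_after = np.linalg.norm(denoised - clean, axis=1).mean()
print(err_after < err_before)  # → True
```

In this sketch the autoencoder keeps only the 5 on-subspace components of a 50-dimensional perturbation, so the mean distance to the clean inputs drops sharply; the same intuition motivates using a denoising autoencoder in front of an MNIST classifier.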