dc.contributor.author | Catak, Ferhat Özgür | |
dc.contributor.author | Sivaslioglu, Samed | |
dc.contributor.author | Gul, Ensar | |
dc.date.accessioned | 2020-04-03T10:23:50Z | |
dc.date.available | 2020-04-03T10:23:50Z | |
dc.date.created | 2020-01-17T11:22:27Z | |
dc.date.issued | 2019 | |
dc.identifier.isbn | 978-1-7281-1904-5 | |
dc.identifier.uri | https://hdl.handle.net/11250/2650281 | |
dc.description.abstract | Machine learning is now widely used, and attacks targeting the machine learning process have emerged alongside it. Attacks on machine learning models can lead to outcomes such as misclassification, disruption of decision mechanisms, and evasion of filters. In this study, robustness against such attacks is demonstrated through autoencoding, using non-targeted attacks against a model trained on the MNIST dataset. Results and improvements are presented for the most common and important attack method, the non-targeted attack. | en_US |
dc.language.iso | eng | en_US |
dc.publisher | Institute of Electrical and Electronics Engineers (IEEE) | en_US |
dc.title | Incrementing Adversarial Robustness with Autoencoding for Machine Learning Model Attacks | en_US |
dc.type | Chapter | en_US |
dc.description.version | acceptedVersion | en_US |
dc.source.pagenumber | 4 | en_US |
dc.identifier.doi | 10.1109/SIU.2019.8806432 | |
dc.identifier.cristin | 1775618 | |
dc.description.localcode | © 2019 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. | en_US |
cristin.unitcode | 194,63,30,0 | |
cristin.unitname | Institutt for informasjonssikkerhet og kommunikasjonsteknologi | |
cristin.ispublished | true | |
cristin.fulltext | postprint | |
cristin.qualitycode | 1 | |