Properties of distributions of artificially generated images
dc.contributor.advisor | Крюкова, Галина | |
dc.contributor.author | Іванюк-Скульський, Богдан | |
dc.date.accessioned | 2020-12-06T14:05:05Z | |
dc.date.available | 2020-12-06T14:05:05Z | |
dc.date.issued | 2020 | |
dc.description.abstract | In recent years, machine learning and, in particular, deep learning (DL) models have improved their performance in various tasks, e.g., image classification, speech recognition, and natural language processing. However, even state-of-the-art models are vulnerable to so-called adversarial perturbations. Such a perturbation, applied to a correctly classified sample, is not visible to the human eye but leads to misclassification of the sample [5, 12, 13, 18, 19]. Clearly, this issue may have serious consequences in applications where safety and security are a priority, for example, autonomous driving. There have been recent attempts to explain this phenomenon, see e.g. [5], but a consistent theory is still missing. In this paper, we propose a new approach to adversarial image detection. Our approach relies on the assumption that an adversarial perturbation pushes a sample away from the manifold on which the correctly classified samples are concentrated. This allows us to use distributions of certain distances for detecting adversarial samples. (A minimal illustrative sketch of such a distance-based check is given after this record.) | uk_UA |
dc.identifier.uri | https://ekmair.ukma.edu.ua/handle/123456789/19010 | |
dc.language.iso | en | uk_UA |
dc.status | first published | uk_UA |
dc.subject | distribution | uk_UA |
dc.subject | artificially generated | uk_UA |
dc.subject | image | uk_UA |
dc.subject | bachelor thesis | uk_UA |
dc.title | Properties of distributions of artificially generated images | uk_UA |
dc.type | Other | uk_UA |
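
The abstract describes detection via distributions of distances to the manifold where correctly classified samples are concentrated, but the record does not spell out the statistic used. The Python sketch below is purely illustrative and rests on assumptions not stated in the record: feature vectors of correctly classified samples stand in for the manifold, Euclidean k-nearest-neighbour distances serve as the distance measure, and a quantile of clean-data scores gives the detection threshold. The function names (knn_distances, fit_threshold, is_adversarial) and all parameter choices are hypothetical, not the thesis's actual method.

```python
import numpy as np


def knn_distances(features, query, k=10):
    """Euclidean distances from `query` to its k nearest neighbours in `features`."""
    d = np.linalg.norm(features - query, axis=1)
    return np.sort(d)[:k]


def fit_threshold(clean_features, k=10, quantile=0.95):
    """Estimate a detection threshold from clean (correctly classified) samples.

    For each clean sample, compute its mean distance to the k nearest other
    clean samples; the chosen quantile of these scores is used as the threshold.
    """
    scores = []
    for i in range(len(clean_features)):
        others = np.delete(clean_features, i, axis=0)
        scores.append(knn_distances(others, clean_features[i], k).mean())
    return np.quantile(scores, quantile)


def is_adversarial(clean_features, sample, threshold, k=10):
    """Flag a sample whose mean k-NN distance to the clean set exceeds the threshold."""
    return knn_distances(clean_features, sample, k).mean() > threshold


if __name__ == "__main__":
    # Toy demonstration with synthetic feature vectors (assumed, not from the thesis).
    rng = np.random.default_rng(0)
    clean = rng.normal(size=(200, 32))            # stand-in for clean feature vectors
    tau = fit_threshold(clean, k=10)
    on_manifold = rng.normal(size=32)             # a sample near the clean cloud
    off_manifold = rng.normal(size=32) + 5.0      # a sample pushed away from it
    print(is_adversarial(clean, on_manifold, tau))   # likely False
    print(is_adversarial(clean, off_manifold, tau))  # likely True
```

In this toy setup, the choice of feature space matters as much as the distance statistic: in practice one would compute such distances on intermediate network representations rather than raw pixels, but that design choice is an assumption here, not a claim about the thesis.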