Properties of Distributions of Artificially Generated Images

Date
2020
Authors
Іванюк-Скульський, Богдан
Abstract
In recent years, machine learning and, in particular, deep learning (DL) models have improved their performance on various tasks, e.g., image classification, speech recognition, and natural language processing. However, even state-of-the-art models are vulnerable to so-called adversarial perturbations. Applied to a correctly classified sample, these perturbations are invisible to the human eye yet lead to misclassification of the sample [5, 12, 13, 18, 19]. Clearly, such an issue may have serious consequences in applications where safety and security are a priority, for example, autonomous driving. There have been recent attempts to explain this phenomenon, see e.g. [5], but a consistent theory is still missing. In this paper, we propose a new approach to adversarial image detection. Our approach relies on the assumption that an adversarial perturbation pushes a sample away from the manifold on which the correctly classified samples are concentrated. This allows us to use the distributions of certain distances for detecting adversarial samples.
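The detection idea stated in the abstract, measuring how far a sample lies from the manifold of correctly classified samples, can be illustrated with a minimal k-nearest-neighbour distance sketch in Python. This is an assumption-laden illustration, not the thesis's actual method: the functions knn_distance, calibrate_threshold, and is_adversarial are hypothetical names, the feature vectors are assumed to come from, e.g., the classifier's penultimate layer, and the "certain distances" used in the thesis may be defined differently.

import numpy as np

def knn_distance(sample_feat, ref_feats, k=10):
    # Mean Euclidean distance from a sample's feature vector to its
    # k nearest neighbours among clean (correctly classified) features.
    dists = np.linalg.norm(ref_feats - sample_feat, axis=1)
    return np.sort(dists)[:k].mean()

def calibrate_threshold(clean_feats, k=10, quantile=0.95):
    # Leave-one-out k-NN scores on held-out clean samples; a new sample
    # whose score exceeds this quantile is flagged as adversarial.
    scores = [knn_distance(f, np.delete(clean_feats, i, axis=0), k)
              for i, f in enumerate(clean_feats)]
    return np.quantile(scores, quantile)

def is_adversarial(sample_feat, ref_feats, threshold, k=10):
    # Adversarial perturbations are assumed to push samples away from
    # the clean-data manifold, inflating their distance scores.
    return knn_distance(sample_feat, ref_feats, k) > threshold

Calibrating the threshold as a high quantile of clean-sample scores fixes the detector's false-positive rate on clean data at roughly 1 - quantile by construction; any distance statistic whose distribution separates clean from perturbed samples could be substituted for the k-NN score above.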
Keywords
distribution, artificially generated, images, bachelor's thesis