Properties of distributions of artificially generated images

dc.contributor.advisor: Крюкова, Галина
dc.contributor.author: Іванюк-Скульський, Богдан
dc.date.accessioned: 2020-12-06T14:05:05Z
dc.date.available: 2020-12-06T14:05:05Z
dc.date.issued: 2020
dc.description.abstract: In recent years, machine learning and, in particular, deep learning (DL) models have improved their performance on various tasks, e.g., image classification, speech recognition, and natural language processing. However, even state-of-the-art models are vulnerable to so-called adversarial perturbations. Applied to a correctly classified sample, these perturbations are invisible to the human eye but lead to misclassification of the sample [5, 12, 13, 18, 19]. Clearly, such an issue may have serious consequences in applications where safety and security are a priority, for example, autonomous driving. There have been recent attempts to explain this phenomenon, see e.g. [5], but a consistent theory is still missing. In this paper, we propose a new approach to adversarial image detection. Our approach relies on the assumption that an adversarial perturbation pushes a sample away from the manifold on which correctly classified samples are concentrated. This allows us to use the distributions of certain distances to detect adversarial samples.
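For illustration only, a minimal sketch of the distance-based detection idea stated in the abstract: if clean, correctly classified samples concentrate near a manifold, then a sample's distances to its nearest clean neighbors (computed here in some feature space) should follow the clean-data distance distribution, while an adversarial sample should fall in its tail. The use of average k-nearest-neighbor distances, the value of k, and the quantile threshold are assumptions of this sketch, not the exact procedure of the thesis.

import numpy as np
from sklearn.neighbors import NearestNeighbors

def fit_detector(clean_features, k=10, quantile=0.99):
    # Index the clean (correctly classified) samples and estimate a threshold
    # from the distribution of their average k-nearest-neighbor distances.
    nn = NearestNeighbors(n_neighbors=k + 1).fit(clean_features)
    dists, _ = nn.kneighbors(clean_features)      # column 0 is the sample itself (distance 0)
    clean_scores = dists[:, 1:].mean(axis=1)
    threshold = np.quantile(clean_scores, quantile)
    return nn, threshold

def flag_adversarial(nn, threshold, test_features, k=10):
    # A sample pushed away from the clean-data manifold gets a large distance score.
    dists, _ = nn.kneighbors(test_features, n_neighbors=k)
    return dists.mean(axis=1) > threshold

In use, fit_detector would be run on features of clean training images and flag_adversarial on features of incoming images; the feature map itself (raw pixels or an intermediate layer of the classifier) is a free design choice in this sketch.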
dc.identifier.uri: https://ekmair.ukma.edu.ua/handle/123456789/19010
dc.language.iso: en
dc.status: first published
dc.subject: distribution
dc.subject: artificially generated
dc.subject: image
dc.subject: bachelor's thesis
dc.title: Properties of distributions of artificially generated images
dc.type: Other
Files
Original bundle
Name: Ivaniuk_Bakalavrska_robota.pdf
Size: 2.74 MB
Format: Adobe Portable Document Format
License bundle
Name: license.txt
Size: 7.54 KB
Format: Item-specific license agreed upon to submission