Semantic image segmentation using a Transformer architecture
dc.contributor.advisor | Швай, Надія | |
dc.contributor.author | Іванюк-Скульський, Богдан | |
dc.date.accessioned | 2024-04-10T11:17:40Z | |
dc.date.available | 2024-04-10T11:17:40Z | |
dc.date.issued | 2022 | |
dc.description.abstract | In this work we present a model that efficiently balances local representations obtained by convolution blocks and global representations obtained by transformer blocks. The proposed model outperforms the standard decoder architecture DeepLabV3 by at least 1% in Jaccard index while using fewer parameters; in the best case the improvement is 7%. As future work we plan to experiment with (1) pretraining on the MS COCO dataset and (2) hyperparameter search. | uk_UA |
dc.identifier.uri | https://ekmair.ukma.edu.ua/handle/123456789/28826 | |
dc.language.iso | en | uk_UA |
dc.relation.organisation | НаУКМА | uk_UA |
dc.status | first published | uk_UA |
dc.subject | AlexNet | uk_UA |
dc.subject | Transformer Encoder blocks | uk_UA |
dc.subject | Jaccard index | uk_UA |
dc.subject | DeepLabV3 | uk_UA |
dc.subject | master's thesis | uk_UA |
dc.title | Semantic image segmentation using a Transformer architecture | uk_UA |
dc.type | Other | uk_UA |