Face recognition in the video stream. Self-attention neural aggregation network
Date
2020
Authors
Проценко, Ігор
Abstract
Models based on self-attention mechanisms have been successful in analyzing temporal data and are widely used in the natural language domain. A new model architecture based on the self-attention mechanism is proposed for video face representation and recognition. Moreover, the proposed approach can be applied to videos containing a single identity as well as multiple identities; notably, existing aggregation approaches have not considered videos with multiple identities. The proposed approach uses existing models, e.g., ArcFace and MobileFaceNet, to obtain a face representation for each video frame, and the aggregation module produces an aggregated face representation vector for the video, taking into account the order of frames and their quality scores. Empirical results on IJB-C, a public dataset for video face recognition, show that the self-attention aggregation network (SAAN) outperforms naive average pooling. Moreover, a new multi-identity video dataset is proposed, built from the publicly available UMDFaces dataset and GIFs collected from Giphy. It is shown that SAAN produces a compact face representation for both single and multiple identities in a video. The source code is included in the attached archive.
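For illustration, a minimal sketch of the aggregation idea described above is given next, assuming PyTorch. The class name, the single nn.MultiheadAttention layer, the learned positional embedding, and the quality-weighted pooling are assumptions made for this sketch and are not taken from the attached source code.

# Minimal sketch of self-attention aggregation over per-frame face embeddings.
# All names, dimensions, and layer choices are illustrative assumptions.
import torch
import torch.nn as nn


class SelfAttentionAggregator(nn.Module):
    """Aggregates a sequence of frame embeddings into one video-level vector."""

    def __init__(self, embed_dim: int = 512, num_heads: int = 4, max_frames: int = 64):
        super().__init__()
        # Learned positional encoding so the order of frames is taken into account.
        self.pos_embedding = nn.Parameter(torch.zeros(1, max_frames, embed_dim))
        self.attention = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(embed_dim)

    def forward(self, frame_embeddings: torch.Tensor,
                quality_scores: torch.Tensor) -> torch.Tensor:
        # frame_embeddings: (batch, num_frames, embed_dim), e.g. ArcFace outputs
        # quality_scores:   (batch, num_frames), higher means a better-quality frame
        t = frame_embeddings.shape[1]
        x = frame_embeddings + self.pos_embedding[:, :t, :]
        attended, _ = self.attention(x, x, x)
        attended = self.norm(attended + x)
        # Weight frames by softmaxed quality scores before pooling.
        weights = torch.softmax(quality_scores, dim=1).unsqueeze(-1)
        video_embedding = (attended * weights).sum(dim=1)
        return nn.functional.normalize(video_embedding, dim=-1)


if __name__ == "__main__":
    frames = torch.randn(2, 16, 512)   # 2 videos, 16 frames, 512-d per-frame embeddings
    quality = torch.rand(2, 16)        # per-frame quality scores
    aggregator = SelfAttentionAggregator()
    video_vec = aggregator(frames, quality)
    # Naive average pooling baseline for comparison.
    baseline = nn.functional.normalize(frames.mean(dim=1), dim=-1)
    print(video_vec.shape, baseline.shape)  # torch.Size([2, 512]) for both

A usage note on the sketch: both the aggregated vector and the average-pooling baseline are L2-normalized so that they can be compared with cosine similarity, which is the usual protocol for face verification benchmarks such as IJB-C.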
Keywords
face recognition, video stream, neural aggregation network, self-attention, bachelor's thesis