
Deep Neural Network Augmentation: Generating Faces for Affect Analysis

International journal of computer vision, 2020-05, Vol.128 (5), p.1455-1484 [Peer Reviewed Journal]

© The Author(s) 2020, Springer. This work is published under the Creative Commons Attribution 4.0 License (https://creativecommons.org/licenses/by/4.0/); notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License. ISSN: 0920-5691; EISSN: 1573-1405; DOI: 10.1007/s11263-020-01304-3


  • Title:
    Deep Neural Network Augmentation: Generating Faces for Affect Analysis
  • Author: Kollias, Dimitrios ; Cheng, Shiyang ; Ververas, Evangelos ; Kotsia, Irene ; Zafeiriou, Stefanos
  • Subjects: Arousal ; Artificial Intelligence ; Artificial neural networks ; Augmentation ; Computer Imaging ; Computer Science ; Emotions ; Image Processing and Computer Vision ; Image reconstruction ; Neural networks ; Pattern Recognition ; Pattern Recognition and Graphics ; Performance enhancement ; Special Issue on Generating Realistic Visual Data of Human Behavior ; Synthesis ; Three dimensional models ; Vision
  • Is Part Of: International journal of computer vision, 2020-05, Vol.128 (5), p.1455-1484
  • Description: This paper presents a novel approach for synthesizing facial affect, either in terms of the six basic expressions (i.e., anger, disgust, fear, joy, sadness and surprise), or in terms of valence (i.e., how positive or negative an emotion is) and arousal (i.e., the intensity of the emotion's activation). The proposed approach accepts the following inputs: (i) a neutral 2D image of a person; (ii) a basic facial expression or a pair of valence-arousal (VA) emotional state descriptors to be generated, or a path of affect in the 2D VA space to be generated as an image sequence. To enable affect synthesis in terms of VA, 600,000 frames from the 4DFAB database were annotated with VA values. The affect synthesis is implemented by fitting a 3D Morphable Model to the neutral image, deforming the reconstructed face to impose the target affect, and blending the new face carrying that affect into the original image (a simplified sketch of this pipeline follows the record below). Qualitative experiments illustrate the generation of realistic images when the neutral image is sampled from fifteen well-known lab-controlled or in-the-wild databases, including Aff-Wild, AffectNet and RAF-DB; comparisons with generative adversarial networks (GANs) show the higher quality achieved by the proposed approach. Quantitative experiments are then conducted, in which the synthesized images are used for data augmentation when training deep neural networks to perform affect recognition over all databases; greatly improved performance is achieved in all cases when compared with state-of-the-art methods, as well as with GAN-based data augmentation.
  • Publisher: New York: Springer US
  • Language: English
  • Identifier: ISSN: 0920-5691
    EISSN: 1573-1405
    DOI: 10.1007/s11263-020-01304-3
  • Source: AUTh Library subscriptions: ProQuest Central
    Springer Nature OA Free Journals
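
The description above outlines a three-step synthesis pipeline: fit a 3D Morphable Model (3DMM) to the neutral image, deform the reconstructed face toward the target expression or VA state, and blend the re-rendered face back into the original image. The following minimal Python sketch illustrates that control flow under stated assumptions; the function names (fit_3dmm, synthesize_affect, blend_into_image), the linear blendshape model, the array shapes, and the alpha-blend step are all illustrative assumptions, not the paper's actual implementation.

```python
# Illustrative sketch of the affect-synthesis pipeline described in the abstract:
# 3DMM fitting -> expression deformation -> blending into the source image.
# Every function body here is a simplified stand-in, not the authors' method.

import numpy as np


def fit_3dmm(neutral_image: np.ndarray, identity_basis: np.ndarray) -> np.ndarray:
    """Hypothetical stand-in for 3DMM fitting: recover identity coefficients
    from the neutral 2D image (here trivially zeros, i.e., the mean face)."""
    return np.zeros(identity_basis.shape[1])


def synthesize_affect(mean_shape, identity_basis, id_coeffs,
                      expression_basis, expr_coeffs):
    """Assumed linear blendshape model: neutral reconstruction plus an
    expression offset. In the paper, expr_coeffs would be derived from a
    target basic expression or from a (valence, arousal) pair or path."""
    neutral = mean_shape + identity_basis @ id_coeffs
    return neutral + expression_basis @ expr_coeffs


def blend_into_image(original, rendered_face, mask):
    """Simplified alpha blend of the re-rendered face region into the
    original image; the actual blending in the paper is more careful."""
    return mask * rendered_face + (1.0 - mask) * original


if __name__ == "__main__":
    n_vertices, n_id, n_expr = 5000, 80, 29       # assumed model dimensions
    rng = np.random.default_rng(0)
    mean_shape = rng.standard_normal(n_vertices * 3)
    identity_basis = rng.standard_normal((n_vertices * 3, n_id))
    expression_basis = rng.standard_normal((n_vertices * 3, n_expr))

    neutral_image = np.zeros((128, 128, 3))       # placeholder input image
    id_coeffs = fit_3dmm(neutral_image, identity_basis)

    # e.g., coefficients for "joy", or looked up from a (valence, arousal) pair
    expr_coeffs = np.zeros(n_expr)
    expr_coeffs[0] = 1.0
    deformed_face = synthesize_affect(mean_shape, identity_basis, id_coeffs,
                                      expression_basis, expr_coeffs)

    rendered = np.zeros_like(neutral_image)       # stand-in for face rendering
    mask = np.zeros((128, 128, 1))
    mask[32:96, 32:96] = 1.0                      # face region to composite
    output = blend_into_image(neutral_image, rendered, mask)
    print(deformed_face.shape, output.shape)
```

Generating an image sequence for a path in VA space would, under the same assumptions, amount to evaluating synthesize_affect once per sampled point along the path and blending each rendered frame.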
