
DRIT++: Diverse Image-to-Image Translation via Disentangled Representations

International journal of computer vision, 2020-11, Vol.128 (10-11), p.2402-2417 [Peer Reviewed Journal]

© 2020 Springer Science+Business Media, LLC, part of Springer Nature · ISSN: 0920-5691 · EISSN: 1573-1405 · DOI: 10.1007/s11263-019-01284-z

  • Title:
    DRIT++: Diverse Image-to-Image Translation via Disentangled Representations
  • Author: Lee, Hsin-Ying ; Tseng, Hung-Yu ; Mao, Qi ; Huang, Jia-Bin ; Lu, Yu-Ding ; Singh, Maneesh ; Yang, Ming-Hsuan
  • Subjects: Artificial Intelligence ; Computer Imaging ; Computer Science ; Image Processing and Computer Vision ; Pattern Recognition ; Pattern Recognition and Graphics ; Special Issue on Generative Adversarial Networks for Computer Vision ; Vision
  • Is Part Of: International journal of computer vision, 2020-11, Vol.128 (10-11), p.2402-2417
  • Description: Image-to-image translation aims to learn the mapping between two visual domains. There are two main challenges for this task: (1) lack of aligned training pairs and (2) multiple possible outputs from a single input image. In this work, we present an approach based on disentangled representation for generating diverse outputs without paired training images. To synthesize diverse outputs, we propose to embed images onto two spaces: a domain-invariant content space capturing shared information across domains and a domain-specific attribute space. Our model takes the encoded content features extracted from a given input and attribute vectors sampled from the attribute space to synthesize diverse outputs at test time. To handle unpaired training data, we introduce a cross-cycle consistency loss based on disentangled representations. Qualitative results show that our model can generate diverse and realistic images on a wide range of tasks without paired training data. For quantitative evaluations, we measure realism with a user study and the Fréchet inception distance, and measure diversity with the perceptual distance metric, Jensen–Shannon divergence, and the number of statistically-different bins.
  • Publisher: New York: Springer US
  • Language: English
  • Identifier: ISSN: 0920-5691
    EISSN: 1573-1405
    DOI: 10.1007/s11263-019-01284-z
  • Source: ProQuest Central
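The abstract's core idea, encoding an image into a domain-invariant content representation and decoding it together with attribute vectors sampled from a domain-specific attribute space, can be sketched as follows. This is a minimal illustrative toy in PyTorch, not the authors' DRIT++ architecture; all module names, layer sizes, and the 8-dimensional attribute space are assumptions for demonstration.

```python
import torch
import torch.nn as nn

# Toy sketch of the two-space disentanglement described in the abstract.
# Sizes and names are illustrative assumptions, not the paper's architecture.

class ContentEncoder(nn.Module):
    """Maps an image to a domain-invariant content feature map."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 4, stride=2, padding=1), nn.ReLU(),
        )

    def forward(self, x):
        return self.net(x)

class Generator(nn.Module):
    """Decodes a content map conditioned on a domain-specific attribute vector."""
    def __init__(self, attr_dim=8):
        super().__init__()
        self.fc = nn.Linear(attr_dim, 32)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, content, attr):
        # Inject the attribute by adding its projection channel-wise
        # to the content feature map before decoding.
        h = content + self.fc(attr)[:, :, None, None]
        return self.net(h)

# Diverse outputs at test time: one content code from the input image,
# combined with several attribute vectors sampled from the attribute space.
E_c, G = ContentEncoder(), Generator()
x = torch.randn(1, 3, 32, 32)                      # a source-domain image
c = E_c(x)                                         # domain-invariant content
outs = [G(c, torch.randn(1, 8)) for _ in range(3)] # three diverse translations
print([tuple(o.shape) for o in outs])
```

Because the content code is fixed while the attribute vector varies, each output shares the input's structure but differs in domain-specific appearance, which is exactly why a single input can yield multiple plausible translations.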
