Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks

Abstract

This paper focuses on the problem of image-to-image translation with unpaired images. Image-to-image translation is a class of vision and graphics problems where the goal is to learn the mapping between an input image and an output image, traditionally using a training set of aligned image pairs. However, obtaining a large number of paired images can be quite expensive. The authors propose learning a translation from a source domain \(X\) to a target domain \(Y\) in the absence of paired examples. The goal is to learn a mapping \(G:X\rightarrow Y\) such that the distribution of \(G(X)\) is indistinguishable from the distribution of \(Y\), which is enforced with an adversarial loss. Because this mapping alone is highly under-constrained, the authors couple it with an inverse mapping \(F:Y\rightarrow X\) and a cycle-consistency loss that pushes \(F(G(X))\approx X\) (and, symmetrically, \(G(F(Y))\approx Y\)).
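To make the cycle-consistency term concrete, below is a minimal PyTorch-style sketch of how it could be computed; the function name, the generator callables `G` and `F`, and the weight `lam` are assumptions for illustration, not the authors' code.

```python
import torch

def cycle_consistency_loss(real_x, real_y, G, F, lam=10.0):
    """Illustrative sketch of an L1 cycle-consistency term (not the official implementation)."""
    # Forward cycle: x -> G(x) -> F(G(x)) should reconstruct x
    rec_x = F(G(real_x))
    # Backward cycle: y -> F(y) -> G(F(y)) should reconstruct y
    rec_y = G(F(real_y))
    # L1 reconstruction penalties in both directions, weighted by lam
    return lam * (torch.mean(torch.abs(rec_x - real_x)) +
                  torch.mean(torch.abs(rec_y - real_y)))
```

During training this term is added to the two adversarial losses, so each generator is rewarded both for fooling its discriminator and for producing translations that the other generator can invert.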


Formulation
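
As a sketch of the formulation summarized above, the CycleGAN paper writes the full objective as two adversarial (GAN) losses plus a weighted cycle-consistency term:

\[
\mathcal{L}(G,F,D_X,D_Y)=\mathcal{L}_{GAN}(G,D_Y,X,Y)+\mathcal{L}_{GAN}(F,D_X,Y,X)+\lambda\,\mathcal{L}_{cyc}(G,F),
\]

where \(D_X\) and \(D_Y\) are the discriminators for the two domains, \(\lambda\) controls the relative importance of the terms, and the cycle-consistency loss penalizes the \(L_1\) reconstruction error in both directions:

\[
\mathcal{L}_{cyc}(G,F)=\mathbb{E}_{x\sim p_{data}(x)}\big[\lVert F(G(x))-x\rVert_1\big]+\mathbb{E}_{y\sim p_{data}(y)}\big[\lVert G(F(y))-y\rVert_1\big].
\]

The translators \(G\) and \(F\) are trained to minimize this objective while the discriminators try to maximize it.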
