Recent work has shown local convergence of GAN training methods when the data and generator distributions are absolutely continuous. In this paper, we show that this requirement of absolute continuity is necessary: we describe a simple yet prototypical counterexample demonstrating that, in the more realistic case of distributions that are not absolutely continuous, unregularized GAN training is not always convergent.
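The flavor of counterexample referred to here can be illustrated with a minimal simulation. The sketch below is an assumption, not the paper's exact construction: it uses the Dirac-GAN setup (true data a point mass at 0, generator a point mass at `theta`, linear discriminator `D_psi(x) = psi * x`) with a WGAN-style objective `f(t) = t`, so the game value is `L(theta, psi) = theta * psi` with its unique equilibrium at `theta = psi = 0`.

```python
import math

def simulate(theta, psi, lr=0.1, steps=500):
    """Simultaneous gradient descent/ascent on L(theta, psi) = theta * psi."""
    radii = [math.hypot(theta, psi)]  # distance from the equilibrium (0, 0)
    for _ in range(steps):
        # Generator descends in theta, discriminator ascends in psi,
        # both evaluated at the current point -- simultaneous updates.
        theta, psi = theta - lr * psi, psi + lr * theta
        radii.append(math.hypot(theta, psi))
    return radii

radii = simulate(1.0, 1.0)
# Each update multiplies the distance from the equilibrium by
# sqrt(1 + lr**2) > 1, so the iterates spiral outward rather than
# converging -- unregularized training fails in this setting, where
# neither distribution is absolutely continuous.
print(f"initial radius: {radii[0]:.3f}, final radius: {radii[-1]:.3f}")
```

Because the updates are a rotation composed with a dilation, the divergence here is exact, not a numerical artifact: the distance from the equilibrium grows by a fixed factor at every step.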