
Research Review Notes

Summaries of academic research papers

InfoGAN: Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets


Idea

The motivation behind this work is to learn interpretable, disentangled representations in the latent space of a GAN, which would not otherwise exhibit these properties. This is achieved by maximizing the mutual information between a subset of the latent variables (the latent codes) and the observation produced by the generator.

Background

Representation learning learns a dense embedding of the entities in a data set, which can then be used for downstream tasks. It is an unsupervised form of feature extraction.

Generative Adversarial Networks
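The paper builds on the standard GAN framework, in which a generator $G$ maps noise $z \sim P_{\text{noise}}(z)$ to samples while a discriminator $D$ tries to tell generated samples from real data. As a refresher (this is the standard formulation, not specific to this paper), the two networks play the minimax game

$$\min_G \max_D V(D, G) = \mathbb{E}_{x \sim P_{\text{data}}}[\log D(x)] + \mathbb{E}_{z \sim P_{\text{noise}}}[\log(1 - D(G(z)))]$$

In a vanilla GAN there is no constraint on how $G$ uses the individual dimensions of $z$, so the learned latent space is typically entangled.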

Mutual Information
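Mutual information measures how much knowing one random variable reduces uncertainty about another. In terms of entropy,

$$I(X; Y) = H(X) - H(X \mid Y) = H(Y) - H(Y \mid X)$$

It is zero when $X$ and $Y$ are independent and maximal when one variable deterministically determines the other, which is what makes it a natural regularizer for tying latent codes to generated samples.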

Method
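InfoGAN splits the generator input into incompressible noise $z$ and a latent code $c$, and adds an information-theoretic regularizer to the GAN objective so that the generated sample $G(z, c)$ stays informative about $c$:

$$\min_G \max_D V_I(D, G) = V(D, G) - \lambda I(c; G(z, c))$$

Maximizing $I(c; G(z, c))$ directly requires the intractable posterior $P(c|x)$, so the paper instead maximizes a variational lower bound defined through an auxiliary distribution:

$$L_I(G, Q) = \mathbb{E}_{c \sim P(c),\, x \sim G(z, c)}[\log Q(c \mid x)] + H(c) \le I(c; G(z, c))$$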

where $Q(c|x)$ is an auxiliary distribution that approximates the true posterior $P(c|x)$. In practice, $Q$ is a neural network that shares most of its layers with the discriminator, adding only a small head that outputs the parameters of $Q(c|x)$.
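As a concrete illustration, here is a minimal PyTorch-style sketch of the variational term for a single categorical latent code. The names (`info_loss`, `q_logits`, `c_idx`, `lam`) are hypothetical and not from the paper; this is only a sketch of the idea, assuming $Q$ outputs category logits for generated samples.

```python
import torch
import torch.nn.functional as F

def info_loss(q_logits: torch.Tensor, c_idx: torch.Tensor) -> torch.Tensor:
    """Variational mutual-information term for one categorical latent code.

    q_logits: logits of the auxiliary head Q(c|x) evaluated on generated
              samples x = G(z, c), shape (batch, num_categories).
    c_idx:    indices of the sampled codes c fed to the generator, shape (batch,).

    Maximizing E[log Q(c|x)] is equivalent to minimizing this cross-entropy;
    the H(c) term in L_I is a constant for a fixed prior over c and is dropped.
    """
    return F.cross_entropy(q_logits, c_idx)

# Hypothetical usage when updating G and Q:
#   g_loss = adversarial_loss + lam * info_loss(q_logits, c_idx)
```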

Observations