Monday 23 July 2018

Doubly Stochastic Adversarial Autoencoder. (arXiv:1807.07603v1 [cs.LG])

Any autoencoder network can be turned into a generative model by imposing an arbitrary prior distribution on its hidden code vector. The Variational Autoencoder (VAE) [2] uses a KL divergence penalty to impose the prior, whereas the Adversarial Autoencoder (AAE) [1] uses a generative adversarial network (GAN) [3]. GAN trades the complexities of sampling algorithms for the complexities of searching for a Nash equilibrium in minimax games. Such minimax architectures are trained with the help of data examples and gradients flowing through a generator and an adversary. A straightforward modification of AAE is to replace the adversary with the maximum mean discrepancy (MMD) test [4, 5]. This replacement leads to a new type of probabilistic autoencoder, which is also discussed in our paper. We propose a novel probabilistic autoencoder in which the adversary of AAE is replaced with a space of stochastic functions. This replacement introduces a new source of randomness, which can be viewed as a continuous control for encouraging exploration. It prevents the adversary from fitting too closely to the generator and therefore leads to a more diverse set of generated samples.
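To make the MMD variant mentioned above concrete, here is a minimal NumPy sketch of a kernel two-sample penalty that could stand in for the adversary: it measures how far the encoder's codes are from samples drawn from the imposed prior. The Gaussian (RBF) kernel choice, the bandwidth `sigma`, and all function names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def rbf_kernel(x, y, sigma=1.0):
    # Gaussian (RBF) kernel matrix between the rows of x and y.
    # Assumption: an RBF kernel with fixed bandwidth; the paper may use another kernel.
    sq_dists = (
        np.sum(x ** 2, axis=1)[:, None]
        + np.sum(y ** 2, axis=1)[None, :]
        - 2.0 * x @ y.T
    )
    return np.exp(-sq_dists / (2.0 * sigma ** 2))

def mmd2(codes, prior_samples, sigma=1.0):
    # Biased estimate of squared MMD between encoder codes and prior samples;
    # small values mean the code distribution looks like the prior.
    k_xx = rbf_kernel(codes, codes, sigma)
    k_yy = rbf_kernel(prior_samples, prior_samples, sigma)
    k_xy = rbf_kernel(codes, prior_samples, sigma)
    return k_xx.mean() + k_yy.mean() - 2.0 * k_xy.mean()

# Usage: penalize the encoder so its codes match a standard Gaussian prior.
rng = np.random.default_rng(0)
codes = rng.normal(loc=0.5, size=(128, 8))   # stand-in for encoder outputs
prior = rng.standard_normal(size=(128, 8))   # samples from the imposed prior
print(mmd2(codes, prior))                    # penalty added to the reconstruction loss
```

The sketch keeps a fixed kernel for simplicity. One hedged reading of the stochastic-function adversary proposed in the paper is that the test functions themselves are resampled during training, which would inject the extra randomness the abstract describes; the details are in the paper itself.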



from cs updates on arXiv.org https://ift.tt/2LqC0WJ
