19.7 Alternative autoencoders

  • Variational autoencoders are a form of generative autoencoder: they learn a probabilistic coding distribution, so new instances that closely resemble the training data can be generated entirely from codings sampled from that distribution (a sampling sketch follows this list).

  • Adversarial autoencoders train two networks: (I) a generator network that reconstructs the inputs, just like a regular autoencoder, and (II) a discriminator network that judges whether a coding was produced by the encoder or drawn from a chosen prior distribution; its feedback pushes the codings toward that prior and thereby improves the generator (see the sketch after this list).

  • Contractive autoencoders, similar in spirit to denoising autoencoders, constrain the derivatives of the hidden layer(s) activations with respect to the inputs to be small, so that similar inputs map to similar codings (the penalty term is sketched after this list).

  • Winner-take-all autoencoders keep only the top X% of activations for each neuron across the training batch and set the rest to zero, which produces sparse codings (sketched after this list).

  • Stacked convolutional autoencoders learn visual features by reconstructing images passed through convolutional layers, preserving the 2-D image structure instead of flattening the images into vectors (sketched after this list).
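
The following is a minimal sketch, assuming PyTorch, a 2-dimensional coding space, and an already-trained decoder, of how a variational autoencoder generates new instances: codings are sampled from the prior and decoded, with no input image involved. The layer sizes and the 28x28 output shape are illustrative choices, not part of the original text.

```python
import torch
import torch.nn as nn

coding_size = 2  # illustrative size of the latent (coding) vector

# Stand-in for a decoder that was trained as part of a variational autoencoder.
decoder = nn.Sequential(
    nn.Linear(coding_size, 128), nn.ReLU(),
    nn.Linear(128, 28 * 28), nn.Sigmoid(),
)

# New instances are generated entirely from the coding distribution:
# draw codings from the prior N(0, I) and decode them.
codings = torch.randn(6, coding_size)
generated_images = decoder(codings).reshape(6, 28, 28)
```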
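
Next, a minimal sketch, again assuming PyTorch, of the adversarial part of an adversarial autoencoder: the discriminator is trained to tell prior samples from encoder codings, and the encoder is trained to fool it, which pushes its codings toward the prior. The reconstruction loss of the generator path is omitted, and all sizes are illustrative.

```python
import torch
import torch.nn as nn

coding_size = 8
encoder = nn.Sequential(nn.Linear(28 * 28, 128), nn.ReLU(),
                        nn.Linear(128, coding_size))
discriminator = nn.Sequential(nn.Linear(coding_size, 64), nn.ReLU(),
                              nn.Linear(64, 1))
bce = nn.BCEWithLogitsLoss()

x = torch.rand(32, 28 * 28)                   # stand-in batch of inputs
prior_codings = torch.randn(32, coding_size)  # samples from the chosen prior ("real")
encoder_codings = encoder(x)                  # codings from the encoder ("fake")

# Discriminator: label prior samples 1 and encoder codings 0.
d_loss = (bce(discriminator(prior_codings), torch.ones(32, 1)) +
          bce(discriminator(encoder_codings.detach()), torch.zeros(32, 1)))

# Encoder (generator side): try to make its codings look like prior samples.
g_loss = bce(discriminator(encoder_codings), torch.ones(32, 1))
```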
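
For the contractive autoencoder, the constrained quantity is the Jacobian of the hidden activations with respect to the inputs. Below is a minimal sketch, assuming PyTorch and a single sigmoid hidden layer, of the squared Frobenius norm of that Jacobian, which is added as a penalty to the reconstruction loss; the layer sizes and the weight `lam` are illustrative.

```python
import torch
import torch.nn as nn

hidden = nn.Linear(28 * 28, 32)
x = torch.rand(16, 28 * 28)            # stand-in batch of inputs
h = torch.sigmoid(hidden(x))           # hidden-layer activations

# For a sigmoid layer, dh_i/dx_j = h_i * (1 - h_i) * W_ij, so the squared
# Frobenius norm of the Jacobian is sum_i (h_i * (1 - h_i))**2 * sum_j W_ij**2.
w_sq = (hidden.weight ** 2).sum(dim=1)   # one value per hidden unit
contractive_penalty = (((h * (1 - h)) ** 2) * w_sq).sum(dim=1).mean()
# total_loss = reconstruction_loss + lam * contractive_penalty  (lam: hyperparameter)
```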
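
The winner-take-all step can be written as a small batch-wise operation. This is a sketch, assuming PyTorch; `winner_take_all` and `keep_percent` are hypothetical names introduced here for illustration.

```python
import torch

def winner_take_all(activations: torch.Tensor, keep_percent: float) -> torch.Tensor:
    """activations: (batch_size, n_neurons); keeps each neuron's top activations."""
    batch_size = activations.shape[0]
    k = max(1, int(round(batch_size * keep_percent / 100)))
    # Per-neuron threshold: the k-th largest activation within the batch.
    thresholds = activations.topk(k, dim=0).values[-1]
    return torch.where(activations >= thresholds, activations,
                       torch.zeros_like(activations))

codings = torch.rand(64, 16)       # stand-in batch of coding-layer activations
sparse_codings = winner_take_all(codings, keep_percent=5.0)
```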
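
Finally, a minimal sketch, assuming PyTorch and 28x28 grayscale images, of a stacked convolutional autoencoder: convolutions downsample the image in the encoder and transposed convolutions upsample it back in the decoder, so the image is never flattened into a vector. The channel counts and kernel sizes are illustrative.

```python
import torch
import torch.nn as nn

encoder = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1),   # 28x28 -> 14x14
    nn.ReLU(),
    nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1),  # 14x14 -> 7x7
    nn.ReLU(),
)
decoder = nn.Sequential(
    nn.ConvTranspose2d(32, 16, kernel_size=3, stride=2,
                       padding=1, output_padding=1),         # 7x7 -> 14x14
    nn.ReLU(),
    nn.ConvTranspose2d(16, 1, kernel_size=3, stride=2,
                       padding=1, output_padding=1),         # 14x14 -> 28x28
    nn.Sigmoid(),
)

images = torch.rand(8, 1, 28, 28)            # stand-in batch of images
reconstructions = decoder(encoder(images))   # same shape as the inputs
```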