Is an auto-encoder basically the same thing as a Restricted Boltzmann Machine?
Though they may look similar, RBMs and auto-encoders are quite different beasts. An RBM is a generative model of the data in which the conditional distribution of the hidden units given the inputs happens to be factorial (i.e., entirely tractable). For Bernoulli hidden units, this conditional is basically a sigmoid, just like the hidden units of a neural network. That is where the similarities end. An auto-encoder is trained with backprop on a reconstruction cost (mean-squared error or cross-entropy), whereas an RBM is usually trained with Contrastive Divergence, an approximation to the true gradient of the log-likelihood (which cannot be computed tractably). The two training procedures are sketched below.
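To make the auto-encoder side concrete, here is a minimal NumPy sketch of one backprop step on the cross-entropy reconstruction cost, assuming a tied-weight sigmoid auto-encoder; the layer sizes, learning rate, and toy batch are made up purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes for illustration: 784 visible units, 128 hidden units.
n_vis, n_hid = 784, 128
W = rng.normal(0.0, 0.01, size=(n_vis, n_hid))  # tied weights: decoder uses W.T
b_h = np.zeros(n_hid)   # encoder (hidden) bias
b_v = np.zeros(n_vis)   # decoder (visible) bias

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def autoencoder_step(x, lr=0.1):
    """One backprop step on the cross-entropy reconstruction cost."""
    h = sigmoid(x @ W + b_h)            # encode
    x_hat = sigmoid(h @ W.T + b_v)      # decode: the reconstruction
    # For a sigmoid output with cross-entropy, the error at the output
    # pre-activation is simply (x_hat - x).
    d_v = x_hat - x                     # (batch, n_vis)
    d_h = (d_v @ W) * h * (1.0 - h)     # backprop through the encoder
    n = x.shape[0]
    # Tied weights: the gradient sums the encoder and decoder contributions.
    W[...] -= lr * (x.T @ d_h + d_v.T @ h) / n
    b_h[...] -= lr * d_h.mean(axis=0)
    b_v[...] -= lr * d_v.mean(axis=0)
    # Mean cross-entropy reconstruction cost, for monitoring.
    eps = 1e-10
    return -np.mean(np.sum(x * np.log(x_hat + eps)
                           + (1 - x) * np.log(1 - x_hat + eps), axis=1))

x = rng.random((32, n_vis))             # a toy batch of inputs in [0, 1]
for step in range(5):
    print(autoencoder_step(x))          # cost should decrease on this batch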
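And here is the RBM side: a minimal sketch of one CD-1 update for a Bernoulli-Bernoulli RBM, again with made-up sizes and a toy binary batch. Note how both conditionals, p(h|v) and p(v|h), are factorial sigmoids, which is exactly what makes the positive and negative phases cheap:

```python
import numpy as np

rng = np.random.default_rng(0)

n_vis, n_hid = 784, 128                 # hypothetical sizes, as above
W = rng.normal(0.0, 0.01, size=(n_vis, n_hid))
b_v = np.zeros(n_vis)                   # visible bias
b_h = np.zeros(n_hid)                   # hidden bias

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cd1_step(v0, lr=0.1):
    """One Contrastive Divergence (CD-1) update."""
    # Positive phase: the factorial conditional p(h=1 | v) is a sigmoid.
    ph0 = sigmoid(v0 @ W + b_h)
    h0 = (rng.random(ph0.shape) < ph0).astype(float)   # sample hidden units
    # Negative phase: one step of block Gibbs sampling.
    pv1 = sigmoid(h0 @ W.T + b_v)       # p(v=1 | h), also factorial
    v1 = (rng.random(pv1.shape) < pv1).astype(float)
    ph1 = sigmoid(v1 @ W + b_h)
    # CD-1 approximates the log-likelihood gradient by the difference of
    # correlations under the data and under the one-step reconstruction.
    n = v0.shape[0]
    W[...] += lr * (v0.T @ ph0 - v1.T @ ph1) / n
    b_v[...] += lr * (v0 - v1).mean(axis=0)
    b_h[...] += lr * (ph0 - ph1).mean(axis=0)
    # Reconstruction error: a rough but common progress monitor for CD.
    return np.mean((v0 - pv1) ** 2)

v = (rng.random((32, n_vis)) < 0.5).astype(float)  # toy binary batch
for step in range(5):
    print(cd1_step(v))
```

Notice that nothing in the CD update is a backprop through a reconstruction cost: the weight change compares data-driven and model-driven statistics, which is what makes the RBM a generative model rather than a reconstruction machine.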