Tied weights in autoencoders

Autoencoders with tied weights have some important advantages:

  1. Fewer parameters to learn, so the model is easier and faster to train.
  2. In the linear case it's equivalent to PCA - this may lead to a more geometrically adequate coding.
  3. Tied weights act as a form of regularisation.

But of course - they're not perfect: they may not be optimal when your data comes from a highly nonlinear manifold. Depending on the size of your data, I would try both approaches - with tied weights and without - if possible.
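
To make this concrete, here is a minimal sketch of weight tying in PyTorch (my own illustration - the class name, layer sizes, and activation are arbitrary assumptions, not something from the original question). The decoder reuses the transpose of the encoder's weight matrix, so only one weight matrix is learned:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TiedAutoencoder(nn.Module):
    def __init__(self, n_input, n_hidden):
        super().__init__()
        # A single weight matrix shared by encoder and decoder; only the biases differ.
        self.W = nn.Parameter(torch.randn(n_hidden, n_input) * 0.01)
        self.b_enc = nn.Parameter(torch.zeros(n_hidden))
        self.b_dec = nn.Parameter(torch.zeros(n_input))

    def forward(self, x):
        h = torch.sigmoid(F.linear(x, self.W, self.b_enc))  # encode: sigmoid(x W^T + b)
        return F.linear(h, self.W.t(), self.b_dec)          # decode with the transposed weights

model = TiedAutoencoder(n_input=784, n_hidden=64)
x = torch.rand(32, 784)                 # dummy batch
loss = F.mse_loss(model(x), x)          # reconstruction error
loss.backward()                         # W gets gradients from both the encode and decode paths
```

The untied variant would simply use two separate `nn.Linear` layers, which doubles the number of weights - that's the trade-off discussed above.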

UPDATE:

You also asked why the representation that comes from an autoencoder with tied weights might be better than one without. Of course it's not the case that such a representation is always better, but if the reconstruction error is sensible, then the different units in the coding layer represent something that can be seen as generators of orthogonal features explaining most of the variance in the data (exactly as the principal components in PCA do). This is why such a representation might be pretty useful in a further phase of learning.
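
As a small illustration of the PCA connection (my own numpy experiment under simplifying assumptions: linear activations, no biases, one hidden unit): a linear autoencoder with tied weights, trained by plain gradient descent on the loss ||X W^T W - X||^2, recovers the same direction that PCA reports as the first principal component.

```python
import numpy as np

rng = np.random.default_rng(0)
# Anisotropic 2-D data: most of the variance lies along the first axis.
X = rng.normal(size=(500, 2)) * np.array([3.0, 0.5])
X -= X.mean(axis=0)

# Linear tied-weight autoencoder with one hidden unit: reconstruction is X W^T W.
W = rng.normal(scale=0.1, size=(1, 2))
lr = 1e-3
for _ in range(2000):
    E = X @ W.T @ W - X                              # reconstruction error
    grad = (2.0 / len(X)) * W @ (E.T @ X + X.T @ E)  # gradient of the mean squared error
    W -= lr * grad

# Compare the learned direction with the first principal component from SVD.
_, _, Vt = np.linalg.svd(X, full_matrices=False)
w = W[0] / np.linalg.norm(W[0])
print(abs(w @ Vt[0]))  # close to 1.0: same direction up to sign
```

With more hidden units the rows of W converge to an orthonormal basis of the top principal subspace, though the individual units need not line up with the individual principal components.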