Python Gensim: how to calculate document similarity using the LDA model?

That depends on which similarity metric you want to use.

Cosine similarity is universally useful and built in:

sim = gensim.matutils.cossim(vec_lda1, vec_lda2)
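`cossim` expects both vectors in gensim's sparse `(id, weight)` format, as returned by `lda[bow]`. As a minimal sketch of the computation it performs (the topic weights below are made up for illustration):

```python
import math

# Toy sparse LDA vectors in gensim's (topic_id, weight) format.
# The weights are invented for illustration only.
vec_lda1 = [(0, 0.8), (1, 0.2)]
vec_lda2 = [(0, 0.6), (2, 0.4)]

def cossim(vec1, vec2):
    # Dot product over shared topic ids, divided by the vector norms.
    d1, d2 = dict(vec1), dict(vec2)
    dot = sum(w * d2[i] for i, w in d1.items() if i in d2)
    norm1 = math.sqrt(sum(w * w for _, w in vec1))
    norm2 = math.sqrt(sum(w * w for _, w in vec2))
    if norm1 == 0 or norm2 == 0:
        return 0.0
    return dot / (norm1 * norm2)

print(cossim(vec_lda1, vec_lda2))  # roughly 0.807
```

A score of 1.0 means the two documents have identical topic mixtures; 0.0 means they share no topics at all.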

Hellinger distance is useful for comparing probability distributions (such as LDA topic distributions). Note that it is a distance, not a similarity: lower means more similar.

import numpy as np

# Convert the sparse (topic_id, weight) vectors to dense arrays.
dense1 = gensim.matutils.sparse2full(lda_vec1, lda.num_topics)
dense2 = gensim.matutils.sparse2full(lda_vec2, lda.num_topics)
# Hellinger distance: 0 = identical distributions, 1 = maximally different.
dist = np.sqrt(0.5 * ((np.sqrt(dense1) - np.sqrt(dense2))**2).sum())
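To make the formula concrete, here is the same computation as a standalone function, applied to toy topic distributions (the distributions are made up for illustration):

```python
import numpy as np

def hellinger(p, q):
    # Hellinger distance between two discrete probability distributions:
    # sqrt(0.5 * sum((sqrt(p_i) - sqrt(q_i))^2)); bounded in [0, 1].
    p, q = np.asarray(p), np.asarray(q)
    return np.sqrt(0.5 * ((np.sqrt(p) - np.sqrt(q)) ** 2).sum())

# Toy per-document topic distributions (invented for illustration).
doc_a = [0.7, 0.2, 0.1]
doc_b = [0.6, 0.3, 0.1]
doc_c = [0.0, 0.0, 1.0]

print(hellinger(doc_a, doc_a))  # identical distributions -> 0.0
print(hellinger(doc_a, doc_b))  # similar -> small distance
print(hellinger(doc_a, doc_c))  # very different -> close to 1
```

Because it is bounded in [0, 1], you can turn it into a similarity with `1 - dist` if you need scores where higher means more similar.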

Don't know if this'll help, but I managed to get good results on document matching and similarity by using an actual document as the query.

from gensim import corpora, models, similarities

# Load the dictionary, corpus and LDA model produced during training.
dictionary = corpora.Dictionary.load('dictionary.dict')
corpus = corpora.MmCorpus("corpus.mm")
lda = models.LdaModel.load("model.lda")  # result from running online LDA (training)

# Build a similarity index over the LDA representation of the whole corpus.
index = similarities.MatrixSimilarity(lda[corpus])
index.save("simIndex.index")

# Use an actual document as the query.
docname = "docs/the_doc.txt"
with open(docname) as f:
    doc = f.read()
vec_bow = dictionary.doc2bow(doc.lower().split())
vec_lda = lda[vec_bow]

# Similarity of the query against every document in the corpus.
sims = index[vec_lda]
sims = sorted(enumerate(sims), key=lambda item: -item[1])
print(sims)

Each entry in sims is a (document_index, similarity_score) pair, so the similarity between the query document and every document in the corpus is the second element of each pair, with the pairs sorted best match first.
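The sorting step above works on plain Python values too; here is a self-contained sketch with made-up scores showing exactly what the (index, score) pairs look like:

```python
# Toy similarity scores, one per corpus document in corpus order,
# standing in for what index[vec_lda] would return (values invented).
sims = [0.12, 0.98, 0.45]

# Pair each score with its document index, then sort best match first.
ranked = sorted(enumerate(sims), key=lambda item: -item[1])
print(ranked)  # [(1, 0.98), (2, 0.45), (0, 0.12)]
```

So `ranked[0][0]` is the index of the most similar corpus document and `ranked[0][1]` is its score.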