Getting a Large List of Nouns (or Adjectives) in Python with NLTK; or Python Mad Libs

It's worth noting that WordNet is actually one of the corpora included in the NLTK downloader by default, so you could conceivably just use the solution you already found without having to reinvent any wheels.
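
If you haven't already fetched the corpus, a one-time download through NLTK's standard downloader is all it takes:

import nltk
nltk.download('wordnet')  # one-time fetch of the WordNet corpus into nltk_data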

For instance, you could just do something like this to get all noun synsets:

from nltk.corpus import wordnet as wn

for synset in wn.all_synsets('n'):
    print(synset)

# Or, equivalently
for synset in wn.all_synsets(wn.NOUN):
    print(synset)

That will give you every noun in WordNet, and it even groups them into synsets, so you can check that each word is being used in the sense you intend.
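
For instance, you can print each synset's gloss to see which sense of a word you're dealing with (using 'dog' here purely as a sample word):

for synset in wn.synsets('dog', pos=wn.NOUN):
    # name() is the synset identifier, definition() is its gloss
    print(synset.name(), '-', synset.definition())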

If you want to get them all into a list you can do something like the following (though this will vary quite a bit based on how you want to use the words and synsets):

all_nouns = []
for synset in wn.all_synsets('n'):
    # lemma_names() returns the words (lemmas) in this synset
    all_nouns.extend(synset.lemma_names())

Or as a one-liner:

all_nouns = [word for synset in wn.all_synsets('n') for word in synset.lemma_names()]
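
Since the title mentions Mad Libs, here is a minimal sketch of plugging random nouns into a template; the set() call removes lemma names that appear in more than one synset, and the template string is just an illustration:

import random

unique_nouns = sorted(set(all_nouns))  # dedupe repeated lemma names
template = "The {} jumped over the {}."
print(template.format(random.choice(unique_nouns),
                      random.choice(unique_nouns)))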

Alternatively, you could use the Moby Part-of-Speech Project data. Don't be fixated on using only what ships with NLTK by default: it would be little work to download the Moby files, and they are pretty easy to parse once loaded.
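
As a rough sketch, assuming you have downloaded the Moby part-of-speech file (commonly named mobypos.txt) and that it follows the documented format of a word, a field separator (character 215, '×'), and one-letter part-of-speech codes ('N' for noun, 'A' for adjective, and so on):

# Hypothetical path and format; adjust to match the file you download.
moby_nouns = []
with open('mobypos.txt', encoding='latin-1') as f:
    for line in f:
        word, sep, pos_codes = line.rstrip('\n').partition('\u00d7')
        if sep and 'N' in pos_codes:  # 'N' marks a noun in Moby's codes
            moby_nouns.append(word)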