Iterate over PySpark DataFrame columns
Have you tried something like this?

names = df.schema.names
for name in names:
    # count() returns an int, so convert it before concatenating
    print(name + ': ' + str(df.where(df[name].isNull()).count()))
You can see how this could be modified to put the information into a dictionary or some other, more useful format. For example:
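Here is a minimal, self-contained sketch that collects the per-column null counts into a dictionary (the sample data and column names x and y are hypothetical, invented purely for illustration):

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Hypothetical sample data, just for illustration
df = spark.createDataFrame(
    [(1, None), (2, "a"), (None, "b")],
    ["x", "y"],
)

# Map each column name to its count of null values
null_counts = {
    name: df.where(df[name].isNull()).count()
    for name in df.schema.names
}
print(null_counts)  # {'x': 1, 'y': 1}

One thing to keep in mind: each count() call is a separate Spark action, so this runs one job per column. That is fine for a modest number of columns, but for very wide DataFrames you may prefer to compute all the counts in a single aggregation.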