Print layer outputs in Keras during training

I think I have found an answer myself, although it is not strictly accomplished within Keras.

Basically, to access a layer's output during training, you need to modify the computation graph by adding a print node.

A more detailed description can be found in this StackOverflow question:
How can I print the intermediate variables in the loss function in TensorFlow and Keras?

I will quote an example here. Say you would like to have your loss printed at every step; you need to define your custom loss function accordingly.

For the Theano backend:

import theano
from keras import backend as K

def custom_loss(y_true, y_pred):
    diff = y_pred - y_true
    diff = theano.printing.Print('shape of diff', attrs=['shape'])(diff)
    return K.square(diff)

For the TensorFlow backend:

import tensorflow as tf
from keras import backend as K

def custom_loss(y_true, y_pred):
    diff = y_pred - y_true
    diff = tf.Print(diff, [tf.shape(diff)])
    return K.square(diff)
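Note that tf.Print was removed in TensorFlow 2 in favor of tf.print, which prints as a side effect rather than returning a pass-through tensor. A minimal sketch of the same idea on TF2 (the function name custom_loss is just illustrative):

```python
import tensorflow as tf
from tensorflow.keras import backend as K

def custom_loss(y_true, y_pred):
    diff = y_pred - y_true
    # tf.print runs as a side effect each time the loss is evaluated
    tf.print("shape of diff:", tf.shape(diff))
    return K.square(diff)
```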

Outputs of other layers can be accessed similarly.
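If you only need to inspect a layer's output outside the loss function, an alternative to the print-node trick is to build a sub-model that exposes that layer. A sketch, assuming tf.keras and an illustrative two-layer model (your architecture and layer index will differ):

```python
import numpy as np
import tensorflow as tf

# A small illustrative model; your own architecture will differ
model = tf.keras.Sequential([
    tf.keras.layers.Dense(4, activation='relu', input_shape=(3,)),
    tf.keras.layers.Dense(1),
])

# A sub-model that exposes the first hidden layer's output
probe = tf.keras.Model(inputs=model.input, outputs=model.layers[0].output)

x = np.random.rand(2, 3).astype('float32')
hidden = probe.predict(x, verbose=0)
print(hidden.shape)  # (2, 4)
```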

There is also a nice video tutorial from Google about using tf.Print():
Using tf.Print() in TensorFlow


If you want more detail on each neuron, you can retrieve a layer's weights and biases like this:

weights = model.layers[0].get_weights()[0]
biases = model.layers[0].get_weights()[1]

Index 0 of the get_weights() list holds the weights and index 1 holds the biases.

You can also do this for every layer:

for layer in model.layers:
    weights = layer.get_weights() # list of numpy arrays

After each round of training, if you access each layer, note its dimensions, and extract the weights and biases into NumPy arrays, you should be able to visualize how the neurons change over the course of training.
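One way to automate this is a callback that reports weight statistics at the end of every epoch. A minimal sketch using tf.keras and LambdaCallback (the model, data, and report_weights helper are all illustrative):

```python
import numpy as np
import tensorflow as tf

# Tiny illustrative model; substitute your own
model = tf.keras.Sequential([tf.keras.layers.Dense(2, input_shape=(3,))])
model.compile(optimizer='sgd', loss='mse')

def report_weights(epoch, logs):
    # Pull the first layer's parameters as NumPy arrays
    weights, biases = model.layers[0].get_weights()
    print('epoch %d: mean weight %.4f, mean bias %.4f'
          % (epoch, weights.mean(), biases.mean()))

x = np.random.rand(8, 3)
y = np.random.rand(8, 2)
model.fit(x, y, epochs=2, verbose=0,
          callbacks=[tf.keras.callbacks.LambdaCallback(on_epoch_end=report_weights)])
```

From here it is a short step to dumping the arrays to disk or plotting them after each epoch.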

Hope it helps.