How do you get the names of the TensorFlow output nodes in a Keras model?

You can use Keras model.summary() to get the name of the last layer.

If model.outputs is not empty, you can get the node names via:

[node.op.name for node in model.outputs]
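As a minimal sketch (assuming the TF 1.x-style graph-mode API via tf.compat.v1; the tiny model and its layer sizes are illustrative, not from the original answer):

```python
# Minimal sketch: build a tiny Keras model and list its output node names.
# Assumes TF 1.x-style graph mode; the model itself is an illustrative assumption.
import tensorflow as tf

tf.compat.v1.disable_eager_execution()  # graph mode, so tensors carry op names
from tensorflow import keras

model = keras.Sequential([
    keras.layers.Dense(4, activation="relu", input_shape=(8,)),
    keras.layers.Dense(3, activation="softmax"),
])

output_names = [node.op.name for node in model.outputs]
print(output_names)  # typically ends in the final activation op, e.g. ".../Softmax"
```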

You get the session via:

session = keras.backend.get_session()

Then you convert all training variables to constants via:

from tensorflow.python.framework.graph_util import convert_variables_to_constants

min_graph = convert_variables_to_constants(session, session.graph_def, [node.op.name for node in model.outputs])

After that, you can write a protobuf file via:

tensorflow.train.write_graph(min_graph, "/logdir/", "file.pb", as_text=True)

The output_node_names should contain the names of the graph nodes you intend to use for inference (e.g. softmax). They are used to extract the sub-graph that is needed for inference. It may be useful to look at freeze_graph_test.


You can also use the TensorFlow utility summarize_graph to find possible output nodes. From the official documentation:

Many of the transforms that the tool supports need to know what the input and output layers of the model are. The best source for these is the model training process, where for a classifier the inputs will be the nodes that receive the data from the training set, and the output will be the predictions. If you're unsure, the summarize_graph tool can inspect the model and provide guesses about likely input and output nodes, as well as other information that's useful for debugging.

It just needs the saved graph .pb file as input. Check the documentation for an example.
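For reference, the invocation from a TensorFlow source checkout looks roughly like this (the --in_graph path is an assumption; adjust it to your saved graph):

```shell
# Build and run summarize_graph from a TensorFlow source tree (hypothetical paths).
bazel build tensorflow/tools/graph_transforms:summarize_graph
bazel-bin/tensorflow/tools/graph_transforms/summarize_graph \
    --in_graph=/logdir/file.pb
```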


If output nodes are not explicitly specified when constructing a model in Keras, you can print them out like this:

import tensorflow as tf

for n in tf.get_default_graph().as_graph_def().node:
    print(n.name)

Then all you need to do is find the right one, whose name is often similar to that of the activation function. You can then use that string as the value of output_node_names in the freeze_graph function.
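As a sketch of that search (graph mode via tf.compat.v1 is assumed, and filtering on the "Softmax" substring is just a heuristic, not part of the original answer):

```python
# Sketch: enumerate all node names in the default graph and filter likely outputs.
# Assumes TF 1.x-style graph mode; the "Softmax" substring filter is a heuristic.
import tensorflow as tf

tf.compat.v1.disable_eager_execution()
from tensorflow import keras

model = keras.Sequential([
    keras.layers.Dense(2, activation="softmax", input_shape=(4,)),
])

all_names = [n.name for n in tf.compat.v1.get_default_graph().as_graph_def().node]

# Output nodes are often named after the final activation op.
candidates = [name for name in all_names if "Softmax" in name]
print(candidates)
```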