
Exporting a Keras model to TensorFlow, and loading the exported pb model

1 Exporting a Keras model as a TensorFlow model

https://github.com/amir-abdi/keras_to_tensorflow/blob/master/keras_to_tensorflow.ipynb
constant_graph = graph_util.convert_variables_to_constants(sess, sess.graph.as_graph_def(), pred_node_names)
pred_node_names holds the output node name(s) of the Keras model; they can be obtained from model.output.name (note that this is a tensor name such as "dense_1/Softmax:0", while convert_variables_to_constants expects the node name without the ":0" suffix).
graph_io.write_graph(constant_graph, output_fld, output_graph_name, as_text=False)
Function definition:
write_graph(
    graph_or_graph_def,
    logdir,
    name,
    as_text=True
)
Returns: the path of the output proto file.
The graph is written as a binary proto unless as_text is True
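For reference, a minimal sketch of the call, reusing constant_graph, output_fld, and output_graph_name from the snippet above ('graph.pbtxt' is a placeholder file name); according to the documentation just quoted, the return value is the path of the written file:

from tensorflow.python.framework import graph_io

# binary proto (the usual .pb file); the returned value is its full path
pb_path = graph_io.write_graph(constant_graph, output_fld, output_graph_name, as_text=False)
print(pb_path)

# as_text=True writes the same graph as a human-readable text proto instead
graph_io.write_graph(constant_graph, output_fld, 'graph.pbtxt', as_text=True)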
Note:
Set the Keras learning_phase to 0, which puts the model in test (inference) mode:
from keras import backend as K
K.set_learning_phase(0)  
# all new operations will be in test mode from now on
# serialize the model and get its weights, for quick re-building
config = previous_model.get_config()
weights = previous_model.get_weights()
# re-build a model where the learning phase is now hard-coded to 0
from keras.models import model_from_config
new_model = model_from_config(config)
new_model.set_weights(weights)
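Putting the pieces together, here is a hedged end-to-end sketch of the export described above; the file names my_model.h5, output/ and frozen_model.pb are placeholders, not from the original post:

from keras import backend as K
from keras.models import load_model
from tensorflow.python.framework import graph_util, graph_io

# 1. switch to test mode before the model graph is built
K.set_learning_phase(0)

# 2. load (or re-build) the model; learning_phase is now hard-coded to 0
model = load_model('my_model.h5')  # placeholder path

# 3. output node names = output tensor names without the ":0" suffix
pred_node_names = [out.name.split(':')[0] for out in model.outputs]

# 4. freeze: replace variables with constants in the Keras backend session
sess = K.get_session()
constant_graph = graph_util.convert_variables_to_constants(
    sess, sess.graph.as_graph_def(), pred_node_names)

# 5. write the frozen graph as a binary .pb file
graph_io.write_graph(constant_graph, 'output', 'frozen_model.pb', as_text=False)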
This shows that the model written by write_graph is determined by the outputs you pass in, i.e. different outputs correspond to different models.
graph_def = g_2.as_graph_def()
tf.train.write_graph(graph_def, export_dir, 'expert-graph.pb', as_text=False)
Guess: this approach only preserves the graph structure.
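The guess holds: a GraphDef written this way records the operations but not the values stored in tf.Variable nodes, so the weights must either be frozen first (as in section 1) or saved separately. A minimal sketch with placeholder names, pairing the structure-only .pb with a checkpoint:

import tensorflow as tf

g_2 = tf.Graph()
with g_2.as_default():
    # a toy variable so the graph has some state; name and shape are arbitrary
    w = tf.Variable(tf.zeros([2, 2]), name='w')
    saver = tf.train.Saver()
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        # structure only: the .pb records the ops, not the variable values
        tf.train.write_graph(g_2.as_graph_def(), 'export_dir', 'expert-graph.pb', as_text=False)
        # the values have to be kept separately, e.g. in a checkpoint
        saver.save(sess, 'export_dir/model.ckpt')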
2 Loading the exported pb model
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)

# output_graph_path is the path of the exported .pb file
with open(output_graph_path, "rb") as f:
    output_graph_def = tf.GraphDef()
    output_graph_def.ParseFromString(f.read())
    _ = tf.import_graph_def(output_graph_def, name="")

with tf.Session() as sess:
    # a frozen graph has no variables, so no initializer needs to be run
    # the name depends on the model's input name; "input" is the input tensor here
    input_x = sess.graph.get_tensor_by_name("input:0")
    print(input_x)
    # the name depends on the model's output name; "output" is the output tensor here
    output = sess.graph.get_tensor_by_name("output:0")
    print(output)
    y_conv_2 = sess.run(output, {input_x: mnist.test.images})
    print("y_conv_2", y_conv_2)