How should a TensorFlow Serving client format its request data? Feeding TF.Example vs. Tensor
阿新 • Published: 2018-12-21
Saving the Model
When saving a model, you can choose the data format the model will accept at serving time by picking the appropriate export function. There are two main options: feeding raw tensors, or feeding serialized tf.Example data.
Method 1: build_parsing_serving_input_receiver_fn, feeding serialized Examples
```python
@tf_export('estimator.export.build_parsing_serving_input_receiver_fn')
def build_parsing_serving_input_receiver_fn(feature_spec,
                                            default_batch_size=None):
  """Build a serving_input_receiver_fn expecting fed tf.Examples.

  Creates a serving_input_receiver_fn that expects a serialized tf.Example
  fed into a string placeholder.  The function parses the tf.Example
  according to the provided feature_spec, and returns all parsed Tensors as
  features.

  Args:
    feature_spec: a dict of string to `VarLenFeature`/`FixedLenFeature`.
    default_batch_size: the number of query examples expected per batch.
      Leave unset for variable batch size (recommended).

  Returns:
    A serving_input_receiver_fn suitable for use in serving.
  """
  def serving_input_receiver_fn():
    """An input_fn that expects a serialized tf.Example."""
    serialized_tf_example = array_ops.placeholder(
        dtype=dtypes.string,
        shape=[default_batch_size],
        name='input_example_tensor')
    receiver_tensors = {'examples': serialized_tf_example}
    features = parsing_ops.parse_example(serialized_tf_example, feature_spec)
    return ServingInputReceiver(features, receiver_tensors)

  return serving_input_receiver_fn
```
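As a concrete sketch of using this function (the feature name `ft`, its shape, and the export path are all illustrative assumptions, and `estimator` stands for an already-trained `tf.estimator.Estimator`):

```python
import tensorflow as tf

# Illustrative feature spec: one fixed-length float feature named "ft".
feature_spec = {'ft': tf.io.FixedLenFeature(shape=[2], dtype=tf.float32)}

# Build the receiver fn; the exported graph will expose a string
# placeholder named 'input_example_tensor' under the key 'examples'.
serving_input_receiver_fn = (
    tf.estimator.export.build_parsing_serving_input_receiver_fn(feature_spec))

# With a trained estimator in hand, the export itself would be:
# estimator.export_saved_model('/tmp/exported_model', serving_input_receiver_fn)
```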
- parsing_ops.parse_example: parses serialized Example protos into a dict of tensors. "Serialized" here refers to a batch of independently serialized Example protos:
```
serialized = [
  features {
    feature { key: "ft" value { float_list { value: [1.0, 2.0] } } }
  },
  features {
    feature []
  },
  features {
    feature { key: "ft" value { float_list { value: [3.0] } } }
  }
]
```
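The batch above can be reproduced in a few lines of Python; since `ft` has a different length in each Example, it is parsed with a `VarLenFeature` and comes back as a `SparseTensor` (a sketch, with the feature name taken from the example above):

```python
import tensorflow as tf

def make_example(values):
    """Serialize one Example carrying a variable-length float feature 'ft'."""
    return tf.train.Example(features=tf.train.Features(feature={
        'ft': tf.train.Feature(float_list=tf.train.FloatList(value=values)),
    })).SerializeToString()

# A batch of three independently serialized Example protos, matching
# the documentation snippet (the second one has no 'ft' values at all).
serialized = [make_example([1.0, 2.0]), make_example([]), make_example([3.0])]

# Variable-length features parse into a SparseTensor.
features = tf.io.parse_example(
    serialized, {'ft': tf.io.VarLenFeature(tf.float32)})
```

Here `features['ft']` is a `SparseTensor` whose values are `[1.0, 2.0, 3.0]` spread across rows 0 and 2 of the batch.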
Important note: the official documentation does not spell out what format the client should actually send at request time. In practice, you cannot feed an ExampleProto directly; the client must first convert it into a TensorProto, serializing each ExampleProto to bytes and adding those bytes to the TensorProto's string_val field.

Method 2: build_raw_serving_input_receiver_fn, feeding raw tensors

def build_raw_serving_input_receiver_fn(features, default_batch_size=None)

With a model exported this way, the client packs the data directly into a TensorProto and sends it to the server.
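A minimal client-side sketch of both conversions (the feature name `ft` and the batch contents are illustrative assumptions):

```python
import tensorflow as tf

def make_example(values):
    """Serialize one Example carrying a float feature 'ft' (illustrative)."""
    return tf.train.Example(features=tf.train.Features(feature={
        'ft': tf.train.Feature(float_list=tf.train.FloatList(value=values)),
    })).SerializeToString()

# Method 1 client: serialize each Example to bytes, then pack those bytes
# into a DT_STRING TensorProto (one string_val element per Example).
examples = [make_example([1.0, 2.0]), make_example([3.0])]
tensor_proto = tf.make_tensor_proto(examples, dtype=tf.string)

# Method 2 client: with a raw-tensor signature, skip Example entirely
# and send the numeric data as a float TensorProto.
raw_proto = tf.make_tensor_proto([[1.0, 2.0]], dtype=tf.float32)
```

In a gRPC client, `tensor_proto` would then typically be attached to a `PredictRequest` from the tensorflow_serving API via `request.inputs['examples'].CopyFrom(tensor_proto)`, matching the `'examples'` key that `build_parsing_serving_input_receiver_fn` registers.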