tensorflow - Inference and training versions of TF graphs


What is the recommended way to generate the graphs used at inference time vs. training time? Basically, during training the graph requires all sorts of components for data input and augmentation, including custom ops, while at inference time that whole subgraph can be replaced with a placeholder.

How should one typically set things up if the goal is to minimize the size of the inference-time model? One wouldn't want to have to link in all of the custom ops used for training.

My main concern is the "right" way of doing this. Is there a guarantee that tf.train.Saver() can restore a training checkpoint into an inference graph without compatibility issues?
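
To make the setup concrete, here is a minimal sketch of the pattern described above, assuming TF 1.x. The `build_model` helper, the placeholder shape, and the checkpoint path are illustrative, not part of the original question: the shared model-building code is reused, but the inference graph feeds a placeholder instead of the training input pipeline.

```python
import tensorflow as tf

def build_model(images):
    # Stand-in for the shared model definition. The variable names this
    # creates must match between the training and inference graphs.
    return tf.layers.dense(images, 10, name="logits")

# Inference graph: the training-time input subgraph (queues, augmentation,
# custom ops) is replaced by a single placeholder.
images = tf.placeholder(tf.float32, shape=[None, 784], name="images")
logits = build_model(images)

# tf.train.Saver matches variables to checkpoint entries by name, so a
# training checkpoint restores cleanly as long as the model variables
# were constructed identically in both graphs.
saver = tf.train.Saver()
with tf.Session() as sess:
    saver.restore(sess, "/tmp/model.ckpt")  # hypothetical checkpoint path
    # predictions = sess.run(logits, feed_dict={images: batch})
```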

A good starting point for ensuring the inference graph has no compatibility issues is to use a MetaGraph. There is a detailed tutorial available at https://www.tensorflow.org/programmers_guide/meta_graph. A sketch of the workflow appears after the list below.

  1. The primary reason for recommending it is the clear_devices flag available in tf.train.import_meta_graph, which can be used to remove device dependencies from checkpoints.
  2. Other benefits include the ability to retrieve hyper-parameters used during training, and to save interesting operations (such as the input placeholder) in a collection for easy retrieval.
  3. It also helps code re-usability: you can write an inference function that does not include the graph definition and instead uses the MetaGraph to load it. I found this helpful when the trained model is validated during training and used as a feature extractor too.
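
Here is a minimal sketch of that workflow, assuming TF 1.x; the model, collection names, and paths are illustrative. The training side stores the interesting tensors in collections and saves a checkpoint (tf.train.Saver.save writes the .meta file by default); the inference side needs no graph-definition code at all.

```python
import tensorflow as tf

# --- Training side: stash useful tensors in collections, then save. ---
x = tf.placeholder(tf.float32, shape=[None, 784], name="x")
logits = tf.layers.dense(x, 10, name="logits")
tf.add_to_collection("inputs", x)
tf.add_to_collection("outputs", logits)

saver = tf.train.Saver()
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    saver.save(sess, "/tmp/model.ckpt")  # also writes /tmp/model.ckpt.meta

# --- Inference side: rebuild the graph from the MetaGraph alone. ---
tf.reset_default_graph()
with tf.Session() as sess:
    # clear_devices=True strips the device placements that were recorded
    # at training time, so the graph loads on different hardware.
    restorer = tf.train.import_meta_graph("/tmp/model.ckpt.meta",
                                          clear_devices=True)
    restorer.restore(sess, "/tmp/model.ckpt")
    x = tf.get_collection("inputs")[0]
    logits = tf.get_collection("outputs")[0]
    # predictions = sess.run(logits, feed_dict={x: batch})
```

Note that this alone does not shrink the inference model: the MetaGraph still contains the training subgraph. It does, however, give you a clean, name-independent way to get at the input and output tensors, which is the prerequisite for then pruning or freezing the graph down to the inference path.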
