Tensorflow Hub: Fine-tune and evaluate

Question

Let's say that I want to fine tune one of the Tensorflow Hub image feature vector modules. The problem arises because in order to fine-tune a module, the following needs to be done:

module = hub.Module("https://tfhub.dev/google/imagenet/resnet_v2_50/feature_vector/3", trainable=True, tags={"train"})

Let's say that the module is Resnet50.

In other words, the module is imported with the trainable flag set as True and with the train tag. Now, in case I want to validate the model (perform inference on the validation set in order to measure the performance of the model), I can't switch off the batch-norm because of the train tag and the trainable flag.
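For context on why that switch matters: in training mode, batch norm normalizes each batch with its own statistics and updates the layer's moving averages as a side effect; at inference it must instead use the frozen moving averages. A minimal NumPy sketch of that behavior (a simplification, not TensorFlow's actual implementation):

```python
import numpy as np

def batch_norm(x, gamma, beta, moving_mean, moving_var,
               training, momentum=0.99, eps=1e-3):
    """Normalize x; update the moving statistics only in training mode."""
    if training:
        mean, var = x.mean(axis=0), x.var(axis=0)
        # The side effect that must be switched OFF during validation:
        moving_mean[:] = momentum * moving_mean + (1 - momentum) * mean
        moving_var[:] = momentum * moving_var + (1 - momentum) * var
    else:
        mean, var = moving_mean, moving_var
    return gamma * (x - mean) / np.sqrt(var + eps) + beta

rng = np.random.default_rng(0)
x = rng.normal(size=(32, 4))
mm, mv = np.zeros(4), np.ones(4)
batch_norm(x, 1.0, 0.0, mm, mv, training=True)   # moving stats get updated
batch_norm(x, 1.0, 0.0, mm, mv, training=False)  # moving stats left untouched
```

With the `"train"` tag and `trainable=True`, the module is locked into the `training=True` branch, which is exactly the problem during validation.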

Please note that this question has already been asked here: Tensorflow hub fine-tune and evaluate, but no answer has been provided.

I have also opened a Github issue about this.

Looking forward to your help!

Answer

With hub.Module for TF1, the situation is as you say: either the training or the inference graph is instantiated, and there is no good way to import both and share variables between them in a single tf.Session. That's informed by the approach used by Estimators and many other training scripts in TF1 (esp. distributed ones): there's a training Session that produces checkpoints, and a separate evaluation Session that restores model weights from them. (The two will likely also differ in the dataset they read and the preprocessing they perform.)
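The two-session pattern can be sketched as follows. This is a hypothetical stand-in: a single variable replaces the hub module, whereas the real pattern would build the training graph with `hub.Module(url, trainable=True, tags={"train"})` and the evaluation graph with a plain `hub.Module(url)`, sharing weights only through the checkpoint:

```python
import os
import tempfile
import tensorflow as tf

tf.compat.v1.disable_eager_execution()
ckpt = os.path.join(tempfile.mkdtemp(), "model.ckpt")

# Training graph + session: produces a checkpoint.
train_graph = tf.Graph()
with train_graph.as_default():
    w = tf.compat.v1.get_variable("w", initializer=1.0)
    train_op = w.assign_add(1.0)  # stand-in for an actual training step
    saver = tf.compat.v1.train.Saver()
with tf.compat.v1.Session(graph=train_graph) as sess:
    sess.run(tf.compat.v1.global_variables_initializer())
    sess.run(train_op)
    saver.save(sess, ckpt)

# Separate evaluation graph + session: restores the trained weights.
eval_graph = tf.Graph()
with eval_graph.as_default():
    w_eval = tf.compat.v1.get_variable("w", initializer=0.0)
    saver = tf.compat.v1.train.Saver()
with tf.compat.v1.Session(graph=eval_graph) as sess:
    saver.restore(sess, ckpt)
    print(sess.run(w_eval))  # the trained value, restored by variable name
```

The two graphs never share a `tf.Session`; the checkpoint is the only channel between them.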

With TF2 and its emphasis on Eager mode, this has changed. TF2-style Hub modules (as found at https://tfhub.dev/s?q=tf2-preview) are really just TF2-style SavedModels, and these don't come with multiple graph versions. Instead, the __call__ function on the restored top-level object takes an optional training=... parameter if the train/inference distinction is required.
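In TF2 the batch-norm state can therefore live in one object and be switched per call. A minimal sketch using a plain `tf.keras.layers.BatchNormalization` layer as a stand-in for a restored hub module (the analogous real call would be `layer = hub.KerasLayer(url, trainable=True)` followed by `layer(images, training=...)`):

```python
import numpy as np
import tensorflow as tf

# Stand-in for a restored TF2 SavedModel / hub.KerasLayer: one object,
# with train vs. inference chosen by the per-call `training` argument.
bn = tf.keras.layers.BatchNormalization()
x = tf.constant(np.random.default_rng(0).normal(size=(8, 4)),
                dtype=tf.float32)

bn(x, training=True)                 # batch stats; moving stats updated
frozen = bn.moving_mean.numpy().copy()
bn(x, training=False)                # moving stats used, NOT updated
```

No separate graphs, no tags: the same weights serve both modes, so a validation pass is just a call with `training=False`.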

With this, TF2 should match your expectations. See the interactive demo tf2_image_retraining.ipynb and the underlying code in tensorflow_hub/keras_layer.py for how it can be done. The TF Hub team is working on making a more complete selection of modules available for the TF2 release.
