How to get the global_step when restoring checkpoints in Tensorflow?


Problem


I'm saving my session state like so:

self._saver = tf.train.Saver()
self._saver.save(self._session, '/network', global_step=self._time)


When I later restore I want to get the value of the global_step for the checkpoint I restore from. This is in order to set some hyper parameters from it.


The hacky way to do this would be to run through and parse the file names in the checkpoint directory. But surely there has to be a better, built-in way to do this?

Answer


The general pattern is to have a global_step variable to keep track of steps:

global_step = tf.Variable(0, name='global_step', trainable=False)
train_op = optimizer.minimize(loss, global_step=global_step)

Then you can save with:

saver.save(sess, save_path, global_step=global_step)

When you restore, the value of global_step is restored as well.
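As a minimal end-to-end sketch (assuming TF 1.x-style APIs via `tf.compat.v1`, and using a temporary directory rather than the question's `/network` path): save a checkpoint with `global_step`, then restore it into a fresh graph and read the step value back from the restored variable instead of parsing file names.

```python
# Sketch: save a checkpoint with a global_step, then restore it and
# read the step value back. Paths and step values here are illustrative.
import os
import tempfile
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

ckpt_dir = tempfile.mkdtemp()
ckpt_prefix = os.path.join(ckpt_dir, 'network')

# --- training side: advance global_step and save ---
g1 = tf.Graph()
with g1.as_default():
    global_step = tf.Variable(0, name='global_step', trainable=False)
    advance = tf.assign_add(global_step, 7)  # stands in for optimizer.minimize(...)
    saver = tf.train.Saver()
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        sess.run(advance)
        saver.save(sess, ckpt_prefix, global_step=global_step)

# --- restore side: rebuild the variable and restore its value ---
g2 = tf.Graph()
with g2.as_default():
    global_step = tf.Variable(0, name='global_step', trainable=False)
    saver = tf.train.Saver()
    with tf.Session() as sess:
        saver.restore(sess, tf.train.latest_checkpoint(ckpt_dir))
        restored_step = sess.run(global_step)  # value comes from the checkpoint

print(restored_step)
```

`tf.train.latest_checkpoint` resolves the most recent checkpoint prefix in the directory, so you never have to parse the `-<step>` suffix out of the file names yourself; the restored `global_step` variable can then seed your hyperparameters.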

