Tensorflow saver seems to overwrite existing saved variable files


Problem description


I am writing neural network code in TensorFlow, and I made it save the variables every 1000 epochs. So I expect the variables of epoch 1001, epoch 2001, epoch 3001, ... to be saved to different files. The code below is the save function I wrote.

def save(self, epoch):
    model_name = "MODEL_save"
    checkpoint_dir = os.path.join(model_name)

    if not os.path.exists(checkpoint_dir):
        os.makedirs(checkpoint_dir)
    # Numbered checkpoint: one history entry per call (model-1001, model-2001, ...)
    self.saver.save(self.sess, checkpoint_dir + '/model', global_step=epoch)
    # Un-numbered checkpoint: always holds the latest variables
    self.saver.save(self.sess, checkpoint_dir + '/model')
    print("path for saved %s" % checkpoint_dir)


This function saves twice each time it is called: once with 'global_step=epoch' to keep a history of the variables every 1000 epochs, and once without a step number so the latest variables are saved to a file with a fixed name. I call this function whenever the epoch condition is met, like below.

for epoch in xrange(self.m_total_epoch):  # Python 2; use range() in Python 3

    .... CODE FOR NEURAL NETWORK ....

    # Triggers at epochs 1001, 2001, 3001, ...
    if epoch % 1000 == 1 and epoch != 1:
        self.save(epoch)


Assuming the current epoch is 29326, I expect the directory to contain saved files for all of 1001, 2001, 3001, ... 29001. However, only a subset of the files is there: 26001, 27001, 28001, 29001. I checked that the same thing happens on other computers. This is different from what I expected. Why does it happen?

Recommended answer


tf.train.Saver has a max_to_keep argument in its constructor that keeps only the latest saved models. And this max_to_keep argument, somewhat surprisingly, has a default value of 5. So by default, you will only have the latest 5 models.


To keep all models, set this argument to None:

saver = tf.train.Saver(max_to_keep=None)
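The effect of this default can be illustrated with a small pure-Python sketch. This is not TensorFlow's actual implementation; it is a simplified, assumed model of max_to_keep-style rotation, where each save appends a checkpoint path and the oldest one is deleted once the limit is exceeded:

```python
import os

def rotate_checkpoints(paths, new_path, max_to_keep):
    """Append new_path to the kept-checkpoint list; if max_to_keep is
    not None and the list grows past it, drop (and delete) the oldest.
    Illustrative sketch of the idea behind tf.train.Saver's max_to_keep."""
    paths.append(new_path)
    if max_to_keep is not None and len(paths) > max_to_keep:
        oldest = paths.pop(0)
        if os.path.exists(oldest):
            os.remove(oldest)
    return paths

# Simulate saving at epochs 1001, 2001, ..., 29001 with the default limit of 5:
kept = []
for epoch in range(1001, 29002, 1000):
    rotate_checkpoints(kept, 'model-%d' % epoch, max_to_keep=5)
print(kept)  # only the 5 most recent checkpoints remain

# With max_to_keep=None, nothing is ever rotated out:
kept_all = []
for epoch in range(1001, 29002, 1000):
    rotate_checkpoints(kept_all, 'model-%d' % epoch, max_to_keep=None)
print(len(kept_all))  # all 29 checkpoints remain
```

Note also that the save() function in the question calls saver.save twice per epoch, so the un-numbered 'model' checkpoint plausibly occupies one of the five slots as well, which would explain why only four numbered files (26001 through 29001) survived rather than five.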

