How do I switch tf.train.Optimizers during training?


Question

I want to switch from Adam to SGD after a certain number of epochs. How do I do this smoothly so that the weights/gradients are passed over to the new optimizer?

Answer

Just define two optimizers and switch between them:

# Build one training op per optimizer; both update the same model weights.
sgd_optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)
adap_optimizer = tf.train.AdamOptimizer(learning_rate).minimize(cost)
...
for epoch in range(100):
  for (x, y) in zip(train_X, train_Y):
    # Use Adam for the first 50 epochs, then switch to plain SGD.
    optimizer = sgd_optimizer if epoch > 50 else adap_optimizer
    sess.run(optimizer, feed_dict={X: x, Y: y})

An optimizer only encapsulates the way gradients are applied to the tensors, and may hold just a few variables of its own (for example, Adam's moment accumulators). The model weights are not stored in the optimizer, so you can switch optimizers easily.
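
For completeness, here is a minimal self-contained sketch of the same approach, assuming TensorFlow 1.x. The toy linear-regression model (the placeholders, variables, and cost below) is illustrative and not part of the original answer:

import numpy as np
import tensorflow as tf  # assumes TensorFlow 1.x, where tf.train.*Optimizer exists

# Toy data and model (illustrative only).
train_X = np.linspace(0.0, 1.0, 20).astype(np.float32)
train_Y = 3.0 * train_X + 2.0

X = tf.placeholder(tf.float32)
Y = tf.placeholder(tf.float32)
w = tf.Variable(0.0)
b = tf.Variable(0.0)
cost = tf.reduce_mean(tf.square(w * X + b - Y))

learning_rate = 0.01
# Each minimize() call creates that optimizer's own slot variables
# (Adam's moment estimates, for example), but w and b are shared.
sgd_op = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)
adam_op = tf.train.AdamOptimizer(learning_rate).minimize(cost)

with tf.Session() as sess:
    # Initializes the model weights and both optimizers' slot variables.
    sess.run(tf.global_variables_initializer())
    for epoch in range(100):
        # Adam for the first 50 epochs, then plain SGD; the weights
        # simply keep their current values across the switch.
        train_op = adam_op if epoch <= 50 else sgd_op
        for (x, y) in zip(train_X, train_Y):
            sess.run(train_op, feed_dict={X: x, Y: y})
    print(sess.run([w, b]))  # should approach [3.0, 2.0]

Because both training ops are defined up front, tf.global_variables_initializer() also covers both optimizers' slot variables, so nothing needs to be re-initialized at the moment of the switch.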
