Is there a way to change a function's update list without re-compiling it in Theano?

Problem description

Indeed I want to change the learning rate at different periods of training. Something like:

for i in range(iter_num):
    learn_rate = i*alpha
    do_training(learn_rate,...)

Apparently, recompiling a new function for every iteration is going to be too slow, so I was wondering whether there is a better way to do this in Theano. Thanks!

Solution

You can make the learning rate a symbolic variable and pass it into the training function like this:

import numpy
import theano
import theano.tensor as tt


def compile(input_size, hidden_size, output_size):
    # Shared parameters of a one-hidden-layer network
    W_h = theano.shared(numpy.random.standard_normal(size=(input_size, hidden_size)).astype(theano.config.floatX))
    b_h = theano.shared(numpy.zeros((hidden_size,), dtype=theano.config.floatX))
    W_y = theano.shared(numpy.random.standard_normal(size=(hidden_size, output_size)).astype(theano.config.floatX))
    b_y = theano.shared(numpy.zeros((output_size,), dtype=theano.config.floatX))

    x = tt.matrix('x')
    z = tt.ivector('z')
    # The learning rate is a symbolic scalar, so it becomes an input of the
    # compiled function rather than a constant baked into the updates.
    learning_rate = tt.scalar()
    h = tt.tanh(theano.dot(x, W_h) + b_h)
    y = tt.nnet.softmax(theano.dot(h, W_y) + b_y)
    cost = tt.nnet.categorical_crossentropy(y, z).mean()
    # Plain gradient-descent updates, parameterized by the symbolic learning rate
    updates = [(p, p - learning_rate * tt.grad(cost, p)) for p in (W_h, b_h, W_y, b_y)]
    return theano.function([x, z, learning_rate], outputs=cost, updates=updates)


def main():
    input_size = 5
    hidden_size = 4
    output_size = 3
    train = compile(input_size, hidden_size, output_size)
    # One training step on a toy batch with a learning rate of 0.1
    print(train([[0, 1, 2, 3, 4], [5, 6, 7, 8, 9]], [1, 2], 0.1))


main()

Note that the training function now has three parameters; the third is the learning rate.
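
With the learning rate exposed as an input, the schedule from the question can be driven by an ordinary Python loop without any recompilation. A minimal sketch, assuming the train function from above; iter_num, alpha, and the toy batch are placeholder values:

# Minimal sketch: iter_num, alpha, and the toy batch are placeholders.
iter_num = 10
alpha = 0.01
inputs = [[0, 1, 2, 3, 4], [5, 6, 7, 8, 9]]
targets = [1, 2]

for i in range(iter_num):
    learn_rate = i * alpha                     # any schedule can go here
    cost = train(inputs, targets, learn_rate)  # same compiled function every iteration
    print(cost)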
