Weight and bias initialization in tensorflow


Problem description

I'm doing some electricity load forecasting in which I want to initialize the weights and biases. I have calculated the weights and biases using different algorithms and saved them in a file. I want to use that file and start my training with those weights and biases.

Here is the code I want to update.

import numpy as np
import tensorflow as tf

# RNN design
# (num_periods, state, x_batches, y_batches and x_batches_test come from
#  earlier data-preparation code that is not shown here.)
tf.reset_default_graph()

inputs = 1  #input vector size
hidden = 100    #number of hidden units in the RNN cell
output = 1  #output vector size

X = tf.placeholder(tf.float32, [None, num_periods, inputs])
y = tf.placeholder(tf.float32, [None, num_periods, output])


basic_cell = tf.contrib.rnn.BasicRNNCell(num_units=hidden, activation=tf.nn.relu)
rnn_output, states = tf.nn.dynamic_rnn(basic_cell, X, dtype=tf.float32)

learning_rate = 0.001   #small learning rate so we don't overshoot the minimum

stacked_rnn_output = tf.reshape(rnn_output, [-1, hidden])           #flatten to (batch * num_periods, hidden) for the dense layer
stacked_outputs = tf.layers.dense(stacked_rnn_output, output)        #specify the type of layer (dense)
outputs = tf.reshape(stacked_outputs, [-1, num_periods, output])          #shape of results

loss = tf.reduce_mean(tf.square(outputs - y))    #define the cost function which evaluates the quality of our model
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)          #Adam optimizer (a gradient descent variant)
training_op = optimizer.minimize(loss)          #train the result of the application of the cost_function                                 

init = tf.global_variables_initializer()           #initialize all the variables
epochs = 1000     #number of iterations or training cycles, includes both the feedforward and backpropagation passes
mape = []

def mean_absolute_percentage_error(y_true, y_pred): 
    y_true, y_pred = np.array(y_true), np.array(y_pred)
    return np.mean(np.abs((y_true - y_pred) / y_true)) * 100

y_pred = {'NSW': [], 'QLD': [], 'SA': [], 'TAS': [], 'VIC': []}

for st in state.values():
    print("State: ", st, end='\n')
    with tf.Session(config=tf.ConfigProto(log_device_placement=True)) as sess:
        init.run()
        for ep in range(epochs):
            sess.run(training_op, feed_dict={X: x_batches[st], y: y_batches[st]})
            if ep % 100 == 0:
                mse = loss.eval(feed_dict={X: x_batches[st], y: y_batches[st]})
                print(ep, "MSE:", mse)
        y_pred[st] = sess.run(outputs, feed_dict={X: x_batches_test[st]})
    print("\n")

I'm finding the weights and biases using the following algorithm and saving them in weights and biases as a list of lists (a hypothetical sketch of saving and reloading these arrays follows the class below).

class network:
    def set_weight_bias(self, a):
        # Split the flat parameter vector a into per-layer weight matrices and
        # bias vectors, in the order given by self.sizes.
        lIt = 0
        rIt = 0
        self.weights = []
        self.biases = []
        for x,y in zip(self.sizes[1:], self.sizes[:-1]):
            rIt += x*y
            self.weights.append(a[lIt:rIt].reshape((x,y)))
            lIt = rIt
        for x in self.sizes[1:]:
            rIt += x
            self.biases.append(a[lIt:rIt].reshape((x,1)))
            lIt = rIt

    ...
    """
    Cuckoo Search Optimization
    """

    def objectiveFunction(self,x):
        # Fitness for the cuckoo search: load the candidate parameter vector x
        # into the network and score the absolute error of its feedforward output.
        self.set_weight_bias(x)
        y_prime = self.feedforward(self.input)
        return sum(abs(u-v) for u,v in zip(y_prime, self.output))/x.shape[0]

    def cso(self, n, x, y, function, lb, ub, dimension, iteration, pa=0.25,
                 nest=100):
        """
        :param n: number of agents
        :param function: test function
        :param lb: lower limits for plot axes
        :param ub: upper limits for plot axes
        :param dimension: space dimension
        :param iteration: number of iterations
        :param pa: probability of cuckoo's egg detection (default value is 0.25)
        :param nest: number of nests (default value is 100)
        """
        ...
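
For reference, one hypothetical way to move these arrays between the two scripts is to dump the flat parameter vector found by cso to disk and reload it later. The question does not specify the file format, so np.save and the filename below are assumptions made purely for illustration.

import numpy as np

# Hypothetical illustration: the file format is not specified in the question,
# so np.save and the filename "cso_params.npy" are assumptions.
sizes = [1, 100, 1]                                   # inputs -> hidden -> output, as above
n_params = sum(x * y for x, y in zip(sizes[1:], sizes[:-1])) + sum(sizes[1:])
flat = np.random.rand(n_params)                       # stand-in for the vector found by cso()
np.save("cso_params.npy", flat)

# Later, e.g. in the TensorFlow script:
reloaded = np.load("cso_params.npy")                  # order and values are preserved,
assert np.allclose(flat, reloaded)                    # so set_weight_bias(reloaded) rebuilds
                                                      # the same per-layer weights and biases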

I want to use custom weights and biases to start my training instead of the weights and biases randomly assigned by TensorFlow. How can I do that in TensorFlow?

Recommended answer

Do you want to set weights for the RNN cell or for the Dense layer? If it's for the RNN cell, you should be able to set the weights using the set_weights method.
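
A minimal sketch of that idea, assuming TF 1.x and reusing X, inputs and hidden from the code in the question. The arrays cell_kernel and cell_bias below are stand-ins for the values loaded from your file; if set_weights is not available in your TensorFlow version, an equivalent is to assign the cell's variables directly once the graph is built, which is what this sketch does.

# Hedged sketch, assuming TF 1.x. cell_kernel / cell_bias are stand-ins for the
# NumPy arrays loaded from your file; for BasicRNNCell the kernel has shape
# (inputs + hidden, hidden) and the bias has shape (hidden,).
cell_kernel = np.random.rand(inputs + hidden, hidden).astype(np.float32)  # stand-in
cell_bias = np.zeros(hidden, dtype=np.float32)                            # stand-in

basic_cell = tf.contrib.rnn.BasicRNNCell(num_units=hidden, activation=tf.nn.relu)
rnn_output, states = tf.nn.dynamic_rnn(basic_cell, X, dtype=tf.float32)

# The cell's variables ([kernel, bias]) only exist after dynamic_rnn builds the cell.
kernel_var, bias_var = basic_cell.variables
load_cell_params = [tf.assign(kernel_var, cell_kernel),
                    tf.assign(bias_var, cell_bias)]

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    sess.run(load_cell_params)   # overwrite the random initialization with your values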

If it's for the Dense layer, you should be able to assign a Variable and use the initializer argument to pass your weights (and another for the biases). Then, when you call layers.dense, you can pass both of your variable tensors to kernel_initializer and bias_initializer for the weights and biases respectively.
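
For example, assuming dense_kernel (shape (hidden, output)) and dense_bias (shape (output,)) stand for the NumPy arrays loaded from your file, one way to express that is to wrap them in tf.constant_initializer when building the layer from the question's code.

# Hedged sketch: reuses stacked_rnn_output, hidden, output and num_periods from
# the code in the question. dense_kernel / dense_bias are stand-ins for the
# arrays loaded from your file.
dense_kernel = np.random.rand(hidden, output).astype(np.float32)   # stand-in
dense_bias = np.zeros(output, dtype=np.float32)                    # stand-in

stacked_outputs = tf.layers.dense(
    stacked_rnn_output, output,
    kernel_initializer=tf.constant_initializer(dense_kernel),
    bias_initializer=tf.constant_initializer(dense_bias))
outputs = tf.reshape(stacked_outputs, [-1, num_periods, output])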

