Tensorflow Relu Misunderstanding

Problem Description

I've recently been doing a Udacity Deep Learning course which is based around TensorFlow. I have a simple MNIST program which is about 92% accurate:

from tensorflow.examples.tutorials.mnist import input_data
import tensorflow as tf

# Load MNIST with one-hot encoded labels
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)

# Single layer: 784 pixels -> 10 classes
x = tf.placeholder(tf.float32, [None, 784])
W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))

y = tf.nn.softmax(tf.matmul(x, W) + b)

# Cross-entropy loss against the one-hot labels
y_ = tf.placeholder(tf.float32, [None, 10])
cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(y), reduction_indices=[1]))

train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)

init = tf.initialize_all_variables()

sess = tf.Session()
sess.run(init)

# SGD on mini-batches of 100 examples
for i in range(1000):
    batch_xs, batch_ys = mnist.train.next_batch(100)
    sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})

# Accuracy on the test set
correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
print(sess.run(accuracy, feed_dict={x: mnist.test.images, y_: mnist.test.labels}))

The assignment I've been given is to convert this logistic regression example with SGD into a 1-hidden-layer neural network, using rectified linear units (nn.relu()) and 1024 hidden nodes.

I am having a mental block about this. Currently I have a 784 x 10 matrix of weights and a 10-element bias vector. I don't understand how to connect the resulting 10-element vector from WX + Bias to 1024 ReLUs.

If anyone could explain this to me I'd be very grateful.

Recommended Answer

Right now you have a single layer:

    y = softmax(x·W + b)            # W is 784 x 10

and you need something like this:

    hidden = relu(x·W1 + b1)        # W1 is 784 x 1024
    y = softmax(hidden·W2 + b2)     # W2 is 1024 x 10

(The original answer showed these as graph diagrams; the second diagram was missing the ReLU layer, which goes after + b1, so it is written into the formulas above.)
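In other words, W is no longer 784 x 10: the first weight matrix becomes 784 x 1024, so x·W1 + b1 produces a 1024-element vector that feeds the 1024 ReLUs, and a second 1024 x 10 matrix maps the hidden activations to the 10 classes. A minimal sketch of the two-layer version, in the same TF 1.x style as the question (the variable names and the truncated-normal initializer are illustrative choices, not from the original answer):

# Hidden layer: 784 inputs -> 1024 ReLU units
# (assumed init: small random values, since all-zeros would leave
# every hidden unit computing exactly the same thing)
W1 = tf.Variable(tf.truncated_normal([784, 1024], stddev=0.1))
b1 = tf.Variable(tf.zeros([1024]))
hidden = tf.nn.relu(tf.matmul(x, W1) + b1)

# Output layer: 1024 hidden units -> 10 classes
W2 = tf.Variable(tf.truncated_normal([1024, 10], stddev=0.1))
b2 = tf.Variable(tf.zeros([10]))
y = tf.nn.softmax(tf.matmul(hidden, W2) + b2)

The rest of the program (loss, optimizer, training loop, evaluation) can stay as it is, since it only refers to x, y and y_.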
