Executing a function in TensorFlow


Problem description

I have some questions regarding this code - a neural network in TensorFlow.

#!/usr/bin/env python

import tensorflow as tf
import numpy as np
from tensorflow.examples.tutorials.mnist import input_data


def init_weights(shape):
    return tf.Variable(tf.random_normal(shape, stddev=0.01))


def model(X, w_h, w_o):
    h = tf.nn.sigmoid(tf.matmul(X, w_h)) # this is a basic mlp, think 2 stacked logistic regressions
    return tf.matmul(h, w_o) # note that we dont take the softmax at the end because our cost fn does that for us


mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
trX, trY, teX, teY = mnist.train.images, mnist.train.labels, mnist.test.images, mnist.test.labels

X = tf.placeholder("float", [None, 784])
Y = tf.placeholder("float", [None, 10])

w_h = init_weights([784, 625]) # create symbolic variables
w_o = init_weights([625, 10])

py_x = model(X, w_h, w_o)

cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=py_x, labels=Y)) # compute costs
train_op = tf.train.GradientDescentOptimizer(0.05).minimize(cost) # construct an optimizer
predict_op = tf.argmax(py_x, 1)

# Launch the graph in a session
with tf.Session() as sess:
    # you need to initialize all variables
    tf.global_variables_initializer().run()

    for i in range(100):
        for start, end in zip(range(0, len(trX), 128), range(128, len(trX)+1, 128)):
            sess.run(train_op, feed_dict={X: trX[start:end], Y: trY[start:end]})
        print(i, np.mean(np.argmax(teY, axis=1) ==
                         sess.run(predict_op, feed_dict={X: teX})))

• After a single run of the training loop at line 37 (the outer for loop), how do I call model() with X[0] and the newly learnt w_h and w_o, so that I can see what the function returns?

• Similarly, how do I print the value of h inside the model() function?

Thanks in advance. I am new to TensorFlow :)

Answer

The feed_dict translates the placeholders into real values. So provide a single entry via feed_dict and evaluate py_x.

The following should work.

For the result (py_x):

      print(sess.run(py_x, feed_dict={X: [yoursample]}))
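For example, a minimal sketch (assuming the session from the question is still open and using the first test image, teX[0], as yoursample):

      sample = teX[0:1]                                    # slicing keeps the leading batch dimension, shape (1, 784)
      logits = sess.run(py_x, feed_dict={X: sample})       # forward pass with the learnt w_h and w_o
      print(logits)                                        # raw (pre-softmax) scores for the 10 classes
      print("predicted digit:", np.argmax(logits, axis=1))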
      

For h it's (almost) the same. But since h is a local variable inside model(), you'll need a reference to h in order to evaluate it. The easiest way is probably to replace the following lines (the numbers refer to the lines of the code in the question):

      (14) return tf.matmul(h, w_o)
      with
      (14) return (tf.matmul(h, w_o), h)
      
      (26) py_x = model(X, w_h, w_o)
      with
      (26) py_x, h = model(X, w_h, w_o)
      

and use:

      print(sess.run(h, feed_dict={X: [yoursample]}))
      

or, to evaluate several tensors in one call:

      py_val, h_val = sess.run([py_x, h], feed_dict={X: [yoursample]})
print(py_val, h_val)
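
Putting this together inside the session from the question (a minimal sketch, assuming model() has been changed as above to also return h):

      # still inside "with tf.Session() as sess:", after the training loop
      py_val, h_val = sess.run([py_x, h], feed_dict={X: teX[0:1]})
      print("logits:", py_val)               # shape (1, 10)
      print("hidden activations:", h_val)    # shape (1, 625)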
      

Explanation: because of the way we told TensorFlow how our network is constructed, we did not need an explicit reference to the (inner/hidden) variable h. But in order to evaluate it, we do need a reference that defines exactly what should be evaluated.

There are other ways to get tensors out of the guts of TensorFlow, but since we explicitly create this one a few lines above, I'd avoid dropping something into a black box and then asking that very same black box to give it back.
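
For completeness, one such alternative is to look the tensor up by name in the default graph. This is only a sketch: the op name used below ("Sigmoid:0") is an assumption that depends on graph construction order, so explicitly naming the op (e.g. tf.nn.sigmoid(..., name="hidden")) would be more robust:

      # assumes the sigmoid inside model() is the first Sigmoid op created in this graph
      h_by_name = tf.get_default_graph().get_tensor_by_name("Sigmoid:0")
      print(sess.run(h_by_name, feed_dict={X: teX[0:1]}))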
