Tensorflow does not terminate using batches


Problem Description

I'm new to using TensorFlow and am struggling with it a bit. I'm trying to run a simple classification task using a softmax model, similar to the MNIST example.

I tried creating batches of my data and feeding them into the run method. My first approach was:

sess.run(train_step, feed_dict={x: feature_batch, y_: labels_batch})

which led to an error saying that tensors can't be put into feed_dict.

After some research, I found that I should use:

feat, lab = sess.run([feature_batch, label_batch])
sess.run(train_step, feed_dict={x: feat, y_: lab})

After trying this, my script never finishes the computation, but it also doesn't print any error.

Does anyone have a hint as to why it is not working?

The whole file looks like this:

def input_pipeline(filename='dataset.csv', batch_size=30, num_epochs=None):
    filename_queue = tf.train.string_input_producer([filename], num_epochs=num_epochs, shuffle=True)
    features, labels = read_from_cvs(filename_queue)

    # Keep a large buffer after dequeue so shuffling is effective.
    min_after_dequeue = 10000
    capacity = min_after_dequeue + 3 * batch_size
    feature_batch, label_batch = tf.train.shuffle_batch(
        [features, labels], batch_size=batch_size, capacity=capacity,
        min_after_dequeue=min_after_dequeue)
    return feature_batch, label_batch


def tensorflow():
    x = tf.placeholder(tf.float32, [None, num_attributes])
    W = tf.Variable(tf.zeros([num_attributes, num_types]))
    b = tf.Variable(tf.zeros([num_types]))

    y = tf.nn.softmax(tf.matmul(x, W) + b)
    y_ = tf.placeholder(tf.float32, [None, num_types])
    cross_entropy = tf.reduce_mean(
        tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y))
    train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)

    sess = tf.InteractiveSession()

    tf.global_variables_initializer().run()

    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(coord=coord)

    feature_batch, label_batch = input_pipeline()

    for _ in range(1200):
        feat, lab = sess.run([feature_batch, label_batch])
        sess.run(train_step, feed_dict={x: feat, y_: lab})

    coord.request_stop()
    coord.join(threads)

    correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
    #print(sess.run(accuracy, feed_dict={x: feature_batch, y_: label_batch}))
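
For reference, read_from_cvs is not shown in the question. A reader for this kind of queue-based pipeline typically parses one CSV line into a feature vector and a one-hot label. The sketch below is hypothetical: the column layout, the record_defaults, and the reuse of num_attributes and num_types are all assumptions.

def read_from_cvs(filename_queue):
    # Read one line from the filename queue.
    reader = tf.TextLineReader()
    _, value = reader.read(filename_queue)
    # Assumed layout: num_attributes float feature columns, then an integer class id.
    record_defaults = [[0.0]] * num_attributes + [[0]]
    columns = tf.decode_csv(value, record_defaults=record_defaults)
    features = tf.stack(columns[:num_attributes])
    label = tf.one_hot(columns[-1], num_types)  # one-hot to match y_'s shape
    return features, label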


Recommended Answer

Your script hangs because input_pipeline() is called only after tf.train.start_queue_runners(): the queue-runner threads that fill the shuffle_batch queue are never started, so sess.run blocks forever waiting for a full batch. Instead, you can use the input pipeline's tensors directly in your model definition. For example:

def tensorflow():
    x, y_ = input_pipeline()
    W = tf.Variable(tf.zeros([num_attributes, num_types]))
    b = tf.Variable(tf.zeros([num_types]))

    # softmax_cross_entropy_with_logits expects unscaled logits,
    # so keep the pre-softmax values for the loss.
    logits = tf.matmul(x, W) + b
    y = tf.nn.softmax(logits)
    cross_entropy = tf.reduce_mean(
        tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=logits))
    train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)

    sess = tf.InteractiveSession()

    tf.global_variables_initializer().run()

    coord = tf.train.Coordinator()
    # The queue runners are started *after* input_pipeline() has built its
    # queues, so the threads that fill the shuffle_batch queue actually run.
    threads = tf.train.start_queue_runners(coord=coord)

    for _ in range(1200):
        sess.run(train_step)
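
When the loop finishes, the input threads should be shut down cleanly, just as in the question's code; a minimal follow-up sketch, reusing coord and threads from above:

    # Ask the queue-runner threads to stop, then wait for them to exit.
    coord.request_stop()
    coord.join(threads)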

Alternatively, you can use placeholders together with tf.train.shuffle_batch. For example:

# ... omitted
features_placeholder = tf.placeholder(...)
labels_placeholder = tf.placeholder(...)
x, y_ = tf.train.shuffle_batch(
    [features_placeholder, labels_placeholder], batch_size=batch_size,
    capacity=capacity, min_after_dequeue=min_after_dequeue)
W = tf.Variable(tf.zeros([num_attributes, num_types]))
b = tf.Variable(tf.zeros([num_types]))
# ... omitted
for _ in range(1200):
    sess.run(train_step, feed_dict={features_placeholder: ..., labels_placeholder: ...})

