You must feed a value for placeholder tensor 'Placeholder' with dtype float (Tensorflow)

Problem description

import tensorflow as tf
import os
import sklearn.preprocessing
import pandas as pd
import numpy as np

print(os.getcwd())
os.chdir("C:/Users/jbnu/Documents/양지성/Scholar/정규학기/3-2/데이터마이닝실습/프로젝트/현행/bank-additional/bank-additional")

Importing and managing datasets

bank = pd.read_csv("bank4.csv", index_col=False)

tf.reset_default_graph()
keep_prob = tf.placeholder(tf.float32)
learning_rate = 0.003

x_data = bank.ix[:,0:9]; print(x_data)
y_data = bank.ix[:, [-1]]; print(y_data)
x_data = sklearn.preprocessing.scale(x_data).astype(np.float32); print(x_data)
y_data = y_data.astype(np.float32)

Setting placeholder and weights with 3 layers.

X = tf.placeholder(tf.float32, [None, 9]); print(X)
Y = tf.placeholder(tf.float32, [None, 1])

# Layer 1
W1 = tf.get_variable("weight1", shape=[9,15], dtype = tf.float32,
                     initializer=tf.contrib.layers.xavier_initializer())
b1 = tf.get_variable("bias1", shape=[15], dtype = tf.float32,
                     initializer=tf.contrib.layers.xavier_initializer())
layer1 = tf.nn.relu(tf.matmul(X, W1) + b1)
layer1 = tf.nn.dropout(layer1, keep_prob=keep_prob)

# Layer 2
W2 = tf.get_variable("weight2", shape=[15,15], dtype = tf.float32,
                     initializer=tf.contrib.layers.xavier_initializer())
b2 = tf.get_variable("bias2", shape=[15], dtype = tf.float32,
                     initializer=tf.contrib.layers.xavier_initializer())
layer2 = tf.nn.relu(tf.matmul(layer1, W2) + b2)
layer2 = tf.nn.dropout(layer2, keep_prob=keep_prob)

# Layer 3
W3 = tf.get_variable("weight3", shape=[15,15], dtype = tf.float32,
                     initializer=tf.contrib.layers.xavier_initializer())
b3 = tf.get_variable("bias3", shape=[15], dtype = tf.float32,
                     initializer=tf.contrib.layers.xavier_initializer())
layer3 = tf.nn.relu(tf.matmul(layer2, W3) + b3)
layer3 = tf.nn.dropout(layer3, keep_prob=keep_prob)

# Output Layer
W4 = tf.get_variable("weight4", shape=[15,1], dtype = tf.float32,
                     initializer=tf.contrib.layers.xavier_initializer())
b4 = tf.get_variable("bias4", shape=[1], dtype = tf.float32,
                     initializer=tf.contrib.layers.xavier_initializer())
hypothesis = tf.sigmoid(tf.matmul(layer3, W4) + b4)
hypothesis = tf.nn.dropout(hypothesis, keep_prob=keep_prob)

Defining cost function and optimizer.

cost = -tf.reduce_mean(Y * tf.log(hypothesis) + (1 - Y) * tf.log(1 - hypothesis))

train = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)

predicted = tf.cast(hypothesis > 0.5, dtype=tf.float32)
accuracy = tf.reduce_mean(tf.cast(tf.equal(predicted, Y), dtype=tf.float32))

Training and accuracy test

# Launch graph
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())

    for step in range(10001):
        sess.run(train, feed_dict={X: x_data, Y: y_data})
        if step % 1000 == 0:
            print("step: ", step, sess.run(cost, feed_dict={X: x_data, Y: y_data}), sep="\n")

    # Accuracy report
    h, c, a = sess.run([hypothesis, predicted, accuracy],
                       feed_dict={X: x_data, Y: y_data})
    print("\nHypothesis: ", h, "\nCorrect: ", c, "\nAccuracy: ", a)

I have no idea why my NN is not working.

I constantly get a message "You must feed a value for placeholder tensor 'Placeholder' with dtype float" though all of them are float32.

Also, my dropout rate encounters feed_dict error. Please run the code and tell me what's wrong.

Solution

It's complaining about the dropout keep_prob placeholder:

keep_prob = tf.placeholder(tf.float32)

You should either provide it in feed_dict along with X and Y, or make it a tf.placeholder_with_default if you don't want to pass it every time.
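
For example, here is a minimal sketch of both options (the 0.7 keep probability and the name="keep_prob" argument are illustrative choices, not taken from the question):

# Option 1: feed keep_prob explicitly, just like X and Y.
# Use a value below 1.0 while training and 1.0 when evaluating,
# so dropout is disabled for the cost/accuracy runs.
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for step in range(10001):
        sess.run(train, feed_dict={X: x_data, Y: y_data, keep_prob: 0.7})
        if step % 1000 == 0:
            print("step: ", step,
                  sess.run(cost, feed_dict={X: x_data, Y: y_data, keep_prob: 1.0}),
                  sep="\n")
    h, c, a = sess.run([hypothesis, predicted, accuracy],
                       feed_dict={X: x_data, Y: y_data, keep_prob: 1.0})

# Option 2: give the placeholder a default value instead.
# Runs that do not feed keep_prob fall back to 1.0 (no dropout),
# and the training loop can still override it via feed_dict as above.
keep_prob = tf.placeholder_with_default(1.0, shape=(), name="keep_prob")

The second option has the advantage that the existing sess.run calls for cost and accuracy keep working unchanged, since any run that omits keep_prob simply uses the default.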
