Tensorflow error "unhashable type: 'numpy.ndarray'"
Problem description
import tensorflow as tf
import numpy as np

layer1_weight = tf.Variable(tf.zeros([2, 3]))
layer1_bias = tf.Variable(tf.zeros([3, 1]))
layer2_weight = tf.Variable(tf.zeros([3, 1]))
layer2_bias = tf.Variable(tf.constant([[0.]]))

input = tf.placeholder(tf.float32, [2, 1])
result = tf.placeholder(tf.float32, [1, 1])

data_input = [np.float32([[0.], [0.]]), np.float32([[0.], [1.]]),
              np.float32([[1.], [0.]]), np.float32([[1.], [1.]])]
data_output = [np.float32([[0.]]), np.float32([[1.]]),
               np.float32([[1.]]), np.float32([[0.]])]

layer1_output = tf.add(tf.matmul(tf.transpose(layer1_weight), input), layer1_bias)
layer2_output = tf.add(tf.matmul(tf.transpose(layer2_weight), layer1_output), layer2_bias)
print(data_input[0])

loss = tf.square(tf.subtract(result, layer2_output))
optimizer = tf.train.GradientDescentOptimizer(0.0001)
train_step = optimizer.minimize(loss)

sess = tf.Session()
init = tf.global_variables_initializer()
sess.run(init)
for i in range(30):
    j = int(i % 4)
    result = data_output[j]
    sess.run(train_step, feed_dict={input: data_input[j], result: data_output[j]})
print(str(layer2_output))
The code returns the error:

TypeError: unhashable type: 'numpy.ndarray'
Here I am trying to implement an XOR gate with a neural network but can't find the error.
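The error itself comes from dictionary semantics, not from TensorFlow: dict keys must be hashable, and NumPy arrays are not. A minimal sketch of the same rebinding pattern, using a plain object as a hypothetical stand-in for a tf.placeholder:

```python
import numpy as np

placeholder = object()                       # hashable, like a tf.placeholder handle
feed = {placeholder: np.zeros((1, 1))}       # fine: the key is the placeholder object

placeholder = np.zeros((1, 1))               # rebinding, like `result = data_output[j]`
try:
    feed = {placeholder: np.zeros((1, 1))}   # the key is now an ndarray
except TypeError as err:
    print(err)                               # unhashable type: 'numpy.ndarray'
```

Once the name is rebound to an array, building the feed dict raises exactly the TypeError seen above.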
Answer
First you define result as a placeholder, but later redefine it as result = data_output[j]. This is where it goes wrong: the name result now refers to a NumPy array instead of the placeholder, so the key in feed_dict is an ndarray, which is unhashable, and you can no longer feed the value to the graph.