ValueError: non-broadcastable output operand with shape (3,1) doesn't match the broadcast shape (3,4)
Problem Description
I recently started to follow along with Siraj Raval's Deep Learning tutorials on YouTube, but an error came up when I tried to run my code. The code is from the second episode of his series, How To Make A Neural Network. When I ran the code I got this error:
Traceback (most recent call last):
File "C:UsersdpoppDocumentsMachine Learningfirst_neural_net.py", line 66, in <module>
neural_network.train(training_set_inputs, training_set_outputs, 10000)
File "C:UsersdpoppDocumentsMachine Learningfirst_neural_net.py", line 44, in train
self.synaptic_weights += adjustment
ValueError: non-broadcastable output operand with shape (3,1) doesn't match the broadcast shape (3,4)
I checked multiple times with his code and couldn't find any differences, and even tried copying and pasting his code from the GitHub link. This is the code I have now:
from numpy import exp, array, random, dot


class NeuralNetwork():
    def __init__(self):
        # Seed the random number generator, so it generates the same numbers
        # every time the program runs.
        random.seed(1)

        # We model a single neuron, with 3 input connections and 1 output connection.
        # We assign random weights to a 3 x 1 matrix, with values in the range -1 to 1
        # and mean 0.
        self.synaptic_weights = 2 * random.random((3, 1)) - 1

    # The Sigmoid function, which describes an S shaped curve.
    # We pass the weighted sum of the inputs through this function to
    # normalise them between 0 and 1.
    def __sigmoid(self, x):
        return 1 / (1 + exp(-x))

    # The derivative of the Sigmoid function.
    # This is the gradient of the Sigmoid curve.
    # It indicates how confident we are about the existing weight.
    def __sigmoid_derivative(self, x):
        return x * (1 - x)

    # We train the neural network through a process of trial and error.
    # Adjusting the synaptic weights each time.
    def train(self, training_set_inputs, training_set_outputs, number_of_training_iterations):
        for iteration in range(number_of_training_iterations):
            # Pass the training set through our neural network (a single neuron).
            output = self.think(training_set_inputs)

            # Calculate the error (The difference between the desired output
            # and the predicted output).
            error = training_set_outputs - output

            # Multiply the error by the input and again by the gradient of the Sigmoid curve.
            # This means less confident weights are adjusted more.
            # This means inputs, which are zero, do not cause changes to the weights.
            adjustment = dot(training_set_inputs.T, error * self.__sigmoid_derivative(output))

            # Adjust the weights.
            self.synaptic_weights += adjustment

    # The neural network thinks.
    def think(self, inputs):
        # Pass inputs through our neural network (our single neuron).
        return self.__sigmoid(dot(inputs, self.synaptic_weights))


if __name__ == '__main__':
    # Initialize a single neuron neural network
    neural_network = NeuralNetwork()

    print("Random starting synaptic weights:")
    print(neural_network.synaptic_weights)

    # The training set. We have 4 examples, each consisting of 3 input values
    # and 1 output value.
    training_set_inputs = array([[0, 0, 1], [1, 1, 1], [1, 0, 1], [0, 1, 1]])
    training_set_outputs = array([[0, 1, 1, 0]])

    # Train the neural network using a training set
    # Do it 10,000 times and make small adjustments each time
    neural_network.train(training_set_inputs, training_set_outputs, 10000)

    print("New Synaptic weights after training:")
    print(neural_network.synaptic_weights)

    # Test the neural net with a new situation
    print("Considering new situation [1, 0, 0] -> ?:")
    print(neural_network.think(array([[1, 0, 0]])))
Even after copying and pasting the same code that worked in Siraj's episode, I'm still getting the same error.
I just started looking into artificial intelligence, and I don't understand what the error means. Could someone please explain what it means and how to fix it? Thanks!
Recommended Answer

Change

    self.synaptic_weights += adjustment

to

    self.synaptic_weights = self.synaptic_weights + adjustment
self.synaptic_weights must have a shape of (3,1) and adjustment must have a shape of (3,4). Although the shapes are broadcastable, NumPy refuses to assign the broadcast result, which has shape (3,4), to an array of shape (3,1): an in-place operation such as += must write its result into the existing output array, and that array cannot grow to the broadcast shape. For example:
>>> import numpy as np
>>> a = np.ones((3, 1), dtype=int)
>>> b = np.random.randint(1, 10, (3, 4))
>>> a
array([[1],
[1],
[1]])
>>> b
array([[8, 2, 5, 7],
[2, 5, 4, 8],
[7, 7, 6, 6]])
>>> a + b
array([[9, 3, 6, 8],
[3, 6, 5, 9],
[8, 8, 7, 7]])
>>> b += a
>>> b
array([[9, 3, 6, 8],
[3, 6, 5, 9],
[8, 8, 7, 7]])
>>> a
array([[1],
[1],
[1]])
>>> a += b
Traceback (most recent call last):
File "<pyshell#24>", line 1, in <module>
a += b
ValueError: non-broadcastable output operand with shape (3,1) doesn't match the broadcast shape (3,4)
The same error occurs when using numpy.add and specifying a as the output array:
>>> np.add(a,b, out = a)
Traceback (most recent call last):
File "<pyshell#31>", line 1, in <module>
np.add(a,b, out = a)
ValueError: non-broadcastable output operand with shape (3,1) doesn't match the broadcast shape (3,4)
>>>
A new a needs to be created instead; note that it takes the broadcast shape (3,4):
>>> a = a + b
>>> a
array([[10, 4, 7, 9],
[ 4, 7, 6, 10],
[ 9, 9, 8, 8]])
>>>