Neural network gives different results for each execution


Problem Description

This is the exact code I'm running with Keras and TensorFlow as the back end. For each run of the same program, the training results differ: sometimes it reaches 100% accuracy in the 400th iteration, sometimes in the 200th.

import numpy as np
from keras.models import Sequential
from keras.layers import Dense

# XOR truth table as training data
training_data = np.array([[0,0],[0,1],[1,0],[1,1]], "float32")
target_data = np.array([[0],[1],[1],[0]], "float32")

model = Sequential()
model.add(Dense(4, input_dim=2, activation='relu'))
model.add(Dense(1, activation='sigmoid'))

model.compile(loss='mean_squared_error',
              optimizer='adam',
              metrics=['binary_accuracy'])

model.fit(training_data, target_data, epochs=500, verbose=2)


Epoch 403/500
0s - loss: 0.2256 - binary_accuracy: 0.7500

So why does the result change in each execution when the training data is fixed? I would greatly appreciate an explanation.

Recommended Answer

The training set is fixed, but the initial weights of the neural network are set to random values in a small range, so each time you train the network you get slightly different results.
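To see why runs diverge, here is a minimal NumPy sketch of a Glorot/Xavier-style uniform initializer, similar in spirit to the default `glorot_uniform` kernel initializer that Keras uses for `Dense` layers (this is an illustration, not the actual Keras implementation):

```python
import numpy as np

def init_weights(fan_in, fan_out):
    # Glorot/Xavier-style uniform initializer: draw weights from a
    # small symmetric range scaled by the layer's fan-in and fan-out.
    limit = np.sqrt(6.0 / (fan_in + fan_out))
    return np.random.uniform(-limit, limit, size=(fan_in, fan_out))

# Two "runs" without fixing a seed: the starting weights differ,
# so gradient descent follows a different path each time.
w_run1 = init_weights(2, 4)
w_run2 = init_weights(2, 4)
print(np.allclose(w_run1, w_run2))  # almost certainly False
```

Because the loss surface for XOR has flat regions and multiple basins, starting from different random weights changes how many epochs the optimizer needs, which is exactly the run-to-run variation seen in the question.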

If you want reproducible results you can set the NumPy random seed with numpy.random.seed to a fixed value, so the same initial weights will be used every run, but beware that this can bias your network.
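A minimal sketch of seeding, placed before the model is built. Note that with the TensorFlow backend, fixing the NumPy seed alone is often not enough: TensorFlow keeps its own random state, so you usually also need its seed call (which is version-dependent, as noted in the comments) and Python's `random` seed:

```python
import random as python_random

import numpy as np

# Fix the seeds before constructing the model so the same initial
# weights are drawn every run. The exact TensorFlow call depends on
# your version: tf.random.set_seed(42) in TF 2.x,
# tf.set_random_seed(42) in TF 1.x.
np.random.seed(42)
python_random.seed(42)
# import tensorflow as tf
# tf.random.set_seed(42)

# With the NumPy seed reset, repeated draws are identical:
np.random.seed(42)
a = np.random.uniform(-1, 1, size=(2, 4))
np.random.seed(42)
b = np.random.uniform(-1, 1, size=(2, 4))
print(np.allclose(a, b))  # True
```

Full determinism on GPU may additionally require disabling non-deterministic ops, which varies by TensorFlow version; the seeds above cover the weight-initialization variation described in this answer.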

