Zero accuracy training a neural network in Keras


Problem description

I am training a neural network on a regression problem in Keras. The output is only one-dimensional, and the accuracy in each epoch always shows acc: 0.0000e+00. Why?

1000/199873 [..............................] - ETA: 5s - loss: 0.0057 - acc: 0.0000e+00
2000/199873 [..............................] - ETA: 4s - loss: 0.0058 - acc: 0.0000e+00
3000/199873 [..............................] - ETA: 3s - loss: 0.0057 - acc: 0.0000e+00
4000/199873 [..............................] - ETA: 3s - loss: 0.0060 - acc: 0.0000e+00
...
198000/199873 [============================>.] - ETA: 0s - loss: 0.0055 - acc: 0.0000e+00
199000/199873 [============================>.] - ETA: 0s - loss: 0.0055 - acc: 0.0000e+00
199873/199873 [==============================] - 4s - loss: 0.0055 - acc: 0.0000e+00 - val_loss: 0.0180 - val_acc: 0.0000e+00

But if the output has two dimensions or more, the accuracy shows no such problem.

My model is as follows:

import numpy as np
from keras.models import Sequential, load_model
from keras.layers import Dense, Activation
from keras.layers.advanced_activations import LeakyReLU

input_dim = 14
batch_size = 1000
nb_epoch = 50

model = Sequential()
model.add(Dense(126, input_dim=input_dim))  # Dense(output_dim, input_dim=input_dim)
model.add(LeakyReLU(alpha=0.1))  # activation; use a fresh layer instance here

model.add(Dense(252))
model.add(LeakyReLU(alpha=0.1))  # reusing one LeakyReLU instance for both positions is a bug
model.add(Dense(1))
model.add(Activation('linear'))

model.compile(loss='mean_squared_error', optimizer='adam', metrics=['accuracy'])
model.summary()

# X_train_1 and y_train_1 are assumed to be defined earlier
history = model.fit(X_train_1, y_train_1[:, 0:1],
                    batch_size=batch_size,
                    nb_epoch=nb_epoch,
                    verbose=1,
                    validation_split=0.2)

loss = history.history.get('loss')
acc = history.history.get('acc')
val_loss = history.history.get('val_loss')
val_acc = history.history.get('val_acc')

# saving model
model.save('XXXXX')
del model

# loading model
model = load_model('XXXXX')

# prediction
pred = model.predict(X_train_1, batch_size, verbose=1)
ans = [np.argmax(r) for r in y_train_1[:, 0:1]]
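As an aside, the last line of the code takes np.argmax over a single-column slice, which always returns index 0, so `ans` carries no information for a one-dimensional regression target. A minimal sketch with made-up values:

```python
import numpy as np

# Hypothetical stand-in for y_train_1: continuous regression targets.
y_train_1 = np.array([[0.3, 1.0], [0.7, 2.0], [0.1, 3.0]])

# argmax over a slice with a single column can only ever return 0.
ans = [np.argmax(r) for r in y_train_1[:, 0:1]]
print(ans)  # [0, 0, 0]
```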

Recommended answer

The problem is that your final model output has a linear activation, making the model a regression, not a classification problem. "Accuracy" is defined when the model classifies data correctly according to class, but because a regression output is continuous, "accuracy" is effectively undefined for a regression problem.
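Concretely, for a one-dimensional output Keras falls back to a binary-style accuracy, which counts a prediction as correct only if it rounds to exactly the target value; for continuous regression targets that essentially never happens. A rough numpy sketch of that comparison (the values are made up, and this is an approximation of the metric, not Keras' exact code):

```python
import numpy as np

def binary_accuracy(y_true, y_pred):
    # Roughly what Keras reports as "accuracy" for a 1-D output:
    # the fraction of predictions that, after rounding, exactly equal the target.
    return np.mean(np.equal(y_true, np.round(y_pred)))

y_true = np.array([0.41, 1.37, 2.80])  # continuous regression targets
y_pred = np.array([0.40, 1.35, 2.79])  # close predictions, i.e. low MSE

print(binary_accuracy(y_true, y_pred))  # 0.0 despite a small loss
```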

If you actually want classification, use loss='categorical_crossentropy' with activation='softmax'.
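Once the model outputs class probabilities via softmax, accuracy becomes well-defined: the predicted class (argmax of the probabilities) either matches the one-hot label or it does not. A small numpy illustration with made-up numbers:

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Two hypothetical samples, three classes; one-hot targets.
logits = np.array([[2.0, 0.5, 0.1], [0.2, 0.1, 3.0]])
y_true = np.array([[1, 0, 0], [0, 0, 1]])

probs = softmax(logits)
acc = np.mean(np.argmax(probs, axis=1) == np.argmax(y_true, axis=1))
print(acc)  # 1.0 -- accuracy is meaningful once outputs are class scores
```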

This is similar to your question: link

For more information, see: StackExchange
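If the model is meant to stay a regression, the meaningful thing to track is a continuous metric such as mean absolute error (in Keras, metrics=['mae'] at compile time) rather than accuracy. In numpy terms, with illustrative values:

```python
import numpy as np

y_true = np.array([0.41, 1.37, 2.80])  # made-up regression targets
y_pred = np.array([0.40, 1.35, 2.79])  # made-up predictions

# Mean absolute error: average distance between prediction and target.
mae = np.mean(np.abs(y_true - y_pred))
print(mae)
```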
