Keras model doesn't seem to work


Question

I have the following Keras model, and when I train it, it doesn't seem to learn. I asked around and got different suggestions, such as the weights not being initialised properly or back-propagation not happening. The model is:

# Imports and model construction (assumed; the original snippet starts at the first Conv2D layer)
from keras.models import Sequential
from keras.layers import Conv2D, MaxPool2D, Flatten, Dense

model = Sequential()
model.add(Conv2D(32, (3, 3), kernel_initializer='random_uniform', activation='relu', input_shape=(x1, x2, depth)))
model.add(MaxPool2D(pool_size=(2, 2)))

model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(MaxPool2D(pool_size=(2, 2)))

model.add(Flatten())

model.add(Dense(128, activation='relu'))

model.add(Dense(3, activation='softmax'))

I even looked at this solution, but I don't seem to have made that mistake; I do have softmax at the end. For reference, here is the output of the training process:

Epoch 1/10
283/283 [==============================] - 1s 2ms/step - loss: 5.1041 - acc: 0.6254 - val_loss: 9.0664 - val_acc: 0.4375
Epoch 2/10
283/283 [==============================] - 0s 696us/step - loss: 4.9550 - acc: 0.6926 - val_loss: 9.0664 - val_acc: 0.4375
Epoch 3/10
283/283 [==============================] - 0s 717us/step - loss: 4.9550 - acc: 0.6926 - val_loss: 9.0664 - val_acc: 0.4375
Epoch 4/10
283/283 [==============================] - 0s 692us/step - loss: 4.9550 - acc: 0.6926 - val_loss: 9.0664 - val_acc: 0.4375
Epoch 5/10
283/283 [==============================] - 0s 701us/step - loss: 4.9550 - acc: 0.6926 - val_loss: 9.0664 - val_acc: 0.4375
Epoch 6/10
283/283 [==============================] - 0s 711us/step - loss: 4.9550 - acc: 0.6926 - val_loss: 9.0664 - val_acc: 0.4375
Epoch 7/10
283/283 [==============================] - 0s 707us/step - loss: 4.9550 - acc: 0.6926 - val_loss: 9.0664 - val_acc: 0.4375
Epoch 8/10
283/283 [==============================] - 0s 708us/step - loss: 4.9550 - acc: 0.6926 - val_loss: 9.0664 - val_acc: 0.4375
Epoch 9/10
283/283 [==============================] - 0s 703us/step - loss: 4.9550 - acc: 0.6926 - val_loss: 9.0664 - val_acc: 0.4375
Epoch 10/10
283/283 [==============================] - 0s 716us/step - loss: 4.9550 - acc: 0.6926 - val_loss: 9.0664 - val_acc

This is how I compile the model:

from keras import optimizers  # import assumed; not shown in the original snippet

sgd = optimizers.SGD(lr=0.001, decay=1e-4, momentum=0.05, nesterov=True)

model.compile(loss='categorical_crossentropy',
              optimizer=sgd,
              metrics=['accuracy'])

Any suggestions? Is there something I'm missing? I have initialised the weights properly, and Keras seems to take care of back-propagation. What am I missing?

Answer

I found the solution: I had to normalise/scale the images for proper training. It's now training properly. Here's the link that helped me with it.
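
For illustration, a minimal sketch of that kind of scaling, assuming 8-bit images held in a NumPy array; the function name and dummy data below are placeholders, not from the original post. The scaled array would then be what gets passed to model.fit().

import numpy as np

def scale_images(images):
    # Rescale 8-bit pixel values from [0, 255] down to [0, 1].
    return images.astype(np.float32) / 255.0

# Dummy batch shaped (samples, height, width, channels) just to show the call.
dummy = np.random.randint(0, 256, size=(4, 32, 32, 3), dtype=np.uint8)
scaled = scale_images(dummy)
print(scaled.min(), scaled.max())  # values now lie within [0, 1]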

