Keras LSTM Accuracy too high

Problem Description

I'm trying to get an LSTM working in Keras, but even after the first epoch the accuracy seems too high (90%) and I'm worried it isn't training properly. I took some ideas from this post:

https://machinelearningmastery.com/text-generation-lstm-recurrent-neural-networks-python-keras/

Here's my code:

import numpy
from keras.utils import np_utils
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import LSTM
from keras.layers import Dropout
from keras.preprocessing.sequence import pad_sequences
from pandas import read_csv
import simplejson

numpy.random.seed(7)

# load the CSV and keep the raw values: (time_date, name, user_id)
dataset = read_csv("mydataset.csv", delimiter=",", quotechar='"').values

# map each name string to an integer index, and back
char_to_int = dict((c, i) for i, c in enumerate(dataset[:,1]))
int_to_char = dict((i, c) for i, c in enumerate(dataset[:,1]))

f = open('char_to_int_v2.txt', 'w')
simplejson.dump(char_to_int, f)
f.close()

f = open('int_to_char_v2.txt', 'w')
simplejson.dump(int_to_char, f)
f.close()

seq_length = 1

max_len = 5

dataX = []
dataY = []

# build randomly positioned, variable-length input windows and the row that follows each one
for i in range(0, len(dataset) - seq_length, 1):
    start = numpy.random.randint(len(dataset)-2)
    end = numpy.random.randint(start, min(start+max_len,len(dataset)-1))
    sequence_in = dataset[start:end+1]
    sequence_out = dataset[end + 1]
    dataX.append([[char[0], char_to_int[char[1]], char[2]] for char in sequence_in])
    dataY.append([sequence_out[0], char_to_int[sequence_out[1]], sequence_out[2]])

# left-pad every sequence with zeros so all inputs have max_len timesteps
X = pad_sequences(dataX, maxlen=max_len, dtype='float32')
X = numpy.reshape(X, (X.shape[0], max_len, 3))

y = numpy.reshape(dataY, (X.shape[0], 3))

batch_size = 1

model = Sequential()
model.add(LSTM(32, input_shape=(X.shape[1], X.shape[2])))
model.add(Dropout(0.2))
model.add(Dense(y.shape[1], activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])

n_epoch = 1

# fit one epoch at a time so the LSTM state can be reset between epochs
for i in range(n_epoch):
    model.fit(X, y, epochs=1, batch_size=batch_size, verbose=1, shuffle=False)
    model.reset_states()

model.save_weights("weights.h5")
model.save('model.h5')
with open('model-params.json', 'w') as f:
    f.write(model.to_json())

scores = model.evaluate(X, y, verbose=0)
print("Model Accuracy: %.2f%%" % (scores[1]*100))

Here's what my dataset looks like:

"time_date","name","user_id"
1402,"Sugar",3012
1402,"Milk",3012
1802,"Tomatoes",3012
1802,"Cucumber",3012
etc...

From what I understand, my dataX will have a shape of (n_samples, 5, 3), because I'm padding zeros onto the left of each sequence. So if I build a sample out of the first rows above, it will look like this (the second column comes from the char_to_int function, so I'm putting in random numbers as examples):

[[0, 0, 0], [0, 0, 0], [0, 0, 0], [1402, 5323, 3012], [1402, 5324, 3012]]

And my dataY for that will be:

[[1802, 3212, 3012]]
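
As a side note on the padding behavior, a minimal sketch with made-up values (assuming the same 3 features per timestep; pad_sequences pads on the left by default):

from keras.preprocessing.sequence import pad_sequences

# two toy sequences of different lengths, each timestep holding 3 features
toy = [[[1402, 5323, 3012], [1402, 5324, 3012]],  # length 2
       [[1802, 3212, 3012]]]                      # length 1
padded = pad_sequences(toy, maxlen=5, dtype='float32')  # padding='pre' is the default
print(padded.shape)   # (2, 5, 3)
print(padded[0])      # three all-zero timesteps followed by the two real rows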

Is that correct? If so, something else must definitely be wrong, because this is the output after 1 epoch:

9700/9700 [==============================] - 31s - loss: 10405.0951 - acc: 0.8544
Model Accuracy: 87.49%

I feel like I'm almost there with this model, but I'm missing something important and I don't know what it is. I'd appreciate any guidance on this. Thanks.

Answer

It seems I misinterpreted how to shape my data: since I'm using a categorical_crossentropy loss, I had to one-hot encode my dataY with to_categorical, which worked perfectly. However, when trying to train on large datasets I got the very famous MemoryError. Thanks djk47463.
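
For reference, a minimal sketch of that fix, assuming the target is just the name column (y_names is my name for the intermediate list; dataY and X come from the code above):

from keras.models import Sequential
from keras.layers import Dense, Dropout, LSTM
from keras.utils import np_utils

# categorical_crossentropy expects one probability per class, so the
# integer-encoded name column of dataY must be one-hot encoded
y_names = [row[1] for row in dataY]     # integer-encoded names only
y = np_utils.to_categorical(y_names)    # shape: (n_samples, n_classes)

model = Sequential()
model.add(LSTM(32, input_shape=(X.shape[1], X.shape[2])))
model.add(Dropout(0.2))
model.add(Dense(y.shape[1], activation='softmax'))  # one unit per class
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])

The MemoryError on large datasets comes from materializing that dense (n_samples, n_classes) matrix; keeping the integer labels and switching the loss to sparse_categorical_crossentropy is one way to avoid it.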
