Keras LSTM predicted timeseries squashed and shifted

Problem description

I'm trying to get some hands-on experience with Keras during the holidays, and I thought I'd start out with the textbook example of timeseries prediction on stock data. So what I'm trying to do is: given the last 48 hours' worth of average price changes (percent since previous), predict the average price change of the coming hour.

However, when verifying against the test set (or even the training set) the amplitude of the predicted series is way off, and sometimes is shifted to be either always positive or always negative, i.e., shifted away from the 0% change, which I think would be correct for this kind of thing.

I came up with the following minimal example to show the issue:

import numpy as np
import pandas as pd
from keras.models import Sequential
from keras.layers import LSTM, Dense
from matplotlib.pyplot import plot, axis

# DataFrame.from_csv is deprecated; read_csv with index_col=0 is the equivalent
df = pd.read_csv('test-data-01.csv', header=0, index_col=0)
df['pct'] = df.value.pct_change(periods=1)

seq_len = 48
vals = df.pct.values[1:] # First pct change is NaN, skip it
sequences = []
for i in range(0, len(vals) - seq_len):
    sx = vals[i:i+seq_len].reshape(seq_len, 1)  # input: 48 hours of pct changes
    sy = vals[i+seq_len]                        # target: the following hour
    sequences.append((sx, sy))

row = -24  # hold out the last 24 sequences for testing
trainSeqs = sequences[:row]
testSeqs = sequences[row:]

trainX = np.array([i[0] for i in trainSeqs])
trainy = np.array([i[1] for i in trainSeqs])

model = Sequential()
model.add(LSTM(25, batch_input_shape=(1, seq_len, 1)))
model.add(Dense(1))
model.compile(loss='mse', optimizer='adam')
model.fit(trainX, trainy, epochs=1, batch_size=1, verbose=1, shuffle=True)

pred = []
for s in trainSeqs:
    pred.append(model.predict(s[0].reshape(1, seq_len, 1)))
pred = np.array(pred).flatten()

plot(pred)
plot([i[1] for i in trainSeqs])
axis([2500, 2550, -0.03, 0.03])

As you can see, I create the training and testing sequences by packing the last 48 hours and the next step into a tuple, then advancing 1 hour and repeating the procedure. The model is very simple: one LSTM layer and one dense layer.

I would have expected the plot of individual predicted points to overlap pretty nicely with the plot of training sequences (after all, this is the same set they were trained on), and to sort of match the test sequences. However, I get the following result on training data:

  • Orange: real data
  • Blue: predicted data

Any idea what might be going on? Did I misunderstand something?

Update: to better show what I mean by "shifted" and "squashed", I also plotted the predicted values, shifting them back to match the real data and multiplying them to match the amplitude.

plot(pred*12 - 0.03)
plot([i[1] for i in trainSeqs])
axis([2500, 2550, -0.03, 0.03])

As you can see, the prediction fits the real data nicely; it's just squashed and offset somehow, and I can't figure out why.
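Rather than eyeballing the shift and scale (the hand-tuned pred*12 - 0.03 above), a least-squares fit can recover the affine transform that best maps the predictions onto the real data. A minimal sketch on synthetic stand-in series (the names true and pred and the 12x/-0.03 factors are illustrative, mirroring the values in the question):

```python
import numpy as np

# Hypothetical stand-in series: "true" targets and a squashed/offset "prediction"
rng = np.random.default_rng(0)
true = rng.normal(0.0, 0.01, size=500)
pred = true / 12.0 + 0.0025  # squashed by 12x and shifted, as observed above

# Least-squares fit of true ≈ a*pred + b recovers the transform
a, b = np.polyfit(pred, true, deg=1)
rescaled = a * pred + b
print(round(a, 2), round(b, 4))  # → 12.0 -0.03
```

This only quantifies the distortion; it doesn't explain it, which is what the answer below addresses.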

Recommended answer

I presume you are overfitting, since the dimensionality of your data is 1, and an LSTM with 25 units seems rather complex for such a low-dimensional dataset. Here's a list of things that I would try:

  • Decreasing the LSTM dimension.
  • Adding some form of regularization to combat overfitting. For example, dropout might be a good choice.
  • Training for more epochs or changing the learning rate. The model might need more epochs or bigger updates to find the appropriate parameters.
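The suggestions above could be sketched roughly as follows, assuming a tensorflow.keras setup; the unit count, dropout rate, learning rate, and epoch count are illustrative choices, not tuned values:

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dropout, Dense
from tensorflow.keras.optimizers import Adam

seq_len = 48
model = Sequential()
model.add(LSTM(8, input_shape=(seq_len, 1)))  # smaller LSTM: 8 units instead of 25
model.add(Dropout(0.2))                       # dropout to combat overfitting
model.add(Dense(1))
model.compile(loss='mse', optimizer=Adam(learning_rate=1e-3))
# More epochs than the single one used in the question, e.g.:
# model.fit(trainX, trainy, epochs=20, batch_size=32, shuffle=True)
```

Note that dropping batch_input_shape in favor of input_shape also frees you from the batch_size=1 constraint during training.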

UPDATE. Let me summarize what we discussed in the comments section.

Just for clarification, the first plot doesn't show the predicted series for a validation set, but for the training set. Therefore, my first overfitting interpretation might be inaccurate. I think an appropriate question to ask would be: is it actually possible to predict the future price change from such a low-dimensional dataset? Machine learning algorithms aren't magical: they'll find patterns in the data only if they exist.

If the past price change alone is indeed not very informative of the future price change then:

  • Your model will learn to predict the mean of the price change (probably something around 0), since that is the value that produces the lowest loss in the absence of informative features.
  • The predictions may appear slightly "shifted" because the price change at timestep t+1 is slightly correlated with the price change at timestep t (but still, predicting something close to 0 is the safest choice). That is indeed the only pattern that I, as a non-expert, am able to observe (i.e., that the value at timestep t+1 sometimes resembles the one at timestep t).
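The first point, that with no informative features the MSE-minimizing constant prediction is the mean of the targets, can be checked numerically. A small sketch with hypothetical pct-change targets:

```python
import numpy as np

rng = np.random.default_rng(1)
y = rng.normal(0.0005, 0.01, size=10000)  # hypothetical pct-change targets

# With no informative features, search for the constant prediction with lowest MSE
candidates = np.linspace(-0.005, 0.005, 101)
losses = [np.mean((y - c) ** 2) for c in candidates]
best = candidates[int(np.argmin(losses))]
print(best, y.mean())  # the best constant is the grid point closest to the sample mean
```

This is why a model with nothing to go on tends to collapse toward a flat line near the average change.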

If values at timesteps t and t+1 happened to be more correlated in general, then I presume that the model would be more confident about this correlation and the amplitude of the prediction would be bigger.
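This lag-1 correlation is easy to measure directly. A sketch on a synthetic series standing in for df.pct.values (the 0.3 carry-over coefficient is an arbitrary choice for illustration):

```python
import numpy as np

# Hypothetical series with mild lag-1 correlation, standing in for df.pct.values
rng = np.random.default_rng(2)
noise = rng.normal(0.0, 0.01, size=2001)
vals = 0.3 * noise[:-1] + noise[1:]  # value at t+1 partly carries over from t

# Lag-1 autocorrelation: how informative is the change at t about the change at t+1?
r = np.corrcoef(vals[:-1], vals[1:])[0, 1]
print(round(r, 2))
```

Running the same two lines on the real df.pct.values would show how much (or how little) signal the model actually has to work with.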
