How to work with multiple inputs for LSTM in Keras?


Question

I'm trying to predict the water usage of a population.

I have 1 main input:

  • water volume

and 2 secondary inputs:

  • temperature
  • rainfall

In theory they have a relation with the water supply.

It must be said that each rainfall and temperature measurement corresponds with a water volume value, so this is a time series problem.

The problem is that I don't know how to use the 3 inputs from just one .csv file with 3 columns, one per input, as the code below is set up. When I have just one input (e.g. water volume) the network works more or less well with this code, but not when I have more than one (so if you run this code with the csv file below, it will raise a dimension error).
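For reference, a Keras LSTM expects a 3D input of shape (samples, time steps, features). A minimal sketch of cutting a 3-column series into such windows, with made-up numbers and only assuming the columns are temperature, rainfall and water volume:

import numpy

# Toy stand-in for a 3-column dataset; the real values live in datos.csv
data = numpy.arange(30, dtype=float).reshape(10, 3)

look_back = 3
windows, targets = [], []
for i in range(len(data) - look_back - 1):
    windows.append(data[i:i + look_back, :])  # keep all 3 columns per time step
    targets.append(data[i + look_back, 2])    # predict the 3rd column (water volume)

X = numpy.array(windows)
y = numpy.array(targets)
print(X.shape)  # (6, 3, 3) -> (samples, time steps, features)
print(y.shape)  # (6,)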

After reading the answers to:

  • Time Series Prediction with LSTM Recurrent Neural Networks in Python with Keras
  • Time Series Forecast Case Study with Python: Annual Water Usage in Baltimore

it seems that many people have the same problem.

Code:

EDIT: the code has been updated.

import numpy
import matplotlib.pyplot as plt
import pandas
import math

from keras.models import Sequential
from keras.layers import Dense, LSTM, Dropout
from sklearn.preprocessing import MinMaxScaler
from sklearn.metrics import mean_squared_error


# convert an array of values into a dataset matrix

def create_dataset(dataset, look_back=1):
    dataX, dataY = [], []
    for i in range(len(dataset) - look_back - 1):
        a = dataset[i:(i + look_back), :]
        dataX.append(a)
        dataY.append(dataset[i + look_back, 2])
    return numpy.array(dataX), numpy.array(dataY)



# fix random seed for reproducibility
numpy.random.seed(7)


# load the dataset
dataframe = pandas.read_csv('datos.csv', engine='python') 
dataset = dataframe.values

# normalize the dataset
scaler = MinMaxScaler(feature_range=(0, 1))
dataset = scaler.fit_transform(dataset)

# split into train and test sets
train_size = int(len(dataset) * 0.67) 
test_size = len(dataset) - train_size
train, test = dataset[0:train_size, :], dataset[train_size:len(dataset), :]

# reshape into X=t and Y=t+1
look_back = 3
trainX, trainY = create_dataset(train, look_back)  
testX, testY = create_dataset(test, look_back)

# reshape input to be  [samples, time steps, features]
trainX = numpy.reshape(trainX, (trainX.shape[0], look_back, 3))
testX = numpy.reshape(testX, (testX.shape[0],look_back, 3))

# create and fit the LSTM network

model = Sequential()
model.add(LSTM(4, input_shape=(look_back, 3)))
model.add(Dense(1))
model.compile(loss='mean_squared_error', optimizer='adam')
history= model.fit(trainX, trainY,validation_split=0.33, nb_epoch=200, batch_size=32)

# Plot training
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('model loss')
plt.ylabel('pérdida')
plt.xlabel('época')
plt.legend(['entrenamiento', 'validación'], loc='upper right')
plt.show()

# make predictions
trainPredict = model.predict(trainX)
testPredict = model.predict(testX)

# Get something which has as many features as dataset
trainPredict_extended = numpy.zeros((len(trainPredict),3))
# Put the predictions there
trainPredict_extended[:,2] = trainPredict[:,0]
# Inverse transform it and select the 3rd column.
trainPredict = scaler.inverse_transform(trainPredict_extended) [:,2]  
print(trainPredict)
# Get something which has as many features as dataset
testPredict_extended = numpy.zeros((len(testPredict),3))
# Put the predictions there
testPredict_extended[:,2] = testPredict[:,0]
# Inverse transform it and select the 3rd column.
testPredict = scaler.inverse_transform(testPredict_extended)[:,2]   


trainY_extended = numpy.zeros((len(trainY),3))
trainY_extended[:,2]=trainY
trainY=scaler.inverse_transform(trainY_extended)[:,2]


testY_extended = numpy.zeros((len(testY),3))
testY_extended[:,2]=testY
testY=scaler.inverse_transform(testY_extended)[:,2]


# calculate root mean squared error
trainScore = math.sqrt(mean_squared_error(trainY, trainPredict))
print('Train Score: %.2f RMSE' % (trainScore))
testScore = math.sqrt(mean_squared_error(testY, testPredict))
print('Test Score: %.2f RMSE' % (testScore))

# shift train predictions for plotting
trainPredictPlot = numpy.empty_like(dataset)
trainPredictPlot[:, :] = numpy.nan
trainPredictPlot[look_back:len(trainPredict)+look_back, 2] = trainPredict

# shift test predictions for plotting
testPredictPlot = numpy.empty_like(dataset)
testPredictPlot[:, :] = numpy.nan
testPredictPlot[len(trainPredict)+(look_back*2)+1:len(dataset)-1, 2] = testPredict



#plot

serie,=plt.plot(scaler.inverse_transform(dataset)[:,2])
prediccion_entrenamiento,=plt.plot(trainPredictPlot[:,2],linestyle='--')  
prediccion_test,=plt.plot(testPredictPlot[:,2],linestyle='--')
plt.title('Consumo de agua')
plt.ylabel('cosumo (m3)')
plt.xlabel('dia')
plt.legend([serie,prediccion_entrenamiento,prediccion_test],['serie','entrenamiento','test'], loc='upper right')

This is the csv file I have created, in case it helps.

datos.csv

After changing the code I fixed all the errors, but I'm not really sure about the results. This is a zoom into the prediction plot:

which shows that there is a "displacement" between the predicted values and the real ones. When there is a maximum in the real time series, there is a minimum in the forecast at the same point, but it looks as if it corresponds to the previous time step.
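One way to check this (just a diagnostic idea, not something from the original post) is to compare the model against a naive persistence forecast that simply repeats the previous observation; if the two RMSEs are close, the network is essentially copying the last value, which shows up as a one-step shift in the plot:

# Naive persistence baseline on the de-normalized test data (diagnostic only)
persistence = testY[:-1]  # value at t used as the forecast for t+1
persistenceScore = math.sqrt(mean_squared_error(testY[1:], persistence))
print('Persistence Score: %.2f RMSE' % (persistenceScore))
print('Test Score: %.2f RMSE' % (math.sqrt(mean_squared_error(testY, testPredict))))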

Answer

Change

a = dataset[i:(i + look_back), 0]

to

a = dataset[i:(i + look_back), :]

if you want the 3 features in your training data.

Then use

model.add(LSTM(4, input_shape=(look_back,3)))

to specify that you have look_back time steps in your sequence, each with 3 features.

It should run.
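Putting the two changes together, a sketch of the data-preparation function and the model definition (keeping the rest of the question's code as it is):

# convert an array of values into a dataset matrix, keeping all 3 features
def create_dataset(dataset, look_back=1):
    dataX, dataY = [], []
    for i in range(len(dataset) - look_back - 1):
        dataX.append(dataset[i:(i + look_back), :])  # all columns per time step
        dataY.append(dataset[i + look_back, 2])      # water volume as the target
    return numpy.array(dataX), numpy.array(dataY)

# create and fit the LSTM network
model = Sequential()
model.add(LSTM(4, input_shape=(look_back, 3)))  # look_back time steps, 3 features
model.add(Dense(1))
model.compile(loss='mean_squared_error', optimizer='adam')

With create_dataset returning windows of shape (look_back, 3), trainX already comes out as (samples, look_back, 3), so the explicit numpy.reshape calls become no-ops.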

Indeed, sklearn.preprocessing.MinMaxScaler()'s inverse_transform() takes an input with the same shape as the data you fitted the scaler on, so you need to do something like this:

# Get something which has as many features as dataset
trainPredict_extended = np.zeros((len(trainPredict),3))
# Put the predictions there
trainPredict_extended[:,2] = trainPredict
# Inverse transform it and select the 3rd column.
trainPredict = scaler.inverse_transform(trainPredict_extended)[:,2]

I guess you will have other issues like this further down in your code, but nothing you can't fix :) the ML part is fixed and you know where the error comes from. Just check the shapes of your objects and try to make them match.
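For example, a few print statements make the mismatches easy to spot (a debugging aid, not part of the original answer):

# Quick shape checks before fitting and before inverse_transform
print('trainX:', trainX.shape)              # should be (samples, look_back, 3)
print('trainY:', trainY.shape)              # should be (samples,)
print('trainPredict:', trainPredict.shape)  # model.predict returns (samples, 1)
print('scaler fitted on', scaler.scale_.shape[0], 'features')  # inverse_transform needs this many columns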
