keras LSTM layer takes too long to train


Problem description

Whenever I try out LSTM models in Keras, it seems the model is impossible to train due to the long training time.

For instance, a model like this takes 80 seconds per step to train:

# Imports assumed for this snippet (Keras 2.x layer/model APIs):
from keras.layers import Input, Dense, LSTM, Concatenate
from keras.models import Model

def create_model(self):
    inputs = {}
    inputs['input'] = []
    lstm = []
    placeholder = {}
    # One Input branch per timeframe, each feeding a small LSTM
    for tf, v in self.env.timeframes.items():
        inputs[tf] = Input(shape=v['shape'], name=tf)
        lstm.append(LSTM(8)(inputs[tf]))
        inputs['input'].append(inputs[tf])
    account = Input(shape=(3,), name='account')
    account_ = Dense(8, activation='relu')(account)  # defined but unused below
    dt = Input(shape=(7,), name='dt')
    dt_ = Dense(16, activation='relu')(dt)           # defined but unused below
    inputs['input'].extend([account, dt])

    data = Concatenate(axis=1)(lstm)
    data = Dense(128, activation='relu')(data)
    # Note: the raw account/dt inputs are concatenated here, not account_/dt_
    y = Concatenate(axis=1)([data, account, dt])
    y = Dense(256, activation='relu')(y)
    y = Dense(64, activation='relu')(y)
    y = Dense(16, activation='relu')(y)
    output = Dense(3, activation='linear')(y)

    model = Model(inputs=inputs['input'], outputs=output)
    model.compile(loss='mse', optimizer='adam', metrics=['mae'])
    return model

Whereas the same model with each LSTM substituted by Flatten + Dense, like this:

# Imports assumed for this snippet (Keras 2.x layer/model APIs):
from keras.layers import Input, Dense, Flatten, Concatenate
from keras.models import Model

def create_model(self):
    inputs = {}
    inputs['input'] = []
    lstm = []
    placeholder = {}
    for tf, v in self.env.timeframes.items():
        inputs[tf] = Input(shape=v['shape'], name=tf)
        # lstm.append(LSTM(8)(inputs[tf]))  # LSTM replaced by Flatten + Dense:
        placeholder[tf] = Flatten()(inputs[tf])
        lstm.append(Dense(32, activation='relu')(placeholder[tf]))
        inputs['input'].append(inputs[tf])
    account = Input(shape=(3,), name='account')
    account_ = Dense(8, activation='relu')(account)  # defined but unused below
    dt = Input(shape=(7,), name='dt')
    dt_ = Dense(16, activation='relu')(dt)           # defined but unused below
    inputs['input'].extend([account, dt])

    data = Concatenate(axis=1)(lstm)
    data = Dense(128, activation='relu')(data)
    y = Concatenate(axis=1)([data, account, dt])
    y = Dense(256, activation='relu')(y)
    y = Dense(64, activation='relu')(y)
    y = Dense(16, activation='relu')(y)
    output = Dense(3, activation='linear')(y)

    model = Model(inputs=inputs['input'], outputs=output)
    model.compile(loss='mse', optimizer='adam', metrics=['mae'])
    return model

takes 45-50 ms per step to train.

Is there something wrong in the model that is causing this? Or is this as fast as this model will run?

-- self.env.timeframes looks like this, a dictionary with 9 items (see the note on sequence lengths after it):

timeframes = {
            's1': {
                'lookback': 86400,
                'word': '1 s',
                'unit': 1,
                'offset': 12
                },
            's5': {
                'lookback': 200,
                'word': '5 s',
                'unit': 5,
                'offset': 2
                },
            'm1': {
                'lookback': 100,
                'word': '1 min',
                'unit': 60,
                'offset': 0
                },
            'm5': {
                'lookback': 100,
                'word': '5 min',
                'unit': 300,
                'offset': 0
                },
            'm30': {
                'lookback': 100,
                'word': '30 min',
                'unit': 1800,
                'offset': 0
                },
            'h1': {
                'lookback': 200,
                'word': '1 h',
                'unit': 3600,
                'offset': 0
                },
            'h4': {
                'lookback': 200,
                'word': '4 h',
                'unit': 14400,
                'offset': 0
                },
            'h12': {
                'lookback': 100,
                'word': '12 h',
                'unit': 43200,
                'offset': 0
                },
            'd1': {
                'lookback': 200,
                'word': '1 d',
                'unit': 86400,
                'offset': 0
                }
            }
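
If each branch's sequence length matches its 'lookback' value (an assumption, since the actual 'shape' entries are not shown), a plain LSTM has to step through an extremely long sequence per sample, dominated by the 86,400-step 's1' branch. A minimal tally:

# Hypothetical tally, assuming sequence length == 'lookback' for each branch
lookbacks = {
    's1': 86400, 's5': 200, 'm1': 100, 'm5': 100, 'm30': 100,
    'h1': 200, 'h4': 200, 'h12': 100, 'd1': 200,
}

total = sum(lookbacks.values())
print(total)                    # 87600 recurrent steps per sample
print(lookbacks['s1'] / total)  # ~0.986 -- the 1-second branch dominates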

GPU info from the prompt:

2018-06-30 07:35:16.204320: I T:\src\github\tensorflow\tensorflow\core\platform\cpu_feature_guard.cc:140] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
2018-06-30 07:35:16.495832: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:1356] Found device 0 with properties:
name: GeForce GTX 1080 major: 6 minor: 1 memoryClockRate(GHz): 1.86
pciBusID: 0000:01:00.0
totalMemory: 8.00GiB freeMemory: 6.59GiB
2018-06-30 07:35:16.495981: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:1435] Adding visible gpu devices: 0
2018-06-30 07:35:16.956743: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:923] Device interconnect StreamExecutor with strength 1 edge matrix:
2018-06-30 07:35:16.956827: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:929]      0
2018-06-30 07:35:16.957540: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:942] 0:   N
2018-06-30 07:35:16.957865: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:1053] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 6370 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1080, pci bus id: 0000:01:00.0, compute capability: 6.1)

Recommended answer

If you are using a GPU, replace all LSTM layers with CuDNNLSTM layers. You can import it from keras.layers:

from keras.layers import CuDNNLSTM

def create_model(self):
    inputs = {}
    inputs['input'] = []
    lstm = []
    placeholder = {}
    for tf, v in self.env.timeframes.items():
        inputs[tf] = Input(shape=v['shape'], name=tf)
        # The only change: LSTM -> CuDNNLSTM (cuDNN-backed, GPU only)
        lstm.append(CuDNNLSTM(8)(inputs[tf]))
        inputs['input'].append(inputs[tf])
    account = Input(shape=(3,), name='account')
    account_ = Dense(8, activation='relu')(account)
    dt = Input(shape=(7,), name='dt')
    dt_ = Dense(16, activation='relu')(dt)
    inputs['input'].extend([account, dt])

    data = Concatenate(axis=1)(lstm)
    data = Dense(128, activation='relu')(data)
    y = Concatenate(axis=1)([data, account, dt])
    y = Dense(256, activation='relu')(y)
    y = Dense(64, activation='relu')(y)
    y = Dense(16, activation='relu')(y)
    output = Dense(3, activation='linear')(y)

    model = Model(inputs=inputs['input'], outputs=output)
    model.compile(loss='mse', optimizer='adam', metrics=['mae'])
    return model

More information here: https://keras.io/layers/recurrent/#cudnnlstm
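
Note that CuDNNLSTM only runs on a CUDA-capable GPU and is less configurable than LSTM: it keeps the default tanh/sigmoid activations and does not support masking or recurrent dropout, so check those constraints against your model before switching.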

This should speed up the model considerably =)
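
For a quick sanity check of the speedup on your own machine, a timing sketch along these lines may help (a minimal sketch: the layer names are real keras.layers APIs, but the sizes, data, and build helper are made up):

import time
import numpy as np
from keras.layers import Input, LSTM, CuDNNLSTM, Dense
from keras.models import Model

# Build a tiny one-branch model around the given recurrent layer class
def build(rnn_layer, timesteps=1000, features=4):
    x = Input(shape=(timesteps, features))
    y = Dense(3, activation='linear')(rnn_layer(8)(x))
    model = Model(inputs=x, outputs=y)
    model.compile(loss='mse', optimizer='adam')
    return model

X = np.random.rand(256, 1000, 4)   # synthetic sequences
Y = np.random.rand(256, 3)         # synthetic targets

for layer in (LSTM, CuDNNLSTM):
    model = build(layer)
    start = time.time()
    model.fit(X, Y, batch_size=32, epochs=1, verbose=0)
    print(layer.__name__, round(time.time() - start, 2), 's')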
