Exact model converging on keras-tf but not on keras


Problem Description

I am working on predicting the EWMA (exponentially weighted moving average) of a time series using a simple RNN. I have already posted about it here.
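For reference, the EWMA recurrence implemented by run_avg below is avg_t = (1 - alpha) * avg_{t-1} + alpha * x_t, which a single SimpleRNN unit with activation=None can represent exactly: a recurrent weight of 1 - alpha, an input weight of alpha, and zero bias.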

While the model converges beautifully using keras-tf (from tensorflow import keras), the exact same code fails to converge using native Keras (import keras).

Converging model code (keras-tf):

from tensorflow import keras
import numpy as np

np.random.seed(1337)  # for reproducibility

def run_avg(signal, alpha=0.2):
    # Compute the EWMA of `signal`, substituting the running average
    # for NaN or zero samples.
    avg_signal = []
    avg = np.mean(signal)
    for i, sample in enumerate(signal):
        if np.isnan(sample) or sample == 0:
            sample = avg
        avg = (1 - alpha) * avg + alpha * sample  # EWMA update
        avg_signal.append(avg)
    return np.array(avg_signal)

def train():
    x = np.random.rand(3000)
    y = run_avg(x)
    x = np.reshape(x, (-1, 1, 1))  # (samples, timesteps=1, features=1)
    y = np.reshape(y, (-1, 1))

    input_layer = keras.layers.Input(batch_shape=(1, 1, 1), dtype='float32')
    rnn_layer = keras.layers.SimpleRNN(1, stateful=True, activation=None, name='rnn_layer_1')(input_layer)
    model = keras.Model(inputs=input_layer, outputs=rnn_layer)

    model.compile(optimizer=keras.optimizers.SGD(lr=0.1), loss='mse')
    model.summary()

    print(model.get_layer('rnn_layer_1').get_weights())
    model.fit(x=x, y=y, batch_size=1, epochs=10, shuffle=False)
    print(model.get_layer('rnn_layer_1').get_weights())

train()

Non-converging model code (native Keras):

from keras import Model
from keras.layers import SimpleRNN, Input
from keras.optimizers import SGD

import numpy as np

np.random.seed(1337)  # for reproducibility

def run_avg(signal, alpha=0.2):
    # Compute the EWMA of `signal`, substituting the running average
    # for NaN or zero samples.
    avg_signal = []
    avg = np.mean(signal)
    for i, sample in enumerate(signal):
        if np.isnan(sample) or sample == 0:
            sample = avg
        avg = (1 - alpha) * avg + alpha * sample  # EWMA update
        avg_signal.append(avg)
    return np.array(avg_signal)

def train():
    x = np.random.rand(3000)
    y = run_avg(x)
    x = np.reshape(x, (-1, 1, 1))  # (samples, timesteps=1, features=1)
    y = np.reshape(y, (-1, 1))

    input_layer = Input(batch_shape=(1, 1, 1), dtype='float32')
    rnn_layer = SimpleRNN(1, stateful=True, activation=None, name='rnn_layer_1')(input_layer)
    model = Model(inputs=input_layer, outputs=rnn_layer)


    model.compile(optimizer=SGD(lr=0.1), loss='mse')
    model.summary()

    print(model.get_layer('rnn_layer_1').get_weights())
    model.fit(x=x, y=y, batch_size=1, epochs=10, shuffle=False)
    print(model.get_layer('rnn_layer_1').get_weights())

train()

In the converging tf-keras model, the loss minimizes and the weights nicely approximate the EWMA formula; in the non-converging model, the loss explodes to nan. The only difference, as far as I can tell, is the way I import the classes.

I used the same random seed for both implementations. I am working on a Windows PC, in an Anaconda environment with keras 2.2.4 and tensorflow 1.13.1 (which bundles Keras as version 2.2.4-tf).
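As a quick sanity check, the versions in play can be confirmed directly (the output values shown are the ones reported above):

import keras
import tensorflow as tf

print(keras.__version__)      # e.g. 2.2.4
print(tf.__version__)         # e.g. 1.13.1
print(tf.keras.__version__)   # e.g. 2.2.4-tf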

Any insight on this?

Answer

This is likely because the line below is implemented in TF Keras but not in native Keras:

self.input_spec = [InputSpec(ndim=3)]
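For context, input_spec is how a Keras layer declares the input rank it expects, so that mismatched inputs fail early. A minimal sketch of what that line means, using the keras 2.2.4 import path:

from keras.engine.base_layer import InputSpec  # location in keras 2.2.4

# ndim=3 declares that the layer expects 3-D input:
# (batch_size, timesteps, features) -- the standard RNN input shape.
spec = InputSpec(ndim=3)
print(spec.ndim)  # 3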

One instance of this difference is exactly the case you describe above.

I want to demonstrate a similar case using Keras's Sequential class.

The code below works fine with TF Keras:

from tensorflow import keras
import numpy as np
from tensorflow.keras.models import Sequential

np.random.seed(1337)  # for reproducibility

def run_avg(signal, alpha=0.2):
    # Compute the EWMA of `signal`, substituting the running average
    # for NaN or zero samples.
    avg_signal = []
    avg = np.mean(signal)
    for i, sample in enumerate(signal):
        if np.isnan(sample) or sample == 0:
            sample = avg
        avg = (1 - alpha) * avg + alpha * sample  # EWMA update
        avg_signal.append(avg)
    return np.array(avg_signal)

def train():
    x = np.random.rand(3000)
    y = run_avg(x)
    x = np.reshape(x, (-1, 1, 1))  # (samples, timesteps=1, features=1)
    y = np.reshape(y, (-1, 1))
    
    # SimpleRNN model
    model = Sequential()
    model.add(keras.layers.Input(batch_shape=(1, 1, 1), dtype='float32'))
    model.add(keras.layers.SimpleRNN(1, stateful=True, activation=None, name='rnn_layer_1'))
    model.compile(optimizer=keras.optimizers.SGD(lr=0.1), loss='mse')
    model.summary()
    
    print(model.get_layer('rnn_layer_1').get_weights())
    model.fit(x=x, y=y, batch_size=1, epochs=10, shuffle=False)
    print(model.get_layer('rnn_layer_1').get_weights())

train()

But if we run the same code using native Keras, we get the error shown below:

TypeError: The added layer must be an instance of class Layer. Found: Tensor("input_1_1:0", shape=(1, 1, 1), dtype=float32)

As the message indicates, Sequential.add() in native Keras requires a Layer instance, while Input() returns a Keras tensor.

If we replace the line of code below

model.add(Input(batch_shape=(1, 1, 1), dtype='float32'))

with the following line,

model.add(Dense(32, batch_input_shape=(1,1,1), dtype='float32'))

then the model converges under the native Keras implementation almost the same way as under the TF Keras implementation.
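For completeness, a minimal sketch of the full native-Keras workaround (it assumes the run_avg function defined above is in scope; the width of 32 for the Dense layer simply follows the replacement line and is not otherwise required):

from keras.models import Sequential
from keras.layers import Dense, SimpleRNN
from keras.optimizers import SGD
import numpy as np

np.random.seed(1337)  # for reproducibility

x = np.random.rand(3000)
y = run_avg(x)  # same EWMA target as defined above
x = np.reshape(x, (-1, 1, 1))  # (samples, timesteps=1, features=1)
y = np.reshape(y, (-1, 1))

model = Sequential()
# Dense carries the batch_input_shape that Input() would have supplied.
model.add(Dense(32, batch_input_shape=(1, 1, 1), dtype='float32'))
model.add(SimpleRNN(1, stateful=True, activation=None, name='rnn_layer_1'))
model.compile(optimizer=SGD(lr=0.1), loss='mse')
model.summary()

model.fit(x=x, y=y, batch_size=1, epochs=10, shuffle=False)
print(model.get_layer('rnn_layer_1').get_weights())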

You can refer to the links below if you want to compare the two implementations at the code level:

https://github.com/tensorflow/tensorflow/blob/r1.14/tensorflow/python/keras/layers/recurrent.py#L1364-L1375

https://github.com/keras-team/keras/blob/master/keras/layers/recurrent.py#L1082-L1091

