Keras save model issue


Problem description

This is a variational autoencoder network. I have to define a sampling method to generate the latent z, and I think that might be where the problem is. This .py file does the training, and another .py file does online prediction, so I need to save the Keras model. Saving the model works fine, but when I load the model from the 'h5' file, it shows an error:

NameError: name 'latent_dim' is not defined

Here is the code:

# imports used by the snippet below (not shown in the original post)
from sklearn import preprocessing
from keras.layers import Input, Dense, Dropout, Lambda
from keras.models import Model
from keras.optimizers import Adam
from keras import backend as K
from keras import objectives

# df, data_num, cluster_num and models_deep_learning_scaler are defined in
# earlier parts of the script that are not shown here
df_test = df[df['label']==cluster_num].iloc[:,:data_num.shape[1]]

data_scale_ = preprocessing.StandardScaler().fit(df_test.values)
data_num_ = data_scale_.transform(df_test.values)  # original had data_scale here, presumably a typo for the scaler fitted above
models_deep_learning_scaler.append(data_scale_)

batch_size = data_num_.shape[0]//10
original_dim = data_num_.shape[1]
latent_dim = data_num_.shape[1]*2
intermediate_dim = data_num_.shape[1]*10
nb_epoch = 1
epsilon_std = 0.001

# encoder
x = Input(shape=(original_dim,))
init_drop = Dropout(0.2, input_shape=(original_dim,))(x)
h = Dense(intermediate_dim, activation='relu')(init_drop)
z_mean = Dense(latent_dim)(h)
z_log_var = Dense(latent_dim)(h)


def sampling(args):
    z_mean, z_log_var = args
    epsilon = K.random_normal(shape=(latent_dim,), mean=0.,
                              std=epsilon_std)
    return z_mean + K.exp(z_log_var / 2) * epsilon


# note that "output_shape" isn't necessary with the TensorFlow backend
z = Lambda(sampling, output_shape=(latent_dim,))([z_mean, z_log_var])

# we instantiate these layers separately so as to reuse them later
decoder_h = Dense(intermediate_dim, activation='relu')
decoder_mean = Dense(original_dim, activation='linear')
h_decoded = decoder_h(z)
x_decoded_mean = decoder_mean(h_decoded)


def vae_loss(x, x_decoded_mean):
    xent_loss = original_dim * objectives.mae(x, x_decoded_mean)
    kl_loss = - 0.5 * K.sum(1 + z_log_var - K.square(z_mean) - K.exp(z_log_var), axis=-1)
    return xent_loss + kl_loss


vae = Model(x, x_decoded_mean)
vae.compile(optimizer=Adam(lr=0.01), loss=vae_loss)

train_ratio = 0.95
train_num = int(data_num_.shape[0]*train_ratio)

x_train = data_num_[:train_num,:]
x_test = data_num_[train_num:,:]

vae.fit(x_train, x_train,
        shuffle=True,
        nb_epoch=nb_epoch,
        batch_size=batch_size,
        validation_data=(x_test, x_test))

vae.save('./models/deep_learning_'+str(cluster_num)+'.h5')
del vae

from keras.models import load_model
vae = load_model('./models/deep_learning_'+str(cluster_num)+'.h5')

It shows the error: NameError: name 'latent_dim' is not defined

Recommended answer

For the variational loss you are using several variables that are not known to the Keras module. You need to pass them through the custom_objects parameter of the load_model function.

In your case:

vae.save('./vae_'+str(cluster_num)+'.h5')
vae.summary()

del vae

from keras.models import load_model
vae = load_model('./vae_'+str(cluster_num)+'.h5', custom_objects={'latent_dim': latent_dim, 'epsilon_std': epsilon_std, 'vae_loss': vae_loss})
vae.summary()
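
Note that the predicting script must itself define (or import) everything passed through custom_objects, since those names are resolved at load time. As a hedged alternative that sidesteps the name lookup entirely (this is not part of the original answer), you can persist only the weights with save_weights and rebuild the architecture in the predicting script before calling load_weights. In the sketch below, build_vae is a hypothetical helper that repeats the layer definitions from the training code, and x_new stands for the data to reconstruct:

# training script: persist only the layer parameters
vae.save_weights('./models/deep_learning_' + str(cluster_num) + '_weights.h5')

# predicting script (sketch): rebuild the same graph, then restore the weights
# build_vae is a hypothetical helper repeating the layer definitions above
# (Input -> Dropout -> Dense -> z_mean / z_log_var -> Lambda(sampling) -> decoder)
vae = build_vae(original_dim, intermediate_dim, latent_dim, epsilon_std)
vae.load_weights('./models/deep_learning_' + str(cluster_num) + '_weights.h5')
reconstruction = vae.predict(x_new, batch_size=batch_size)  # no compile needed for inference

Because save_weights/load_weights only store the layer parameters, nothing has to be deserialized by name, so the NameError cannot occur when the model is restored.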

