Keras Model With CuDNNLSTM Layers Doesn't Work on Production Server
Problem Description
I have used an AWS p3 instance to train the following model using GPU acceleration:
from keras.layers import CuDNNLSTM, Dropout, Dense
from keras.models import Model

# `inputs` is a Keras Input tensor defined earlier (its shape is omitted in the question)
x = CuDNNLSTM(128, return_sequences=True)(inputs)
x = Dropout(0.2)(x)
x = CuDNNLSTM(128, return_sequences=False)(x)
x = Dropout(0.2)(x)
predictions = Dense(1, activation='tanh')(x)
model = Model(inputs=inputs, outputs=predictions)
After training I saved the model with Keras' save_model
function and moved it to a separate production server that doesn't have a GPU.
When I attempt to predict using the model on the production server it fails with the following error:
No OpKernel was registered to support Op 'CudnnRNN' with these attrs. Registered devices: [CPU], Registered kernels:
I'm guessing this is because the production server doesn't have GPU support, but I was hoping this wouldn't be a problem. Is there any way I can use this model on a production server without a GPU?
Recommended Answer
No, you can't: CuDNNLSTM layers require a CUDA-capable GPU. You have to replace your CuDNNLSTM layers with standard LSTM layers before running the model on a CPU-only server.
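A minimal sketch of that replacement, using the same architecture as the question. The input shape (10 timesteps, 8 features) and the weights filename are placeholder assumptions, not values from the question. Setting activation='tanh' and recurrent_activation='sigmoid' makes the LSTM layer mathematically equivalent to CuDNNLSTM, which is what allows the GPU-trained weights to be reused; whether the weight load converts automatically depends on your Keras/TensorFlow version.

```python
from tensorflow.keras.layers import Input, LSTM, Dropout, Dense
from tensorflow.keras.models import Model

# Placeholder input shape -- substitute the shape your model was trained with.
inputs = Input(shape=(10, 8))

# LSTM with tanh/sigmoid activations mirrors CuDNNLSTM's fixed behavior,
# but runs on plain CPU kernels instead of the GPU-only CudnnRNN op.
x = LSTM(128, activation='tanh', recurrent_activation='sigmoid',
         return_sequences=True)(inputs)
x = Dropout(0.2)(x)
x = LSTM(128, activation='tanh', recurrent_activation='sigmoid',
         return_sequences=False)(x)
x = Dropout(0.2)(x)
predictions = Dense(1, activation='tanh')(x)
model = Model(inputs=inputs, outputs=predictions)

# Then load the weights saved on the GPU instance (hypothetical path):
# model.load_weights('trained_weights.h5')
```

If the weight conversion fails in your version, an alternative is to load the original model on a GPU machine, call model.get_weights(), and set them on the rebuilt LSTM model after reordering to match.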