Why is TF Keras inference way slower than Numpy operations?
Problem description
I'm working on a reinforcement learning model implemented with Keras and Tensorflow. I have to do frequent calls to model.predict() on single inputs.
While testing inference on a simple pretrained model, I noticed that using Keras' model.predict is WAY slower than just using Numpy on stored weights. Why is it that slow, and how can I accelerate it? Using pure Numpy is not viable for complex models.
import timeit

import numpy as np
from tensorflow.python.keras.models import Sequential
from tensorflow.python.keras.layers import Dense

w = np.array([[-1., 1., 0., 0.], [0., 0., -1., 1.]]).T
b = np.array([15., -15., -21., 21.])

model = Sequential()
model.add(Dense(4, input_dim=2, activation='linear'))
model.layers[0].set_weights([w.T, b])
model.compile(loss='mse', optimizer='adam')

state = np.array([-23.5, 17.8])

def predict_very_slow():
    return model.predict(state[np.newaxis])[0]

def predict_slow():
    ws = model.layers[0].get_weights()
    return np.matmul(ws[0].T, state) + ws[1]

def predict_fast():
    return np.matmul(w, state) + b

print(
    timeit.timeit(predict_very_slow, number=10000),
    timeit.timeit(predict_slow, number=10000),
    timeit.timeit(predict_fast, number=10000)
)
# 5.168972805004538 1.6963867129435828 0.021918574168087623
# 5.461319456664639 1.5491559107269515 0.021502970783442876
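The gap between predict_slow and predict_fast above comes from calling get_weights() on every prediction. A pure-NumPy sketch of the caching pattern (make_fast_predictor is a hypothetical helper, not part of the original code): read the weights once outside the hot loop and close over them.

```python
import numpy as np

# Same toy weights as in the question
w = np.array([[-1., 1., 0., 0.], [0., 0., -1., 1.]]).T
b = np.array([15., -15., -21., 21.])

def make_fast_predictor(weights, bias):
    # Closure over cached weights: equivalent to predict_fast,
    # but the caching step is explicit and done only once
    def predict(state):
        return np.matmul(weights, state) + bias
    return predict

predict = make_fast_predictor(w, b)
state = np.array([-23.5, 17.8])
print(predict(state))  # [ 38.5 -38.5 -38.8  38.8]
```

For a real Keras model the weights would come from a single get_weights() call made before the loop, not inside it.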
Answer
A little late, but maybe useful for someone:
Replace model.predict(X) with model.predict(X, batch_size=len(X)). That should do it.
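The reason batching helps is that predict()'s fixed per-call overhead is paid once per call, not once per sample. A pure-NumPy illustration of the same idea (the states batch here is hypothetical): one vectorized operation over the whole batch matches a per-sample loop exactly, while doing far fewer calls.

```python
import numpy as np

# Same toy weights as in the question
w = np.array([[-1., 1., 0., 0.], [0., 0., -1., 1.]]).T
b = np.array([15., -15., -21., 21.])

# Hypothetical batch of states collected before predicting
states = np.array([[-23.5, 17.8],
                   [1.0, -2.0],
                   [0.0, 0.5]])

# One vectorized call over the whole batch
batched = states @ w.T + b

# Equivalent per-sample loop, paying overhead once per state
looped = np.stack([np.matmul(w, s) + b for s in states])

print(np.allclose(batched, looped))  # True
```

In the RL setting from the question, this means collecting states and predicting them in one call where the algorithm allows it, rather than calling predict() once per state.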