how to obtain the runtime batch size of a Keras model
Question
Based on this post, I need some basic implementation help. Below you see my model using a Dropout layer. When using the noise_shape parameter, the last batch may not match the fixed batch size, which raises an error (see the other post).
Original model:
from keras.models import Sequential
from keras.layers import Masking, Dropout, Dense, LSTM
from keras.constraints import max_norm

def LSTM_model(X_train, Y_train, dropout, hidden_units, MaskWert, batchsize):
    model = Sequential()
    model.add(Masking(mask_value=MaskWert, input_shape=(X_train.shape[1], X_train.shape[2])))
    model.add(Dropout(dropout, noise_shape=(batchsize, 1, X_train.shape[2])))
    model.add(Dense(hidden_units, activation='sigmoid', kernel_constraint=max_norm(max_value=4.)))
    model.add(LSTM(hidden_units, return_sequences=True, dropout=dropout, recurrent_dropout=dropout))
Now Alexandre Passos suggested getting the runtime batch size with tf.shape. I tried to implement the runtime-batch-size idea in Keras in different ways, but it never worked.
import keras.backend as K

def backend_shape(x):
    return K.shape(x)

def LSTM_model(X_train, Y_train, dropout, hidden_units, MaskWert, batchsize):
    batchsize = backend_shape(X_train)
    model = Sequential()
    ...
    model.add(Dropout(dropout, noise_shape=(batchsize[0], 1, X_train.shape[2])))
    ...
But that only gave me the static input tensor shape, not the runtime input tensor shape.
I also tried to use a Lambda layer:
def output_of_lambda(input_shape):
    return input_shape

def LSTM_model_2(X_train, Y_train, dropout, hidden_units, MaskWert, batchsize):
    model = Sequential()
    model.add(Lambda(output_of_lambda, output_shape=output_of_lambda))
    ...
    model.add(Dropout(dropout, noise_shape=(outputshape[0], 1, X_train.shape[2])))
And different variants. But as you probably already guessed, none of that worked. Is the model definition actually the correct place for this? Could you give me a tip, or better, just tell me how to obtain the running batch size of a Keras model? Thanks so much.
Answer
The current implementation does adjust the noise shape according to the runtime batch size. From the Dropout layer implementation code:
symbolic_shape = K.shape(inputs)
noise_shape = [symbolic_shape[axis] if shape is None else shape
               for axis, shape in enumerate(self.noise_shape)]
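As a plain-Python illustration of that list comprehension (resolve_noise_shape is a hypothetical helper for this answer, not part of Keras), each None entry in noise_shape is replaced by the corresponding input dimension:

```python
def resolve_noise_shape(input_shape, noise_shape):
    # Substitute each None entry with the matching input dimension,
    # mirroring the comprehension in the Dropout layer above.
    return tuple(input_shape[axis] if dim is None else dim
                 for axis, dim in enumerate(noise_shape))

# With input (batch=32, timesteps=10, features=4) and noise_shape=(None, 1, 4):
print(resolve_noise_shape((32, 10, 4), (None, 1, 4)))  # (32, 1, 4)
```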
So if you give noise_shape=(None, 1, features), the shape will resolve to (runtime_batchsize, 1, features) following the code above.
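A minimal sketch, assuming TensorFlow 2.x with tf.keras (the layer names and shapes here are illustrative, not taken from the question's model): passing None in the batch axis of noise_shape lets Dropout pick up whatever batch size arrives at runtime, including a short final batch.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dropout

features = 4
model = Sequential([
    tf.keras.Input(shape=(3, features)),
    # None in the batch axis is resolved to the runtime batch size;
    # the dropout mask is then shared across timesteps (axis 1).
    Dropout(0.5, noise_shape=(None, 1, features)),
])

# Works for any batch size, including a short last batch.
out_full = model(np.ones((8, 3, features), dtype="float32"), training=True)
out_last = model(np.ones((5, 3, features), dtype="float32"), training=True)
print(out_full.shape, out_last.shape)  # (8, 3, 4) and (5, 3, 4)
```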