Where does the additional dimension of the Input in a keras.Model come from?
Problem description
When I define a model:
import tensorflow as tf
from tensorflow.keras import layers
import numpy as np
input_shape = (20,20)
input = tf.keras.Input(shape=input_shape)
nn = layers.Flatten()(input)
nn = layers.Dense(10)(nn)
output = layers.Activation('sigmoid')(nn)
model = tf.keras.Model(inputs=input, outputs=output)
Why do I need to add another dimension to my actual input:
actual_input = np.ones((1,20,20))
prediction = model.predict(actual_input)
Why can't I just do actual_input = np.ones((20,20))?
In the docs it says something about batch size. Is this batch size somehow related to my question? If so, why would I need it when I want to predict with my model? Thanks for any help.
Answer
In Keras (TensorFlow), one cannot predict on a single bare input. Therefore, even if you have a single example, you need to add a batch axis to it.

Practically, in this situation, you have a batch size of 1, hence the batch axis.
This is how TensorFlow and Keras are built: even for a single prediction, you need to add the batch axis (a batch size of 1 == 1 single example).
You can use np.expand_dims(input, axis=0) or tf.expand_dims(input, axis=0) to transform your input into a suitable format for prediction.
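For example, a sketch of the NumPy variant (using only NumPy so it runs without TensorFlow installed):

```python
import numpy as np

actual_input = np.ones((20, 20))            # a single example, no batch axis
batched = np.expand_dims(actual_input, axis=0)
print(batched.shape)  # (1, 20, 20) -- now suitable for model.predict(batched)
```

The result of model.predict on this input would likewise carry the batch axis, with shape (1, 10) for the Dense(10) model above.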