In TensorFlow, what is the argument 'axis' in the function 'tf.one_hot'?
Question
Could anyone help with an explanation of what axis is in TensorFlow's one_hot function?
According to the documentation:

axis: The axis to fill (default: -1, a new inner-most axis)
The closest I came to an answer on SO was an explanation relevant to Pandas. I am not sure whether the context is just as applicable.
Answer
Here is an example:

x = tf.constant([0, 1, 2])

... is the input tensor and N=4 (each index is transformed into a 4D vector).
Computing one_hot_1 = tf.one_hot(x, 4).eval() yields a (3, 4) tensor:
[[ 1. 0. 0. 0.]
[ 0. 1. 0. 0.]
[ 0. 0. 1. 0.]]
... where the last dimension is one-hot encoded (clearly visible). This corresponds to the default axis=-1, i.e. the last one.
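The default-axis case can be reproduced with a minimal NumPy sketch (NumPy stands in for TensorFlow here so the snippet runs without a session; tf.one_hot(x, 4) with the default axis=-1 produces the same values):

```python
import numpy as np

# Equivalent of tf.one_hot(x, 4) with the default axis=-1:
# index into the rows of a depth-4 identity matrix, so each
# index in x becomes a one-hot row and the depth dimension
# is appended as the new inner-most axis.
x = np.array([0, 1, 2])
one_hot_1 = np.eye(4)[x]   # shape (3, 4)
print(one_hot_1)
```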
Now, computing one_hot_2 = tf.one_hot(x, 4, axis=0).eval() yields a (4, 3) tensor, which is not immediately recognizable as one-hot encoded:
[[ 1. 0. 0.]
[ 0. 1. 0.]
[ 0. 0. 1.]
[ 0. 0. 0.]]
This is because the one-hot encoding is done along the 0-axis, and one has to transpose the matrix to see the previous encoding. The situation becomes more complicated when the input is higher-dimensional, but the idea is the same: the difference is in the placement of the extra dimension used for one-hot encoding.
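The transpose relationship described above can be checked with another NumPy sketch (again standing in for TensorFlow; for this 1-D input, the axis=0 result is exactly the transpose of the axis=-1 result):

```python
import numpy as np

# axis=-1 layout: depth dimension last, shape (3, 4)
x = np.array([0, 1, 2])
one_hot_1 = np.eye(4)[x]

# axis=0 layout (what tf.one_hot(x, 4, axis=0) returns for a
# 1-D input): the depth-4 dimension comes first, shape (4, 3).
# Each *column* is now a one-hot vector.
one_hot_2 = one_hot_1.T
print(one_hot_2)

# Transposing back recovers the familiar axis=-1 encoding.
assert np.array_equal(one_hot_2.T, one_hot_1)
```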