Tensor dot: deep learning with Python
Question
I am currently reading Deep Learning with Python, and I am not sure what the author is trying to say on page 42. The link is here
More generally, you can take the dot product between higher-dimensional tensors, following the same rules for shape compatibility as outlined earlier for the 2D case:
(a, b, c, d) . (d,) -> (a, b, c)
(a, b, c, d) . (d, e) -> (a, b, c, e)
I'm not sure what he is trying to say here. I do understand how matrix multiplication works, but the two lines above are not clear to me.
Answer
Following this notation, matrix multiplication is
(a, b) * (b, c) -> (a, c)
When the second matrix is a vector, it simplifies to
(a, b) * (b, ) -> (a, )
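Both 2D cases can be checked directly with NumPy's `np.dot` (a minimal sketch; the array contents are arbitrary, only the shapes matter):

```python
import numpy as np

# (a, b) * (b, c) -> (a, c): ordinary matrix multiplication
A = np.ones((3, 4))
B = np.ones((4, 5))
print(np.dot(A, B).shape)  # (3, 5)

# (a, b) * (b,) -> (a,): the second operand is a vector,
# so the shared dimension b is summed away
v = np.ones(4)
print(np.dot(A, v).shape)  # (3,)
```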
Now, the formulas from the book simply explain how to extend this operation when the first or second operand has extra dimensions. What matters is that the two have a matching dimension (the last dim of the first == the first dim of the second, without reshaping), along which the tensors can be multiplied; that shared dimension is summed away and disappears from the result. Hence the formula for the result shape:
(a, b, c, d) * (d, e) -> (a, b, c, e)
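The same check works for the higher-dimensional rules quoted in the question, since NumPy's `np.dot` follows exactly this convention (a sketch with arbitrary shapes; only the last/first dims need to match):

```python
import numpy as np

x = np.ones((2, 3, 4, 5))  # shape (a, b, c, d)

# (a, b, c, d) . (d,) -> (a, b, c)
v = np.ones(5)
print(np.dot(x, v).shape)  # (2, 3, 4)

# (a, b, c, d) . (d, e) -> (a, b, c, e)
m = np.ones((5, 6))
print(np.dot(x, m).shape)  # (2, 3, 4, 6)
```

In both cases the shared dimension `d` is eliminated, and the remaining dimensions of each operand are concatenated to form the result shape.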