Understanding tensordot


Problem Description

After I learned how to use einsum, I am now trying to understand how np.tensordot works.

However, I am a little bit lost especially regarding the various possibilities for the parameter axes.

To understand it, as I have never practiced tensor calculus, I use the following example:

A = np.random.randint(2, size=(2, 3, 5))
B = np.random.randint(2, size=(3, 2, 4))

In this case, what are the different possible np.tensordot computations, and how would you compute them manually?

Recommended Answer

The idea with tensordot is pretty simple - We input the arrays and the respective axes along which the sum-reductions are intended. The axes that take part in sum-reduction are removed in the output and all of the remaining axes from the input arrays are spread-out as different axes in the output keeping the order in which the input arrays are fed.
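
Since the question starts from einsum, here is a minimal sketch of that rule in einsum terms (my own illustration, not part of the original answer): give the reduced axes a shared label and write the surviving labels in input order.

import numpy as np

A = np.random.randint(2, size=(2, 6, 5))
B = np.random.randint(2, size=(3, 2, 4))

# Reduce A's axis 0 against B's axis 1: label both 'i', then keep the
# surviving labels in input order -> 'jk' from A followed by 'lm' from B.
td = np.tensordot(A, B, axes=((0,), (1,)))
es = np.einsum('ijk,lim->jklm', A, B)

assert np.array_equal(td, es)   # both have shape (6, 5, 3, 4)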

Let's look at a few sample cases with one and two axes of sum-reduction, and also swap the input places to see how the order is kept in the output.

I. One axis of sum-reduction

Inputs:

In [7]: A = np.random.randint(2, size=(2, 6, 5))
   ...: B = np.random.randint(2, size=(3, 2, 4))
   ...:

Case #1:

In [9]: np.tensordot(A, B, axes=((0),(1))).shape
Out[9]: (6, 5, 3, 4)

A : (2, 6, 5) -> reduction of axis=0
B : (3, 2, 4) -> reduction of axis=1

Output : `(2, 6, 5)`, `(3, 2, 4)` ===(2 gone)==> `(6,5)` + `(3,4)` => `(6,5,3,4)`
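
To see the same case computed manually (a sketch with explicit loops, added for illustration), note that every output element is one sum over the shared length-2 axis:

import numpy as np

A = np.random.randint(2, size=(2, 6, 5))
B = np.random.randint(2, size=(3, 2, 4))

out = np.tensordot(A, B, axes=((0,), (1,)))

# Naive reference: the reduced axis i disappears; A's surviving axes
# (j, k) come first in the output, then B's surviving axes (l, m).
ref = np.empty((6, 5, 3, 4), dtype=A.dtype)
for j in range(6):
    for k in range(5):
        for l in range(3):
            for m in range(4):
                ref[j, k, l, m] = sum(A[i, j, k] * B[l, i, m] for i in range(2))

assert np.array_equal(out, ref)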

Case #2 (same as case #1 but the inputs are fed swapped):

In [8]: np.tensordot(B, A, axes=((1),(0))).shape
Out[8]: (3, 4, 6, 5)

B : (3, 2, 4) -> reduction of axis=1
A : (2, 6, 5) -> reduction of axis=0

Output : `(3, 2, 4)`, `(2, 6, 5)` ===(2 gone)==> `(3,4)` + `(6,5)` => `(3,4,6,5)`.
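
In other words, swapping the inputs never changes the values, only the axis order; a quick check (my own addition, using the shapes above):

import numpy as np

A = np.random.randint(2, size=(2, 6, 5))
B = np.random.randint(2, size=(3, 2, 4))

c1 = np.tensordot(A, B, axes=((0,), (1,)))   # shape (6, 5, 3, 4)
c2 = np.tensordot(B, A, axes=((1,), (0,)))   # shape (3, 4, 6, 5)

# Same numbers, with B's surviving axes moved to the front.
assert np.array_equal(c2, c1.transpose(2, 3, 0, 1))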

II. Two axes of sum-reduction

Inputs:

In [11]: A = np.random.randint(2, size=(2, 3, 5))
    ...: B = np.random.randint(2, size=(3, 2, 4))
    ...: 

Case #1:

In [12]: np.tensordot(A, B, axes=((0,1),(1,0))).shape
Out[12]: (5, 4)

A : (2, 3, 5) -> reduction of axis=(0,1)
B : (3, 2, 4) -> reduction of axis=(1,0)

Output : `(2, 3, 5)`, `(3, 2, 4)` ===(2,3 gone)==> `(5)` + `(4)` => `(5,4)`
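
Expressed with einsum (an equivalent I'm adding for comparison), both reduced axes get shared labels:

import numpy as np

A = np.random.randint(2, size=(2, 3, 5))
B = np.random.randint(2, size=(3, 2, 4))

# A's axis 0 (length 2) pairs with B's axis 1, and A's axis 1
# (length 3) pairs with B's axis 0.
td = np.tensordot(A, B, axes=((0, 1), (1, 0)))
es = np.einsum('ijk,jil->kl', A, B)

assert np.array_equal(td, es)   # both have shape (5, 4)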

Case #2:

In [14]: np.tensordot(B, A, axes=((1,0),(0,1))).shape
Out[14]: (4, 5)

B : (3, 2, 4) -> reduction of axis=(1,0)
A : (2, 3, 5) -> reduction of axis=(0,1)

Output : `(3, 2, 4)`, `(2, 3, 5)` ===(2,3 gone)==> `(4)` + `(5)` => `(4,5)`
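
And as in the one-axis cases, the swap only reorders axes; here the 2-D output is simply transposed (a quick check, my own addition):

import numpy as np

A = np.random.randint(2, size=(2, 3, 5))
B = np.random.randint(2, size=(3, 2, 4))

c1 = np.tensordot(A, B, axes=((0, 1), (1, 0)))   # shape (5, 4)
c2 = np.tensordot(B, A, axes=((1, 0), (0, 1)))   # shape (4, 5)

assert np.array_equal(c2, c1.T)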

We can extend this to as many axes as possible.
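
For instance, a hypothetical three-axis reduction (shapes invented here, not from the original post) follows the same pattern: each listed axis of the first array must match the corresponding listed axis of the second, and only the unlisted axes survive.

import numpy as np

A = np.random.randint(2, size=(2, 3, 4, 5))
B = np.random.randint(2, size=(4, 2, 3, 6))

# Pair A's axes (0, 1, 2) with B's axes (1, 2, 0): lengths 2, 3, 4 match.
out = np.tensordot(A, B, axes=((0, 1, 2), (1, 2, 0)))
print(out.shape)   # (5, 6)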
