Python tensor product


Problem Description

I have the following problem. For performance reasons I use numpy.tensordot and thus have my values stored in tensors and vectors. One of my calculations looks like this:

<sigma_i> = Σ_{j,k} R(i,j,k) <w_j> <w_k>

<w_j> is the expectation value of w_j and <sigma_i> the expectation value of sigma_i. (Perhaps I should not have called it sigma, because it has nothing to do with the standard deviation.) Now for further calculations I also need the variance. To get the variance I need to calculate:

Σ_{j,k,l,m} R(i,j,k) R(i,l,m) <w_j> <w_k> <w_l> <w_m>

Now when I implemented the first formula in Python with numpy.tensordot, I was really happy when it worked, because this is quite abstract and I am not used to tensors. The code looks like this:

erc = numpy.tensordot(numpy.tensordot(re, ewp, axes=1), ewp, axes=1)
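
For concreteness, here is a minimal self-contained sketch of this step; the size l = 4 and the random test data for re and ewp are illustrative assumptions, not values from the question:

import numpy

l = 4                                   # assumed size, for illustration only
rng = numpy.random.default_rng(0)
re = rng.random((l, l, l))              # R(i,j,k): an l x l x l tensor
ewp = rng.random(l)                     # <w_j>: length-l vector of expectation values

# axes=1 contracts the last axis of the left argument with ewp:
# the inner tensordot sums over k, the outer one over j, leaving a length-l vector.
erc = numpy.tensordot(numpy.tensordot(re, ewp, axes=1), ewp, axes=1)
print(erc.shape)                        # (4,)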

Now this works, and my problem is to write down the correct form for the second formula. One of my attempts was:

serc = numpy.tensordot(numpy.tensordot(numpy.tensordot(numpy.tensordot(
    numpy.tensordot(re, re, axes=1), ewp, axes=1), ewp, axes=1),
    ewp, axes=1), ewp, axes=1)

But this gives me a scalar instead of a vector. Another try was:

serc = numpy.einsum('m, m', numpy.einsum('lm, l -> m',
    numpy.einsum('klm, k -> lm', numpy.einsum('jklm, j -> klm',
    numpy.einsum('ijk, ilm -> jklm', re, re), ewp), ewp), ewp), ewp)

The vectors have length l and the tensor has dimensions l * l * l. I hope my problem is understandable, and thank you in advance!

The first formula can also be written down in Python like: erc2 = numpy.einsum('ik, k -> i', numpy.einsum('ijk, k -> ij', re, ewp), ewp)
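
As a quick sanity check (reusing the illustrative re and ewp from the sketch above), the tensordot and einsum formulations can be compared numerically; this check is an addition for illustration, not part of the original question:

erc2 = numpy.einsum('ik, k -> i', numpy.einsum('ijk, k -> ij', re, ewp), ewp)
print(numpy.allclose(erc, erc2))        # True: both contract j and k against <w>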

Answer

You could do that with a series of reductions, like so -

p1 = np.tensordot(re, ewp, axes=(1,0))   # p1[i,k] = sum_j R[i,j,k] <w_j>
p2 = np.tensordot(p1, ewp, axes=(1,0))   # p2[i]   = sum_k p1[i,k] <w_k>
out = p2**2                              # element-wise square gives the final vector
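
To verify this reduction against the original quadruple sum, one can compare it with a direct (but much slower) einsum over all four contracted indices; this cross-check is my own addition, using the same illustrative data as above:

import numpy as np

# Brute-force reference: ref[i] = sum_jklm R[i,j,k] R[i,l,m] <w_j> <w_k> <w_l> <w_m>
ref = np.einsum('ijk, ilm, j, k, l, m -> i', re, re, ewp, ewp, ewp, ewp)
print(np.allclose(out, ref))             # True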

Explanation

First off, we could separate it out into two groups of operations:

Group 1: R(i,j,k) , < wj > , < wk > 
Group 2: R(i,l,m) , < wl > , < wm > 

The operations performed within these two groups are identical. So, one could compute for one group and derive the final output from it.

Now, to compute R(i,j,k) , < wj > , < wk > and end up with (i), we need to perform element-wise multiplication along the second and third axes of R with w and then perform sum-reduction along those axes. Here, we are doing it in two steps with two tensordots -

[1] R(i,j,k) , < wj > to get p1(i,k)
[2] p1(i,k) , < wk > to get p2(i)
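
A short shape trace (again assuming the illustrative l = 4 data from the sketches above) makes the two reductions concrete:

print(p1.shape)   # (4, 4): step [1] reduced away j
print(p2.shape)   # (4,):   step [2] reduced away k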

Thus, we end up with a vector p2. Similarly for the second group, the result would be an identical vector. So, to get the final output, we just need to square that vector element-wise, i.e. p2**2.

