Difference in eigenvector transformations: Mathematica vs. SciPy

Question

Similar questions have been asked previously here but none seem to answer my example. I compute the eigenvalues and eigenvectors of a matrix A using Mathematica and SciPy; the eigenvalues agree but this is not the case for the eigenvectors:

(1) the lowest (eigenvalued) eigenvector agrees

(2) the remaining corresponding eigenvectors of Mathematica and SciPy are not related by a multiplicative factor

(3) I can compute the transformation matrix T sending SciPy's eigenvector to Mathematica's corresponding eigenvector using the outer product

T = numpy.outer(MathematicaEigenvector, SciPyEigenvector)

such that

MathematicaEigenvector = numpy.dot(T, SciPyEigenvector)
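(For a single pair of unit-norm vectors this outer-product construction works because T s = m (sᵀs) = m. A minimal sketch with hypothetical stand-in vectors, not the actual eigenvectors from the question:)

```python
import numpy as np

# Hypothetical unit-norm stand-ins for one Mathematica/SciPy eigenvector pair.
s = np.array([1.0, 2.0, 2.0]) / 3.0   # "SciPy" vector, norm 1
m = np.array([2.0, -2.0, 1.0]) / 3.0  # "Mathematica" vector, norm 1

T = np.outer(m, s)
# T @ s = m * (s . s) = m, because s has unit norm.
assert np.allclose(T @ s, m)
```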

I would expect that the transformation matrix T should be the same for all SciPy-Mathematica eigenvector pairs, because T is simply the matrix relating the eigenvectors of the matrix inv(T).A.T to those of the original matrix A. However, performing step (3) for each of the eigenvector pairs gives different T matrices.

Can somebody explain this? I can post the matrices if required.

UPDATE: The python code and matrices are as follows:

import numpy as np
import scipy.linalg

S = [[0., -1, -1, -1, 0, 0, -1, 0, 0],
     [-1, 0., -1, 0, -1, 0, 0, -1, 0],
     [-1, -1, 0., 0, 0, -1, 0, 0, -1],
     [-1, 0, 0, 0., -1, -1, -1, 0, 0],
     [0, -1, 0, -1, 0., -1, 0, -1, 0],
     [0, 0, -1, -1, -1, 0, 0, 0, -1],
     [-1, 0, 0, -1, 0, 0, 0., -1, -1],
     [0, -1, 0, 0, -1, 0, -1, 0., -1],
     [0, 0, -1, 0, 0, -1, -1, -1, 0.]]

eig_val, eig_vec = scipy.linalg.eig(S)
idx = eig_val.argsort()
eig_val = np.array(eig_val[idx])
eig_vec = np.array(eig_vec[:, idx])
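(As a sanity check, every column SciPy returns does satisfy S v = λ v; the disagreement with Mathematica is purely a choice of basis inside the degenerate eigenspaces. A sketch rerunning the computation above:)

```python
import numpy as np
import scipy.linalg

S = np.array([[0, -1, -1, -1, 0, 0, -1, 0, 0],
              [-1, 0, -1, 0, -1, 0, 0, -1, 0],
              [-1, -1, 0, 0, 0, -1, 0, 0, -1],
              [-1, 0, 0, 0, -1, -1, -1, 0, 0],
              [0, -1, 0, -1, 0, -1, 0, -1, 0],
              [0, 0, -1, -1, -1, 0, 0, 0, -1],
              [-1, 0, 0, -1, 0, 0, 0, -1, -1],
              [0, -1, 0, 0, -1, 0, -1, 0, -1],
              [0, 0, -1, 0, 0, -1, -1, -1, 0]], dtype=float)

eig_val, eig_vec = scipy.linalg.eig(S)
idx = eig_val.argsort()
eig_val, eig_vec = eig_val[idx], eig_vec[:, idx]

# Each column v_k satisfies S v_k = lambda_k v_k, whichever basis
# scipy happened to pick inside the repeated-eigenvalue subspaces.
assert np.allclose(S @ eig_vec, eig_vec * eig_val)
```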

The Mathematica eigenvectors are:

[-0.333333, -0.333333, -0.333333, -0.333333, -0.333333, -0.333333, -0.333333, -0.333333, -0.333333], 
[0.0385464, 0.570914,   0.371276, -0.570914, -0.0385464, -0.238184, -0.33273, 0.199638,   0.], 
[0.570246, -0.0269007, 0.197029,   0.0269007, -0.570246, -0.346316, 0.373217, -0.22393,   0.], 
[-0.0816497, 0.0816497, -0.489898, -0.0816497,   0.0816497, -0.489898, 0.408248, 0.571548,   0.], 
[-0.333333, -0.333333, 0.166667, -0.333333, -0.333333,   0.166667, 0.166667, 0.166667, 0.666667], 
[-0.288675, 0.288675,   2.498e-16, -0.288675, 0.288675, -1.94289e-16,   0.57735, -0.57735, 0.],
[-0.5, 0.5, -2.04678e-16, 0.5, -0.5,   2.41686e-16, -9.25186e-17, 5.55112e-17, 0.], 
[0.166667,   0.166667, -0.333333, 0.166667,   0.166667, -0.333333, -0.333333, -0.333333, 0.666667], 
[0.288675,   0.288675, -0.57735, -0.288675, -0.288675, 0.57735,   4.02456e-16, -2.08167e-16, 0.]

Whereas the SciPy eigenvectors are:

[-0.33333333 -0.33333333 -0.33333333 -0.33333333 -0.33333333 -0.33333333 -0.33333333 -0.33333333 -0.33333333]
[ 0.12054181 -0.17813781  0.50013951  0.08577902 -0.21290061  0.4653767 -0.2872389  -0.58591853  0.0923588 ]
[ 0.12191583 -0.21327897  0.26215377 -0.28683603 -0.62203084 -0.1465981 0.35987707  0.02468226  0.500115  ]
[ 0.66666667  0.16666667  0.16666667  0.16666667 -0.33333333 -0.33333333 0.16666667 -0.33333333 -0.33333333]
[-0.16604424 -0.59504716 -0.43689399  0.43294845  0.00394553  0.16209871 0.43294845  0.00394553  0.16209871]
[-0.01305419  0.07446538 -0.0614112  -0.54881726  0.36347168  0.18534558 0.56187145 -0.43793706 -0.12393438]
[-0.66666667  0.33333333  0.33333333  0.33333333 -0.16666667 -0.16666667 0.33333333 -0.16666667 -0.16666667]
[-0.21052033  0.65306873 -0.4425484   0.10526016 -0.32653437  0.2212742 0.10526016 -0.32653437  0.2212742 ]
[-0.02303417  0.0714558  -0.04842162  0.09679298  0.41311466 -0.50990763 -0.0737588  -0.48457045  0.55832926]
[ 4.67737437  0.12612917  0.75157798 -0.09378424  0.91674876  2.36234989 1.03706802 -9.0725069   0.        ]

Both the above are ordered by the eigenvalues [-4.+0.j, -1.+0.j, -1.+0.j, -1.+0.j, -1.+0.j, 2.+0.j, 2.+0.j, 2.+0.j, 2.+0.j]

Answer

I believe the reason is as follows: because there are repeated eigenvalues, the transformation matrix T must act on a linear combination of the eigenvectors in that degenerate subspace, as opposed to the individual eigenvectors. That is, my first code snippet should be modified to:

T = numpy.outer(MathematicaEigenvectorSubspace, SciPyEigenvectorSubspace)

I haven't explicitly checked that this works, though, by finding the linear combination that makes the two subspaces equivalent.
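(One way to check this claim is with a toy example: two different orthonormal bases of the same degenerate subspace have identical orthogonal projectors V Vᵀ, and the in-subspace change of basis can be recovered by least squares. All names and dimensions below are hypothetical:)

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical orthonormal basis of a 4-dim degenerate eigenspace in R^9,
# standing in for the block of SciPy eigenvectors with eigenvalue -1.
S_block, _ = np.linalg.qr(rng.standard_normal((9, 4)))
# Pretend Mathematica returned a different orthonormal basis of the same span.
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))
M_block = S_block @ Q

# The individual columns differ, but the projectors onto the span coincide...
assert not np.allclose(S_block, M_block)
assert np.allclose(S_block @ S_block.T, M_block @ M_block.T)

# ...and the in-subspace change of basis C (M_block = S_block @ C)
# is recovered by least squares.
C, *_ = np.linalg.lstsq(S_block, M_block, rcond=None)
assert np.allclose(S_block @ C, M_block)
```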
