Python eigenvectors: differences among numpy.linalg, scipy.linalg and scipy.sparse.linalg


Problem description


Scipy and Numpy have between them three different functions for finding eigenvectors for a given square matrix, these are:

  1. numpy.linalg.eig(a)
  2. scipy.linalg.eig(a), and
  3. scipy.sparse.linalg.eig(A, k)
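
For concreteness, a minimal sketch of what calling each of these looks like (note that in current SciPy the sparse routine is spelled scipy.sparse.linalg.eigs rather than eig; the small test matrices and k=2 below are just placeholders, not part of the question):

```python
import numpy as np
import scipy.linalg
import scipy.sparse
import scipy.sparse.linalg

a = np.random.rand(5, 5)           # small dense test matrix
A = scipy.sparse.csr_matrix(a)     # the same matrix stored sparsely

# (1) NumPy dense solver: all eigenvalues and eigenvectors
w1, v1 = np.linalg.eig(a)

# (2) SciPy dense solver: same idea, eigenvalues come back as complex numbers
w2, v2 = scipy.linalg.eig(a)

# (3) SciPy sparse solver (eigs in current SciPy): only k < n eigenpairs
w3, v3 = scipy.sparse.linalg.eigs(A, k=2)

print(w1.shape, w2.shape, w3.shape)   # (5,) (5,) (2,)
```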


Focusing specifically on the case where the optional arguments I've omitted from the last two are left at their defaults and a/A is real-valued, I am curious about the differences among these three, which the documentation leaves ambiguous - especially:

  • Why does (3) have a note that it can't find all eigenvectors?
  • Why must the other two compute all solutions - why don't they take a k argument? (see the sketch after this list)
  • (1) has a note saying that the eigenvalues are returned in no particular order; (3) has an optional argument to control the order. Does (2) make any guarantees about this?
  • Does (3) assume that A is sparse? (mathematically speaking, rather than being represented as a scipy sparse matrix) Can it be inefficient, or even give wrong results, if this assumption doesn't hold?
  • Are there other factors I should consider when choosing among these?
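
As a hedged illustration of two of the points above: the dense routines return all n eigenvalues, and (at least for (1)) in no documented order, so any ordering is safest to impose yourself afterwards, while the sparse routine (eigs in current SciPy) takes k plus a which argument to say which eigenvalues you want. The diagonal test matrix below is an arbitrary example:

```python
import numpy as np
import scipy.sparse.linalg

a = np.diag([1.0, 5.0, 3.0, 4.0, 2.0])    # eigenvalues are just the diagonal entries

# Dense: all 5 eigenvalues are returned, in no documented order -> sort manually
w, v = np.linalg.eig(a)
order = np.argsort(w)[::-1]                # descending by eigenvalue
w, v = w[order], v[:, order]

# Sparse-style (eigs in current SciPy): request only the k=2 of largest magnitude
w_top, v_top = scipy.sparse.linalg.eigs(a, k=2, which='LM')

print(w[:2])    # [5. 4.]
print(w_top)    # the same two eigenvalues (returned as complex numbers)
```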

Recommended answer


The special behaviour of the third one has to do with the Lanczos algorithm, which works very well with sparse matrices. The documentation of scipy.sparse.linalg.eig says it uses a wrapper for ARPACK, which in turn uses "the Implicitly Restarted Arnoldi Method (IRAM) or, in the case of symmetric matrices, the corresponding variant of the Lanczos algorithm." (1).
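
To make that concrete, here is a rough sketch of using the ARPACK wrapper on a genuinely sparse matrix; the spelling eigsh (the symmetric/Hermitian entry point in current SciPy), the matrix size, density and k below are my own choices, not part of the original answer:

```python
import scipy.sparse
import scipy.sparse.linalg

n = 10_000
# A large, very sparse random matrix, symmetrised so the Lanczos variant applies
M = scipy.sparse.random(n, n, density=1e-4, format='csr', random_state=0)
A = (M + M.T) * 0.5

# ARPACK behind scipy.sparse.linalg.eigsh (the symmetric entry point):
# it only needs matrix-vector products, never a dense copy of A
w, v = scipy.sparse.linalg.eigsh(A, k=5, which='LA')   # 5 algebraically largest
print(w)
print(v.shape)   # (10000, 5): one column per requested eigenvector
```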


Now, the Lanczos algorithm has the property that it works better for large eigenvalues (in fact, it uses the maximum eigenvalue):


In practice, this simple algorithm does not work very well for computing very many of the eigenvectors because any round-off error will tend to introduce slight components of the more significant eigenvectors back into the computation, degrading the accuracy of the computation. (2)


So, whereas the Lanczos algorithm is only an approximation, I guess the other two methods use algorithms to find the exact eigenvalues -- and seemingly all of them, which probably depends on the algorithms used, too.
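
One hedged way to see this in practice is to compare the k largest eigenvalues from a dense solver, sorted by hand, with what the ARPACK-based routine returns; the symmetric test matrix, k=3 and the use of eigvalsh/eigsh here are arbitrary choices for illustration:

```python
import numpy as np
import scipy.linalg
import scipy.sparse.linalg

rng = np.random.default_rng(0)
a = rng.standard_normal((200, 200))
a = a + a.T                                   # symmetric, so eigenvalues are real

# Dense route: compute every eigenvalue, then keep the 3 largest
w_dense = np.sort(scipy.linalg.eigvalsh(a))[::-1][:3]

# Iterative route: ask ARPACK directly for the 3 algebraically largest
w_arpack = scipy.sparse.linalg.eigsh(a, k=3, which='LA', return_eigenvectors=False)
w_arpack = np.sort(w_arpack)[::-1]

# The "approximate" iterative result matches the direct one to working precision
print(np.allclose(w_dense, w_arpack))         # True
```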

