What is the significance of omega in the successive over-relaxation (SOR) method?


Problem description


I have the following matrix

I have transformed this into a strictly diagonally dominant matrix and applied Gauss-Seidel and the successive over-relaxation (SOR) method with omega = 1.1 and a tolerance of epsilon = 1e-4, with the convergence formula below:

By solving this manually in Python (not using a linear algebra library), I found that both methods take the same number of iterations (6). But as I understand it, if the matrix is convergent under Gauss-Seidel and 1 < omega < 2 is used for the SOR method, then SOR should take fewer iterations, which is not happening here.

So, is my understanding correct? Is the SOR method guaranteed to take fewer iterations?

Solution

This is actually a question I had myself while trying to solve the same problem. Here I will include the results of the 6th iteration from both the GS and SOR methods, and analyze why I think this is the case. For both methods the initial vector is x = (0, 0, 0, 0). Practically speaking, we see that the L-infinity norm is different for each method (see below).

For Gauss-Seidel:

The solution vector in iteration 6 is: 
[[ 1.0001]
[ 2.    ]
[-1.    ]
[ 1.    ]]
The L infinity norm in iteration 6 is: [4.1458e-05]

For SOR:

The solution vector in iteration 6 is: 
[[ 1.0002]
[ 2.0001]
[-1.0001]
[ 1.    ]]
The L infinity norm in iteration 6 is: [7.8879e-05]
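A minimal sketch of the solver that produces results of this kind. Since the original matrix is only shown as an image, the system below is a representative strictly diagonally dominant 4x4 example (a classic textbook one whose exact solution is (1, 2, -1, 1)); the stopping criterion is the L-infinity norm of the update between iterations, matching the norms reported above:

```python
import numpy as np

def sor(A, b, omega=1.1, eps=1e-4, max_iter=100):
    """Solve A x = b with SOR; omega = 1 reduces to Gauss-Seidel."""
    n = len(b)
    x = np.zeros(n)                                    # initial guess (0, ..., 0)
    for k in range(1, max_iter + 1):
        x_old = x.copy()
        for i in range(n):
            # use the newest values for j < i, previous-iterate values for j > i
            sigma = A[i, :i] @ x[:i] + A[i, i+1:] @ x_old[i+1:]
            x[i] = (1 - omega) * x[i] + omega * (b[i] - sigma) / A[i, i]
        # stop when the L-infinity norm of the update drops below eps
        if np.max(np.abs(x - x_old)) < eps:
            return x, k
    return x, max_iter

# Hypothetical strictly diagonally dominant system (illustrative only;
# the matrix in the original post is an image and is not reproduced here).
A = np.array([[10., -1.,  2.,  0.],
              [-1., 11., -1.,  3.],
              [ 2., -1., 10., -1.],
              [ 0.,  3., -1.,  8.]])
b = np.array([6., 25., -11., 15.])

x_gs,  it_gs  = sor(A, b, omega=1.0)   # Gauss-Seidel
x_sor, it_sor = sor(A, b, omega=1.1)   # SOR with omega = 1.1
```

With omega = 1.0 the update reduces exactly to Gauss-Seidel, so one function covers both experiments.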

Academically speaking "SOR can provide a convenient means to speed up both the Jacobi and Gauss-Seidel methods of solving our linear system. The parameter ω is referred to as the relaxation parameter. Clearly for ω = 1 we restore the original equations. If ω < 1 we talk of under-relaxation, and this can be important for some systems which will not converge under normal Jacobi relaxation. If ω > 1, we have over-relaxation, with which we will be more concerned. It was discovered during the years of hand computation that convergence is faster if we go beyond the Gauss-Seidel correction. Roughly speaking, those approximations stay on the same side of the solution x. An overrelaxation factor ω moves us closer to the solution. With ω = 1, we recover Gauss-Seidel; with ω > 1, the method is known as SOR. The optimal choice of ω never exceeds 2. It is often in the neighborhood of 1.9."
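The component-wise update the quote describes can be written out explicitly (the standard SOR formulation, with x^(k) denoting the k-th iterate):

```latex
x_i^{(k+1)} = (1 - \omega)\, x_i^{(k)}
  + \frac{\omega}{a_{ii}} \Big( b_i
  - \sum_{j < i} a_{ij}\, x_j^{(k+1)}
  - \sum_{j > i} a_{ij}\, x_j^{(k)} \Big)
```

Setting ω = 1 makes the first term vanish and recovers the Gauss-Seidel update; ω > 1 pushes each component further in the direction of the Gauss-Seidel correction.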

For more information on ω you can also refer to Strang, G., 2006, page 410 of the book "Linear Algebra and Its Applications", as well as to the paper A rapid finite difference algorithm, utilizing successive over-relaxation to solve the Poisson–Boltzmann equation.

Based on the academic description above, I believe both of these methods take 6 iterations because 1.1 is not the optimal ω value. Changing ω to a value closer to the optimum could yield a better result, as the whole point of over-relaxation is to discover this optimal ω. (Again, my belief is that 1.1 is not the optimal omega; I will update you once I do the calculation.) The image is from Strang, G., 2006, "Linear Algebra and Its Applications", 4th edition, page 411.

Edit: Indeed, by plotting iteration counts against omega for SOR, it seems that my optimal omega is in the range 1.0300 to 1.0440, and the whole range of these omegas gives me five iterations, which is more efficient than pure Gauss-Seidel at omega = 1, which takes 6 iterations.
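The sweep described in the edit can be sketched as follows. This again uses a hypothetical diagonally dominant system in place of the original (image-only) matrix, so the optimal omega it finds will not necessarily match the 1.03–1.044 range reported above:

```python
import numpy as np

def sor_iteration_count(A, b, omega, eps=1e-4, max_iter=100):
    """Return the number of SOR iterations needed to reach tolerance eps."""
    n = len(b)
    x = np.zeros(n)
    for k in range(1, max_iter + 1):
        x_old = x.copy()
        for i in range(n):
            sigma = A[i, :i] @ x[:i] + A[i, i+1:] @ x_old[i+1:]
            x[i] = (1 - omega) * x[i] + omega * (b[i] - sigma) / A[i, i]
        if np.max(np.abs(x - x_old)) < eps:
            return k
    return max_iter

# Hypothetical system (illustrative only): sweep omega over [1.0, 1.2)
# and report which value needs the fewest iterations.
A = np.array([[10., -1.,  2.,  0.],
              [-1., 11., -1.,  3.],
              [ 2., -1., 10., -1.],
              [ 0.,  3., -1.,  8.]])
b = np.array([6., 25., -11., 15.])

counts = {round(w, 3): sor_iteration_count(A, b, w)
          for w in np.arange(1.00, 1.20, 0.005)}
best_omega = min(counts, key=counts.get)
print(f"best omega = {best_omega}, iterations = {counts[best_omega]}")
```

Plotting `counts` as a curve (iterations versus omega) gives the kind of graphical representation the edit mentions, with a flat minimum around the optimal range.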
