Discrepancy between Matlab del2 and Matlab gradient of gradient

Question

Can anyone explain why I get such dramatically different results for the Laplace operator in Matlab when I use

laplacian = del2(image);

[x, y] = gradient(image);
[xx, xy] = gradient(x);
[yx, yy] = gradient(y);
laplacian = xx + yy;

Shouldn't these come to the same thing? They get particularly divergent when one includes a dx term.

Putting my example up here in case it helps: I have a test field consisting of

 [5; 2.5+2.5i; 5i; -2.5+2.5i; -5; -2.5-2.5i; -5i; 2.5-2.5i] 

times its transpose (I can post the whole matrix if it helps). The inner block (3:6, 3:6) of the del2() of this field is:

[-2.5           -0.625-0.625i  -2.5i           0.625-0.625i ;
 -0.625+0.625i   0             -0.625+0.625i   0            ;
  2.5i          -0.625+0.625i  -2.5           -0.625+0.625i ;
  0.625+0.625i   0             -0.625+0.625i   0            ] 

while the inner block (3:6, 3:6) of the xx + yy is:

[-5             -2.5-2.5i      -5i            -2.5-2.5i     ; 
 -2.5+2.5i      -2.5           -2.5-2.5i      -2.5i         ; 
  5i            -2.5+2.5i      -5             -2.5-2.5i     ; 
  2.5+2.5i       2.5i          -2.5+2.5i      -2.5          ]

which as you can see will make a dramatic difference in any further equations. Might anyone have an explanation, thanks very much!
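In case it is useful, the field above can be reconstructed with something like the following (a sketch only; it assumes the non-conjugate transpose .', since the description above just says "transpose"):

v = [5; 2.5+2.5i; 5i; -2.5+2.5i; -5; -2.5-2.5i; -5i; 2.5-2.5i];
F = v * v.';               % 8x8 complex test field (assumed construction)

L_del2 = del2(F);          % del2's estimate of the Laplacian
[x, y] = gradient(F);
[xx, xy] = gradient(x);
[yx, yy] = gradient(y);
L_grad2 = xx + yy;         % gradient-of-gradient estimate

L_del2(3:6, 3:6)           % inner blocks, for comparison with the values above
L_grad2(3:6, 3:6)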

Answer

If you look closely at Matlab's documentation, the laplacian of f at (x,y), del2(f(x,y)), is computed using only (x,y) and its nearest neighbours: x+1, x-1, y+1, y-1.

The same goes for the gradient function (and the divergence, which explicitly uses the gradient function). Computing the gradient twice involves the nearest neighbours of the nearest neighbours. Therefore div(grad(f(x,y))) is actually computed using (x,y) and x+2, x-2, y+2, y-2. Hence the difference.
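To make the stencil widths concrete, here is a minimal 1-D sketch (my own test data, not the asker's field; unit spacing assumed). Applying gradient twice collapses to a second difference over points two apart, whereas a nearest-neighbour second difference, the kind of stencil del2 is built from, only touches the adjacent points:

% gradient uses centred differences:  g(i)  = ( f(i+1) - f(i-1) ) / 2
% so applying it twice gives:         gg(i) = ( f(i+2) - 2*f(i) + f(i-2) ) / 4
% while the nearest-neighbour second difference is f(i+1) - 2*f(i) + f(i-1)
f  = (0:9).^4;                   % quartic test data, unit spacing
gg = gradient(gradient(f));      % wide +/-2 stencil
d2 = conv(f, [1 -2 1], 'same');  % nearest-neighbour stencil, written by hand
[gg(3:8); d2(3:8)]               % interior values already disagree (by 6 here)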

The greater the grid spacing, the greater the discrepancy between these two calculations will be.
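A quick way to see that spacing dependence (again my own smooth test field, not the field from the question). Note that for a matrix del2 returns the Laplacian divided by 4, per its documentation, so it is multiplied by 4 here to put the two estimates on the same scale:

for h = [0.5 0.05]                 % coarse and fine grid spacing
    [X, Y] = meshgrid(0:h:4);
    U = sin(X) .* cos(Y);          % analytic Laplacian is -2*U
    L1 = 4 * del2(U, h);           % nearest-neighbour stencil
    [gx, gy] = gradient(U, h);
    [gxx, ~] = gradient(gx, h);
    [~, gyy] = gradient(gy, h);
    L2 = gxx + gyy;                % effectively a +/-2h stencil
    k = 3:size(U,1)-2;             % interior (borders use one-sided formulas)
    D = L1(k,k) - L2(k,k);
    disp(max(abs(D(:))))           % discrepancy shrinks as h shrinks
end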
