Incremental conjugate gradient algorithm
Problem description
I wrote this piece of code, which should implement what is described here:
The conjugate gradient method as an iterative method (Wikipedia): http://en.wikipedia.org/wiki/Conjugate_gradient_method#The_conjugate_gradient_method_as_an_iterative_method
But after some iterations the variable denomAlpha goes to zero, so I get a NaN on alpha.
So what is wrong with my algorithm?
import Jama.Matrix;

public class ConjugateGrad {
    private static final int MAX_IT = 20;
    private static final int MAX_SIZE = 50;

    public static void main(String[] args) {
        // Note: Matrix.random is in general NOT symmetric positive definite,
        // which the conjugate gradient method requires.
        Matrix A = Matrix.random(MAX_SIZE, MAX_SIZE);
        Matrix b = Matrix.random(MAX_SIZE, 1);
        Matrix x = new Matrix(MAX_SIZE, 1);        // zero initial guess
        Matrix r = b.minus(A.times(x));            // r0 = b - A*x0
        Matrix p = r;                              // p0 = r0
        Matrix rTrasp_r = r.transpose().times(r);  // r^T * r
        for (int i = 0; i < MAX_IT; i++) {
            Matrix denomAlpha = p.transpose().times(A.times(p)); // p^T * A * p
            double numeratorAlpha = rTrasp_r.get(0, 0);
            double Alpha = numeratorAlpha / denomAlpha.get(0, 0);
            System.out.println("Alpha: \t" + Alpha);
            x = x.plus(p.times(Alpha));
            r = r.minus(A.times(p)); // <-- the bug: the Alpha factor is missing (see the fix below)
            Matrix rNew = r.transpose().times(r);
            if (Math.sqrt(rNew.get(0, 0)) < 1.0e-6) {
                System.out.println("\nStop at iteration: " + i);
                System.out.println("Norm is: " + rNew.norm1());
                break;
            }
            double Beta = rNew.get(0, 0) / rTrasp_r.get(0, 0);
            p = r.plus(p.times(Beta));
            rTrasp_r = rNew;
            System.out.println("x:");
            x.print(10, 6);   // JAMA's formatted print (Matrix has no toString)
        }
        System.out.println("x:");
        x.print(10, 6);
        System.out.println("b - A*x:");
        b.minus(A.times(x)).print(10, 6);
    }
}
The same thing happens with these parameters:
double[][] matrixA = {{4,1},{1,3}};
Matrix A = Matrix.constructWithCopy(matrixA);
double[][] vectorb = {{1},{2}};
Matrix b = Matrix.constructWithCopy(vectorb);
double[][] d = {{2},{1}};
Matrix x = Matrix.constructWithCopy(d);
At the first step of the algorithm things are good, but at the second step they are not...
r is: -8.0, -3.0
Alpha: 0.22054380664652568
Beta: 12.67123287671233
x is: 0.2356495468277946, 0.33836858006042303

Second step:
Alpha: 0.0337280177221555
Beta: 159.11259655226627
x is: -2.2726985108925097, -0.47156587291133856
OK, I have found one error:
r = r.minus(A.times(p).times(Alpha));
Now it works:
r is: -8.0, -3.0
Alpha: 0.22054380664652568
rNew: 0.6403099643121183
Beta: 0.008771369374138607
p: -0.3511377223647101, 0.7229306048685207
x is: 0.2356495468277946, 0.33836858006042303
Answer
Sorry for the hack answer, but... using the numerical-example parameters from the Wikipedia article and printing the matrices to the terminal at each step will let you find the discrepancy.
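To illustrate, here is a minimal, self-contained sketch of the loop with the residual update corrected to r = r - Alpha*A*p, using plain arrays instead of JAMA so it runs without any dependency (class and method names are my own, not from the question). On the Wikipedia 2x2 example it reproduces the values the question prints (r = [-8, -3], Alpha ≈ 0.2205, rNew ≈ 0.6403) and converges to the exact solution [1/11, 7/11] in two iterations.

```java
public class ConjugateGradSketch {

    /** Solves A*x = b for symmetric positive-definite A, starting from x0. */
    static double[] conjugateGradient(double[][] A, double[] b, double[] x0) {
        int n = b.length;
        double[] x = x0.clone();
        double[] r = new double[n];
        for (int i = 0; i < n; i++)                // r0 = b - A*x0
            r[i] = b[i] - dot(A[i], x);
        double[] p = r.clone();                    // p0 = r0
        double rTr = dot(r, r);
        for (int it = 0; it < 20 && Math.sqrt(rTr) >= 1e-6; it++) {
            double[] Ap = new double[n];
            for (int i = 0; i < n; i++)
                Ap[i] = dot(A[i], p);
            double alpha = rTr / dot(p, Ap);       // step length r^T r / p^T A p
            for (int i = 0; i < n; i++) {
                x[i] += alpha * p[i];
                r[i] -= alpha * Ap[i];             // the fix: Alpha multiplies A*p
            }
            double rTrNew = dot(r, r);
            double beta = rTrNew / rTr;
            for (int i = 0; i < n; i++)
                p[i] = r[i] + beta * p[i];         // next search direction
            rTr = rTrNew;
        }
        return x;
    }

    static double dot(double[] a, double[] b) {
        double s = 0;
        for (int i = 0; i < a.length; i++) s += a[i] * b[i];
        return s;
    }

    public static void main(String[] args) {
        // The Wikipedia numerical example used in the question.
        double[][] A = {{4, 1}, {1, 3}};
        double[] b = {1, 2};
        double[] x = conjugateGradient(A, b, new double[]{2, 1});
        System.out.println(x[0] + ", " + x[1]); // converges to [1/11, 7/11]
    }
}
```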