Accuracy of maximum likelihood estimators


Question

Here's a test for comparing ML estimators of the lambda parameter of a Poisson distribution.

# 2000 Poisson(1.5) draws split into 20 groups labelled A-T
# (the code relies on 'i' being a factor; in R >= 4.0 pass stringsAsFactors=TRUE to data.frame)
with(data.frame(x=rpois(2000, 1.5), i=LETTERS[1:20]),
     cbind(cf=tapply(x, i, mean),   # closed-form MLE: the per-group sample mean
           iter=optim(rep(1, length(levels(i))), function(par) 
             -sum(x * log(par[i]) - par[i]),   # negative log-likelihood (log(x!) term dropped)
             method='BFGS')$par))

The first column shows the ML estimator obtained from the closed-form solution (for reference), while the second column shows the ML estimator obtained by maximizing a log-likelihood function using the BFGS method. Results:

    cf     iter
A 1.38 1.380054
B 1.61 1.609101
C 1.49 1.490903
D 1.47 1.468520
E 1.57 1.569831
F 1.63 1.630244
G 1.33 1.330469
H 1.63 1.630244
I 1.27 1.270003
J 1.64 1.641064
K 1.58 1.579308
L 1.54 1.540839
M 1.49 1.490903
N 1.50 1.501168
O 1.69 1.689926
P 1.52 1.520876
Q 1.48 1.479891
R 1.64 1.641064
S 1.46 1.459310
T 1.57 1.569831
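
For reference, the cf column is just the per-group sample mean: with the constant log(x!) term dropped (exactly as in the code above), the log-likelihood of group $k$ is

$$\ell_k(\lambda) = \sum_{j \in k} \bigl(x_j \log\lambda - \lambda\bigr), \qquad \ell_k'(\lambda) = \frac{\sum_{j \in k} x_j}{\lambda} - n_k = 0 \;\Rightarrow\; \hat\lambda_k = \bar{x}_k,$$

so the closed-form estimator is the exact maximizer, and the iter column only measures the error introduced by the numerical optimizer.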

It can be seen that the estimators obtained with the iterative optimization method can deviate quite a lot from the correct values. Is this to be expected, or is there another (multi-dimensional) optimization technique that would produce a better approximation?

Answer

Answer provided by Chase:

The reltol parameter which gets passed to control() lets you adjust the convergence threshold. You can try playing with that if necessary.

This is a modified version of the code, now including the option reltol=.Machine$double.eps, which gives the greatest possible accuracy:

with(data.frame(x=rpois(2000, 1.5), i=LETTERS[1:20]),
     cbind(cf=tapply(x, i, mean),
           iter=optim(rep(1, length(levels(i))), function(par) 
             -sum(x * log(par[i]) - par[i]), method='BFGS',
             control=list(reltol=.Machine$double.eps))$par))   # tighten the convergence tolerance

The results are:

    cf iter
A 1.65 1.65
B 1.54 1.54
C 1.80 1.80
D 1.44 1.44
E 1.53 1.53
F 1.43 1.43
G 1.52 1.52
H 1.57 1.57
I 1.61 1.61
J 1.34 1.34
K 1.62 1.62
L 1.23 1.23
M 1.47 1.47
N 1.18 1.18
O 1.38 1.38
P 1.44 1.44
Q 1.66 1.66
R 1.46 1.46
S 1.78 1.78
T 1.52 1.52

So the error made by the optimization algorithm (i.e. the difference between cf and iter) is now reduced to zero at the precision printed.
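
Beyond tightening reltol, a further option (a sketch added here, not part of Chase's answer) is to let optim() use an analytic gradient via its gr argument instead of finite differences. This is easy once the data are reduced to per-group sums; the variable names S and n below are illustrative:

# A sketch, not from the original answer: grouped objective plus analytic gradient.
with(data.frame(x=rpois(2000, 1.5), i=LETTERS[1:20]),
     {
       S <- tapply(x, i, sum)      # per-group sum of counts
       n <- tapply(x, i, length)   # per-group sample size
       cbind(cf=S/n,               # closed-form MLE: the per-group mean
             iter=optim(rep(1, length(S)),
                        fn=function(par) -sum(S * log(par) - n * par),  # same objective, grouped form
                        gr=function(par) n - S / par,                   # analytic gradient of fn
                        method='BFGS',
                        control=list(reltol=.Machine$double.eps))$par)
     })

Grouping the data also makes each objective evaluation scale with the number of groups rather than the number of observations, and the analytic gradient avoids the finite-difference approximation that method='BFGS' otherwise relies on.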
