Maximum Likelihood Estimate pseudocode


Question


I need to code a Maximum Likelihood Estimator to estimate the mean and variance of some toy data. I have a vector with 100 samples, created with numpy.random.randn(100). The data should follow a Gaussian distribution with zero mean and unit variance.

I checked Wikipedia and some extra sources, but I am a little bit confused since I don't have a statistics background.

Is there any pseudo code for a maximum likelihood estimator? I get the intuition of MLE but I cannot figure out where to start coding.

Wiki says to take the argmax of the log-likelihood. What I understand is: I need to calculate the log-likelihood with different parameters, and then take the parameters that gave the maximum probability. What I don't get is: where do I find the parameters in the first place? If I randomly try different means & variances to get a high probability, when should I stop trying?

Solution

If you do maximum likelihood calculations, the first step you need to take is the following: Assume a distribution that depends on some parameters. Since you generate your data (you even know your parameters), you "tell" your program to assume Gaussian distribution. However, you don't tell your program your parameters (0 and 1), but you leave them unknown a priori and compute them afterwards.

Now, you have your sample vector (let's call it x; since it holds 100 samples, its elements are x[0] to x[99]) and you have to process it. To do so, you have to compute the following (f denotes the probability density function of the Gaussian distribution):

f(x[0]) * ... * f(x[99])

The density f takes two parameters (the Greek letters µ and σ: the mean and the standard deviation). You now have to calculate values for µ and σ such that f(x[0]) * ... * f(x[99]) takes the maximum possible value.
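For concreteness, evaluating that product for one candidate pair (µ, σ) might look like this in numpy (a sketch; the data vector is regenerated here, so the exact numbers will differ from yours):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(100)  # toy data, as in the question

def gaussian_pdf(x, mu, sigma):
    """Density f of a Gaussian with mean mu and standard deviation sigma."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def likelihood(x, mu, sigma):
    """The product f(x[0]) * ... * f(x[99]) for one candidate (mu, sigma)."""
    return np.prod(gaussian_pdf(x, mu, sigma))

# The likelihood near the true parameters is (almost always) far larger
# than at a bad guess, although both are tiny numbers.
print(likelihood(x, 0.0, 1.0))
print(likelihood(x, 5.0, 1.0))
```

Note how small the product of 100 densities is; this is exactly why the log trick below matters in practice.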

When you've done that, µ is your maximum likelihood value for the mean, and σ is your maximum likelihood value for the standard deviation.

Note that I don't explicitly tell you how to compute the values for µ and σ, since this is a quite mathematical procedure I don't have at hand (and probably would not understand); I just tell you the technique to get the values, which can be applied to other distributions as well.

Since you want to maximize the original term, you can "simply" maximize the logarithm of the original term instead; because the logarithm is strictly increasing, both are maximized by the same parameters. This saves you from dealing with all these products, and turns the original term into a sum of simple summands.
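That log trick can be written down directly. A minimal sketch of the Gaussian log-likelihood as a sum:

```python
import numpy as np

def log_likelihood(x, mu, sigma):
    """log(f(x[0]) * ... * f(x[n-1])) for a Gaussian with parameters mu, sigma.

    Taking the log turns the product of densities into a sum, which is
    numerically far safer than multiplying 100 tiny numbers together.
    """
    n = x.size
    return (-0.5 * n * np.log(2 * np.pi)
            - n * np.log(sigma)
            - np.sum((x - mu) ** 2) / (2 * sigma ** 2))

x = np.random.randn(100)            # toy data, as in the question
print(log_likelihood(x, 0.0, 1.0))  # candidate parameters near the truth
```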

If you really want to calculate it, you can do some simplifications that lead to the following term (hope I didn't mess up anything; n = 100 is the number of samples):

log L(µ, σ) = -(n/2)·log(2π) - n·log(σ) - (1/(2σ²)) · Σ (x[i] - µ)²

Now, you have to find values for µ and σ such that the above beast is maximal. Doing that is a very nontrivial task called nonlinear optimization.
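To make that concrete without reaching for a real optimizer, one crude sketch is a brute-force grid search over candidate (µ, σ) pairs; the grid ranges below are assumptions chosen to bracket the true values 0 and 1:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(100)  # toy data, as in the question

def log_likelihood(x, mu, sigma):
    n = x.size
    return (-0.5 * n * np.log(2 * np.pi) - n * np.log(sigma)
            - np.sum((x - mu) ** 2) / (2 * sigma ** 2))

# Evaluate the log-likelihood at every grid point and keep the argmax.
mus = np.linspace(-1.0, 1.0, 201)     # candidate means, step 0.01
sigmas = np.linspace(0.5, 2.0, 151)   # candidate standard deviations, step 0.01
best = max((log_likelihood(x, m, s), m, s) for m in mus for s in sigmas)
_, mu_hat, sigma_hat = best
print(mu_hat, sigma_hat)  # should land near the sample mean and sample std
```

A real solution would hand the negative log-likelihood to a proper optimizer such as scipy.optimize.minimize, but the grid makes the "try parameters, keep the argmax" idea from the question explicit.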

One simplification you could try is the following: Fix one parameter and try to calculate the other. This saves you from dealing with two variables at the same time.
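That fix-one-parameter idea can be sketched as an alternating pair of 1-D grid searches (the grid ranges, starting guesses, and iteration count below are all arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(100)  # toy data, as in the question

def log_likelihood(x, mu, sigma):
    n = x.size
    return (-0.5 * n * np.log(2 * np.pi) - n * np.log(sigma)
            - np.sum((x - mu) ** 2) / (2 * sigma ** 2))

def argmax_1d(f, grid):
    """Return the grid point where the 1-D function f is largest."""
    return grid[int(np.argmax([f(g) for g in grid]))]

# Alternate: hold sigma fixed and search over mu, then hold mu fixed
# and search over sigma. Starting guesses are deliberately bad.
mu_hat, sigma_hat = 1.5, 2.5
for _ in range(5):
    mu_hat = argmax_1d(lambda m: log_likelihood(x, m, sigma_hat),
                       np.linspace(-2.0, 2.0, 401))
    sigma_hat = argmax_1d(lambda s: log_likelihood(x, mu_hat, s),
                          np.linspace(0.1, 3.0, 291))
print(mu_hat, sigma_hat)
```

For the Gaussian this settles almost immediately, because the µ that maximizes the log-likelihood does not actually depend on σ; for other distributions more alternating rounds may be needed.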
