Log likelihood function for GDA (Gaussian Discriminant Analysis)


Question

I am having trouble understanding the likelihood function for GDA given in Andrew Ng's CS229 notes.

l(φ,µ0,µ1,Σ) = log (product from i to m) {p(x(i)|y(i);µ0,µ1,Σ)p(y(i);φ)}

The link is http://cs229.stanford.edu/notes/cs229-notes2.pdf, page 5.

For linear regression the function was the product from i to m of p(y(i)|x(i); theta), which made sense to me. Why is there a change here saying it is given by p(x(i)|y(i)) and that is multiplied by p(y(i); phi)? Thanks in advance.

Answer

The starting formula on page 5 is

l(φ,µ0,µ1,Σ) = log <product from i to m> p(x_i, y_i;µ0,µ1,Σ,φ)

Leaving out the parameters φ, µ0, µ1, Σ for now, that can be simplified to

l = log <product> p(x_i, y_i)

Using the chain rule of probability, you can convert that to either

l = log <product> p(x_i|y_i)p(y_i)

or

l = log <product> p(y_i|x_i)p(x_i).

In the page 5 formula, the φ is moved to p(y_i), because only p(y) depends on it.
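
Writing the parameters back in and using the fact that the log of a product is the sum of the logs, the page 5 formula can equivalently be written as

l(φ,µ0,µ1,Σ) = <sum from i=1 to m> [ log p(x_i|y_i; µ0,µ1,Σ) + log p(y_i; φ) ]

so the two terms can be maximized separately: µ0, µ1, Σ appear only in the Gaussian term and φ only in the Bernoulli term.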

The likelihood starts with the joint probability distribution p(x,y) instead of the conditional probability distribution p(y|x), which is why GDA is called a generative model (it models both directions, from x to y and from y to x), while logistic regression is considered a discriminative model (it models from x to y only, one way). Both have their advantages and disadvantages. There seems to be a section about that further on in the notes.
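
If it helps to see the joint likelihood as code, here is a minimal numerical sketch (my own illustration, not from the notes), assuming numpy and scipy are available; the helper names gda_fit and gda_log_likelihood are made up for this example. It fits φ, µ0, µ1, Σ by maximum likelihood and then evaluates sum_i [ log p(x_i|y_i; µ0,µ1,Σ) + log p(y_i; φ) ]:

import numpy as np
from scipy.stats import multivariate_normal

def gda_fit(X, y):
    # Maximum-likelihood estimates of the GDA parameters.
    # X: (m, n) feature matrix, y: (m,) array of 0/1 labels.
    phi = y.mean()                          # estimate of p(y = 1)
    mu0 = X[y == 0].mean(axis=0)            # class-0 mean
    mu1 = X[y == 1].mean(axis=0)            # class-1 mean
    centered = X - np.where(y[:, None] == 1, mu1, mu0)
    Sigma = centered.T @ centered / len(y)  # shared covariance
    return phi, mu0, mu1, Sigma

def gda_log_likelihood(X, y, phi, mu0, mu1, Sigma):
    # Joint log likelihood: sum_i [ log p(x_i|y_i; mu0, mu1, Sigma) + log p(y_i; phi) ]
    means = np.where(y[:, None] == 1, mu1, mu0)
    log_px_given_y = np.array([multivariate_normal.logpdf(x, mean=m, cov=Sigma)
                               for x, m in zip(X, means)])
    log_py = np.where(y == 1, np.log(phi), np.log(1 - phi))
    return float(np.sum(log_px_given_y + log_py))

For example, on two synthetic Gaussian clusters:

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (50, 2)), rng.normal(2.0, 1.0, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
params = gda_fit(X, y)
print(gda_log_likelihood(X, y, *params))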
