Bayes Learning - MAP hypothesis


Problem description

Suppose I have a set of mutually exclusive hypotheses H = {h1, h2}. For them, P(h1) = 0.2 and P(h2) = 0.3 (the prior distribution). Suppose we also know that

P(Y=0 | h1) = 0.2 and P(Y=0 | h2) = 0.4

where Y is an attribute (the target) that can take two values, {1, 0}. Suppose, finally, that you observe the event Y = 0.

Which one is the MAP (maximum a posteriori) hypothesis?

  • The MAP hypothesis is h1
  • The MAP hypothesis is h2
  • There are not enough elements to find the MAP hypothesis
  • MAP h1 = MAP h2
  • None of the above answers

Answer

Such a question should be asked on math.stackexchange.com or stats.stackexchange.com (and will now probably be migrated there).

Your question is a basic application of Bayes' theorem:

              P(Y=0|h1)P(h1)    0.2*0.2    0.04
P(h1|Y=0) =   --------------  = ------- = ------
                  P(Y=0)         P(Y=0)   P(Y=0)

              P(Y=0|h2)P(h2)    0.4*0.3    0.12
P(h2|Y=0) =   --------------  = ------- = ------
                  P(Y=0)         P(Y=0)   P(Y=0)

So h2 is the more probable hypothesis, since P(Y=0) > 0: both posteriors share the same denominator, so it cannot change which one is larger.
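The computation above can be sketched in a few lines of Python. The variable names are illustrative (not from the question); since the denominator P(Y=0) is common to both posteriors, it is enough to compare the unnormalized products P(Y=0 | h) * P(h):

```python
# Priors and likelihoods as given in the question.
priors = {"h1": 0.2, "h2": 0.3}
likelihoods = {"h1": 0.2, "h2": 0.4}  # P(Y=0 | h)

# P(h | Y=0) is proportional to P(Y=0 | h) * P(h); the shared
# denominator P(Y=0) does not affect which hypothesis wins.
scores = {h: likelihoods[h] * priors[h] for h in priors}
map_hypothesis = max(scores, key=scores.get)

print(map_hypothesis)  # h2  (0.12 > 0.04)
```

The same argmax-of-unnormalized-posteriors trick works for any finite hypothesis set, which is why MAP classification rarely needs the evidence term.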

