Regarding Probability Estimates predicted by LIBSVM


Problem description

I am attempting 3-class classification using an SVM classifier. How do we interpret the probability estimates predicted by LIBSVM? Is it based on the perpendicular distance of the instance from the maximal margin hyperplane?

Kindly throw some light on the interpretation of the probability estimates predicted by the LIBSVM classifier. The parameters C and gamma are tuned first, and the probability estimates are then output by using the -b option with both training and testing.
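
For concreteness, here is a minimal sketch of that workflow using LIBSVM's Python interface (svmutil); the toy data and the "tuned" values C=8 and gamma=0.125 are placeholders, not values from this question.

# Requires the libsvm package; in older installs the import is `from svmutil import *`.
from libsvm.svmutil import svm_train, svm_predict

# Toy 3-class problem: labels in {0, 1, 2}, sparse features given as
# {feature_index: value} dicts, four points per class around three centres.
centres = {0: (0.0, 0.0), 1: (2.0, 0.0), 2: (1.0, 2.0)}
y_train, x_train = [], []
for label, (cx, cy) in centres.items():
    for dx, dy in [(0.1, 0.0), (-0.1, 0.0), (0.0, 0.1), (0.0, -0.1)]:
        y_train.append(label)
        x_train.append({1: cx + dx, 2: cy + dy})

# '-b 1' makes svm_train fit the extra probability model; '-c' and '-g'
# stand in for the already-tuned C and gamma (RBF kernel, C-SVC).
model = svm_train(y_train, x_train, '-s 0 -t 2 -c 8 -g 0.125 -b 1')

# '-b 1' at prediction time returns per-class probability estimates in p_vals.
y_test = [0, 2]
x_test = [{1: 0.05, 2: 0.05}, {1: 1.0, 2: 1.9}]
p_labels, p_acc, p_vals = svm_predict(y_test, x_test, model, '-b 1')
print(p_vals)   # one list of class probabilities per test instance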

Recommended answer

Multiclass SVM is always decomposed into several binary classifiers (typically a set of one-vs-all classifiers; LIBSVM itself uses one-vs-one). Any binary SVM classifier's decision function outputs a (signed) distance to the separating hyperplane. In short, an SVM maps the input domain to a one-dimensional real number (the decision value), and the predicted label is determined by the sign of that decision value. The most common technique for obtaining probabilistic output from SVM models is so-called Platt scaling (see the paper by the LIBSVM authors).
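
As a rough illustration of Platt scaling (not LIBSVM's exact implementation, which uses smoothed target values and a Newton-type solver), the idea is to fit a two-parameter sigmoid P(y=+1 | f) = 1 / (1 + exp(A*f + B)) to held-out decision values f. A minimal NumPy/SciPy sketch, where fit_platt and platt_probability are illustrative names:

import numpy as np
from scipy.optimize import minimize

def fit_platt(decision_values, labels):
    """Fit P(y=+1 | f) = 1 / (1 + exp(A*f + B)) by minimising cross-entropy."""
    f = np.asarray(decision_values, dtype=float)
    t = (np.asarray(labels, dtype=float) + 1.0) / 2.0   # map {-1,+1} -> {0,1}

    def nll(params):
        A, B = params
        z = A * f + B
        log1pexp = np.logaddexp(0.0, z)        # log(1 + exp(z)), computed stably
        # negative log-likelihood of the sigmoid model
        return np.sum(t * log1pexp + (1.0 - t) * (log1pexp - z))

    # Platt's suggested starting point for B; A starts at 0 (flat sigmoid).
    n_neg, n_pos = np.sum(t == 0.0), np.sum(t == 1.0)
    start = np.array([0.0, np.log((n_neg + 1.0) / (n_pos + 1.0))])
    result = minimize(nll, start, method='Nelder-Mead')
    return result.x                            # fitted (A, B)

def platt_probability(decision_value, A, B):
    """Turn an SVM decision value into an estimate of P(y=+1 | f)."""
    return 1.0 / (1.0 + np.exp(A * decision_value + B))

# Example: overlapping decision values from a held-out set and their true labels.
A, B = fit_platt([-2.1, -1.3, 0.3, -0.4, 1.2, 2.0], [-1, -1, -1, +1, +1, +1])
print(platt_probability(1.5, A, B))   # close to 1 for confidently positive values

For a multiclass problem such as this 3-class case, LIBSVM fits one such sigmoid per binary subproblem and then couples the pairwise probabilities into a single distribution over the classes.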

Is it based on the perpendicular distance of the instance from the maximal margin hyperplane?

Yes. Any classifier that outputs such a one-dimensional real value can be post-processed to yield probabilities by calibrating a logistic function on the decision values of the classifier. This is the exact same approach as in standard logistic regression.
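
Tying this back to the 3-class question, and continuing the svmutil sketch above: with '-b 1' each row of p_vals is a probability distribution over the classes, and (assuming the current libsvm Python interface) its columns follow the class order reported by model.get_labels().

# Map each probability vector back to the class labels and pick the argmax.
label_order = model.get_labels()               # e.g. [0, 1, 2]
for probs in p_vals:
    best = max(range(len(probs)), key=lambda i: probs[i])
    print(dict(zip(label_order, probs)), '-> predicted class', label_order[best])

Note that when probability estimates are enabled, the reported predicted label is the argmax of this distribution, which can occasionally differ from the label obtained from the raw decision values alone.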
