Replication of scikit.svm.SVR.predict(X)


Question


I'm trying to replicate scikit-learn's svm.SVR.predict(X) and don't know how to do it correctly.

What I want to do is this: after training the SVM with an RBF kernel, I would like to implement the prediction in another programming language (Java), so I need to be able to export the model's parameters in order to perform predictions on unseen cases.

On scikit-learn's documentation page, I see that there are support_ and support_vectors_ attributes, but I don't understand how to replicate the .predict(X) method from them.

A solution of the form y_pred = f(X, svm.SVR.support_, svm.SVR.support_vectors_, etc.) is what I am looking for.
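For reference, everything such an f needs is already exposed on the fitted estimator. Below is a minimal sketch (not from the original post; the JSON format is just one arbitrary choice) of dumping those parameters so the Java side can read them back:

```python
import json

from sklearn import svm

# Train a toy RBF-kernel SVR, mirroring the question's setup.
X = [[0, 0], [1, 1], [1, 2], [1, 2]]
y = [0, 1, 1, 1]
clf = svm.SVR(gamma=1e-3)
clf.fit(X, y)

# These four attributes fully determine predict() for an RBF model:
#   y_pred = dual_coef_ . exp(-gamma * ||x - support_vectors_||^2) + intercept_
params = {
    "gamma": float(clf.gamma),  # gamma was passed explicitly, so it is a float
    "support_vectors": clf.support_vectors_.tolist(),
    "dual_coef": clf.dual_coef_.tolist(),
    "intercept": clf.intercept_.tolist(),
}
print(json.dumps(params, indent=2))
```

On the Java side, reading these arrays back and evaluating the same formula reproduces the prediction.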

Thank you in advance!

Edit: It's SVM for REGRESSION, not CLASSIFICATION!

Edit: This is the code I am trying now, from Calculating decision function of SVM manually, but with no success...

from sklearn import svm
import numpy as np

X = [[0, 0], [1, 1], [1, 2], [1, 2]]
y = [0, 1, 1, 1]
clf = svm.SVR(gamma=1e-3)
clf.fit(X, y)
Xtest = [[0, 0]]  # 2D: one sample with two features
print('clf.decision_function:')
# Note: recent scikit-learn versions removed SVR.decision_function;
# there, clf.predict(Xtest) returns the same value.
print(clf.decision_function(Xtest))

sup_vecs = clf.support_vectors_
dual_coefs = clf.dual_coef_
gamma = clf.gamma
intercept = clf.intercept_

diff = sup_vecs - Xtest

# Vectorized method
norm2 = np.array([np.linalg.norm(diff[n, :]) for n in range(np.shape(sup_vecs)[0])])
dec_func_vec = -1 * (dual_coefs.dot(np.exp(-gamma*(norm2**2))) - intercept)
print('decision_function replication:')
print(dec_func_vec)

The results I'm getting are different for the two methods. Why?

clf.decision_function:
[[ 0.89500898]]
decision_function replication:
[ 0.89900498]

Solution

Thanks to the contribution of B@rmaley.exe, I found the way to replicate the SVM prediction manually. I had to replace

    dec_func_vec = -1 * (dual_coefs.dot(np.exp(-gamma*(norm2**2))) - intercept)

with

    dec_func_vec = (dual_coefs.dot(np.exp(-gamma*(norm2**2))) + intercept)

So, the full vectorized method is:

    # Vectorized method
    norm2 = np.array([np.linalg.norm(diff[n, :]) for n in range(np.shape(sup_vecs)[0])])
    dec_func_vec = dual_coefs.dot(np.exp(-gamma*(norm2**2))) + intercept
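As a sanity check (a sketch, not from the original answer), the corrected formula can be compared directly against clf.predict on the question's toy data:

```python
import numpy as np
from sklearn import svm

X = [[0, 0], [1, 1], [1, 2], [1, 2]]
y = [0, 1, 1, 1]
clf = svm.SVR(gamma=1e-3)
clf.fit(X, y)

Xtest = np.array([[0, 0]])

sup_vecs = clf.support_vectors_
dual_coefs = clf.dual_coef_
gamma = clf.gamma
intercept = clf.intercept_

# RBF kernel values between the test point and every support vector,
# then the corrected decision function: dual_coefs . K + intercept.
norm2 = np.linalg.norm(sup_vecs - Xtest, axis=1)
manual = dual_coefs.dot(np.exp(-gamma * norm2 ** 2)) + intercept

print(manual)              # manual replication
print(clf.predict(Xtest))  # scikit-learn's own prediction
```

The two printed values should agree to floating-point precision, confirming that the sign of the dual-coefficient term, not the intercept, was the bug.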
