kernel matrix computation outside SVM training in kernlab
Question
I was developing a new algorithm that generates a modified kernel matrix for training with an SVM and encountered a strange problem.
For testing purposes, I compared the SVM models learned using the kernelMatrix interface and the normal kernel interface. For example:
# Model with kernelMatrix computation within ksvm
svp1 <- ksvm(x, y, type="C-svc", kernel=vanilladot(), scaled=F)
# Model with kernelMatrix computed outside ksvm
K <- kernelMatrix(vanilladot(), x)
svp2 <- ksvm(K, y, type="C-svc")
identical(nSV(svp1), nSV(svp2))
Note that I have turned scaling off, as I am not sure how to perform scaling on a kernel matrix.
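One workaround (a sketch, not part of the original question; it assumes `x` and `y` are the data matrix and labels from the example above) is to scale the raw inputs yourself before computing the kernel matrix, so both interfaces train on identically scaled data:

```r
library(kernlab)

# Scale the raw inputs first, then compute the kernel matrix.
# Both models then see the same (scaled) values, with ksvm's own
# internal scaling left off.
x_scaled <- scale(x)                     # center and scale each column
svp1s <- ksvm(x_scaled, y, type="C-svc", kernel=vanilladot(), scaled=F)
K_s <- kernelMatrix(vanilladot(), x_scaled)
svp2s <- ksvm(K_s, y, type="C-svc")
```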
From my understanding, both svp1 and svp2 should return the same model. However, I observed that this is not true for a few datasets, for example glass0 from KEEL.
What am I missing here?
Recommended Answer
I think this has to do with the same issue posted here. kernlab appears to treat the calculation of ksvm differently when explicitly using vanilladot(), because its class is 'vanillakernel' instead of 'kernel'.
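The class difference can be checked directly in R (a quick illustration, not part of the original answer):

```r
library(kernlab)

# vanilladot() produces an object of S4 class "vanillakernel",
# which extends "kernel"; ksvm dispatches on the more specific
# class and takes a different code path.
class(vanilladot())
is(vanilladot(), "kernel")
```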
If you define your own vanilladot kernel with a class of 'kernel' instead of 'vanillakernel', the code will be equivalent for both:
# Linear (vanilladot-style) kernel tagged with the plain 'kernel' class
kfunction.k <- function() {
  k <- function(x, y) { crossprod(x, y) }
  class(k) <- "kernel"
  k
}

l <- 0.1; C <- 1/(2*l)

# Model with kernelMatrix computation within ksvm
svp1 <- ksvm(x, y, type="C-svc", kernel=kfunction.k(), scaled=F)
# Model with kernelMatrix computed outside ksvm
K <- kernelMatrix(kfunction.k(), x)
svp2 <- ksvm(K, y, type="C-svc", kernel='matrix', scaled=F)
identical(nSV(svp1), nSV(svp2))
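Beyond comparing support-vector counts, one can also compare the two models' predictions. For the kernelMatrix-trained model, predict() expects the kernel matrix between the test points and the support vectors; the sketch below follows the pattern shown in kernlab's ksvm examples and reuses the training data as test data:

```r
# Predictions from the model trained on the raw data
p1 <- predict(svp1, x)

# For the kernelMatrix model, pass the test-vs-support-vector
# kernel matrix, selected via SVindex()
Ktest <- as.kernelMatrix(K[, SVindex(svp2), drop = FALSE])
p2 <- predict(svp2, Ktest)

identical(p1, p2)
```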
It's worth noting that, because of this change, the values of svp1 and svp2 both differ from those produced by the original code.