How to tune parameters of a custom kernel function with a Pipeline in scikit-learn


Problem description

Currently I have successfully defined a custom kernel function (precomputing the kernel matrix) as a plain def function, and I am now using GridSearchCV to find the best parameters.

So, in the custom kernel function there are two parameters to be tuned (namely gamma and sea_gamma in the example below), and for the SVR model the cost parameter C has to be tuned as well. But so far I can only tune the cost parameter C using GridSearchCV; please refer to Part I: example below.

I have searched for some similar solutions, such as:

Is it possible to tune parameters with grid search for custom kernels in scikit-learn?

It says that "one way to do this is using Pipeline, SVC(kernel='precomputed') and wrapping your custom kernel function as a sklearn estimator (a subclass of BaseEstimator and TransformerMixin)". That is still somewhat different from my case, but I tried to solve my problem based on this solution; so far it prints no output at all, not even an error. Please refer to Part II: solution with Pipeline.

Part I: example. My original custom kernel and the scoring method used in the grid search:

    import numpy as np
    from sklearn import preprocessing
    from sklearn.metrics.pairwise import rbf_kernel
    from sklearn.model_selection import GridSearchCV  # sklearn.grid_search was removed in newer scikit-learn
    from sklearn.svm import SVR
    from sklearn.metrics import make_scorer  # sklearn.metrics.scorer is now private

    # weighting the vectors
    def distance_scale(X,Y):
        K = np.zeros((X.shape[0],Y.shape[0]))
        gamma_sea =192

        for i in range(X.shape[0]):
            for j in range(Y.shape[0]):
                dis = min(np.abs(X[i]-Y[j]),1-np.abs(X[i]-Y[j]))
                K[i,j] = np.exp(-gamma_sea*dis**2)
        return K

    # custom RBF kernel : kernel matrix calculation 
    def sea_rbf(X,Y):
        gam=1
        t1 = X[:, 5:6]
        t2 = Y[:, 5:6]
        X = X[:, 0:5]
        Y = Y[:, 0:5]
        d = distance_scale(t1,t2)
        return rbf_kernel(X,Y,gamma=gam)*d

    def my_custom_loss_func(y_true, y_pred):
        error=np.abs((y_true - y_pred)/y_true)
        return np.mean(error)*100

    my_scorer = make_scorer(my_custom_loss_func,greater_is_better=False)


    # Generate sample data 
    X_train=np.random.random((100,6))
    y_train=np.random.random((100,1))
    X_test=np.random.random((40,6))
    y_test=np.random.random((40,1))
    y_train=np.ravel(y_train)
    y_test=np.ravel(y_test)

    # scale the inputs of the training and test sets, and the training targets
    max_scale = preprocessing.MaxAbsScaler().fit(X_train)
    X_train_max = max_scale.transform(X_train)
    X_test_max = max_scale.transform(X_test)
    # scalers expect 2-D arrays, so reshape the 1-D target vector
    max_scale_y = preprocessing.MaxAbsScaler().fit(y_train.reshape(-1, 1))
    y_train_max = max_scale_y.transform(y_train.reshape(-1, 1)).ravel()

    #precompute the kernel matrix
    gam=sea_rbf(X_train_max,X_train_max)

    # grid search with the custom scoring method; only the cost
    # parameter C of SVR can be tuned this way
    clf = GridSearchCV(SVR(kernel='precomputed'),
                       scoring=my_scorer,
                       cv=5,
                       param_grid={"C": [0.1, 1, 2, 3, 4, 5]})

    clf.fit(gam, y_train_max)
    print(clf.best_params_)
    print(clf.best_score_)
    print(clf.cv_results_)  # grid_scores_ was replaced by cv_results_
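Note that with kernel='precomputed' the matrix passed to predict must contain kernel values between the test rows and the training rows, i.e. have shape (n_test, n_train). A minimal sketch of the prediction step, continuing from the variables above (this part is not in the original post):

    # kernel between test and training rows, shape (40, 100)
    gam_test = sea_rbf(X_test_max, X_train_max)
    y_pred_max = clf.predict(gam_test)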

Part II: solution with Pipeline

from __future__ import print_function
from __future__ import division

import sys

import numpy as np
import sklearn
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MaxAbsScaler
from sklearn.svm import SVR
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import mean_absolute_error
from sklearn.metrics.pairwise import rbf_kernel
# my_scorer and my_custom_loss_func are the ones defined in Part I

# Wrapper class for the custom kernel RBF_kernel
class RBF2Kernel(BaseEstimator, TransformerMixin):

    def __init__(self, gamma=1, sea_gamma=20):
        super(RBF2Kernel, self).__init__()
        self.gamma = gamma
        self.sea_gamma = sea_gamma

    def fit(self, X, y=None, **fit_params):
        return self

    # calculate the kernel matrix
    def transform(self, X):
        self.a_train_ = X[:, 0:5]
        self.b_train_ = X[:, 0:5]
        self.t1_train_ = X[:, 5:6]
        self.t2_train_ = X[:, 5:6]
        sea = 16
        K = np.zeros((self.t1_train_.shape[0], self.t2_train_.shape[0]))

        # seasonal weighting: cyclic distance between the time columns
        for i in range(self.t1_train_.shape[0]):
            for j in range(self.t2_train_.shape[0]):
                dis = min(np.abs(self.t1_train_[i] * sea - self.t2_train_[j] * sea),
                          sea - np.abs(self.t1_train_[i] * sea - self.t2_train_[j] * sea))
                K[i, j] = np.exp(-self.sea_gamma * dis ** 2)

        return rbf_kernel(self.a_train_, self.b_train_, gamma=self.gamma) * K

def main():

    print('python: {}'.format(sys.version))
    print('numpy: {}'.format(np.__version__))
    print('sklearn: {}'.format(sklearn.__version__))

    # Generate sample data
    X_train=np.random.random((100,6))
    y_train=np.random.random((100,1))
    X_test=np.random.random((40,6))
    y_test=np.random.random((40,1))
    y_train=np.ravel(y_train)
    y_test=np.ravel(y_test)


    # Create a pipeline where our custom predefined kernel RBF2Kernel
    # is run before SVR.

    pipe = Pipeline([
        ('sc', MaxAbsScaler()),    
        ('rbf2', RBF2Kernel()),
        ('svm', SVR()),
    ])

    # Set the parameter 'gamma' of our custom kernel by
    # using the 'estimator__param' syntax.
    cv_params = dict([
        ('rbf2__gamma', 10.0**np.arange(-2,2)),
        ('rbf2__sea_gamma', 10.0**np.arange(-2,2)),
        ('svm__kernel', ['precomputed']),
        ('svm__C', 10.0**np.arange(-2,2)),
    ])

    # Do grid search to get the best parameter values; here the
    # parameters of the custom kernel are tuned as well.
    model = GridSearchCV(pipe, cv_params, verbose=1, n_jobs=-1,scoring=my_scorer)
    model.fit(X_train, y_train)
    y_pred = model.predict(X_test)

    acc_test = mean_absolute_error(y_test, y_pred)
    mape_100 =  my_custom_loss_func (y_test, y_pred)

    print("Test accuracy: {}".format(acc_test))
    print("mape_100: {}".format(mape_100))
    print("Best params:")
    print(model.best_params_)
    print(model.cv_results_)  # grid_scores_ was replaced by cv_results_

if __name__ == '__main__':
    main()

So, in summary:

  1. The Part I example works fine, but it can only tune the parameters the model exposes directly (the cost parameter C in this case).
  2. I want to also tune the parameters inside the custom kernel, which is defined as a plain function in Part I.
  3. I am still quite new to scikit-learn and Python, so if the explanation is unclear or you have questions about any detail, please let me know.

Thanks a lot for reading; I hope the long description makes things a bit clearer. All suggestions are welcome :)
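For reference, a likely reason the Part II pipeline produces no usable results is that transform() computes the kernel of X with itself, so during cross-validation and at predict time SVR(kernel='precomputed') receives matrices of shape (n_X, n_X) instead of the required K(X, X_train) of shape (n_X, n_train). Below is a minimal sketch of a wrapper that stores the training rows in fit(); the class name is illustrative (not from the original post) and the cyclic distance follows Part I:

import numpy as np
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.metrics.pairwise import rbf_kernel

class SeasonalRBFKernel(BaseEstimator, TransformerMixin):
    # Illustrative wrapper: fit() remembers the training rows so that
    # transform() can return K(X, X_train) with the right shape.

    def __init__(self, gamma=1.0, sea_gamma=20.0):
        self.gamma = gamma
        self.sea_gamma = sea_gamma

    def fit(self, X, y=None):
        # remember the training rows; transform() computes the kernel
        # between its argument and these rows
        self.X_train_ = X
        return self

    def transform(self, X):
        # cyclic seasonal distance on column 5, as in Part I
        t1 = X[:, 5]
        t2 = self.X_train_[:, 5]
        dis = np.abs(t1[:, None] - t2[None, :])
        dis = np.minimum(dis, 1 - dis)
        seasonal = np.exp(-self.sea_gamma * dis ** 2)
        # RBF kernel on the first five columns; result has shape (n_X, n_train)
        return rbf_kernel(X[:, 0:5], self.X_train_[:, 0:5],
                          gamma=self.gamma) * seasonal

Substituting this for RBF2Kernel in the Part II pipeline gives every cross-validation split kernel matrices of consistent shape, so rbf2__gamma, rbf2__sea_gamma and svm__C can be tuned together.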

Recommended answer

Wrap the model with a function:

def GBC(self):
    model = GradientBoostingRegressor()
    p = {'learning_rate': [0.0005, 0.01, 0.02, 0.03],
         'n_estimators': list(range(1, 100)),
         'max_depth': [4]}
    return model, p

Then test it with the parameter grid:

def kernel(self, model, p):
    # GridSearchCV expects the raw grid (a dict or a list of dicts),
    # so pass p directly rather than wrapping it in ParameterGrid;
    # X and Y are the training data.
    clf = GridSearchCV(model, p, cv=5, scoring='neg_mean_squared_error', n_jobs=2)
    clf.fit(X, Y)
    return clf

With this approach you can manage each kind of model and its set of hyperparameters in its own function, and call the functions directly in main:

a = the_class()
a.kernel(*a.GBC())  # GBC() returns (model, p), so unpack the tuple
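Putting the pieces together, a self-contained version of this pattern might look like the following sketch; the class name and the toy data are placeholders rather than part of the original answer:

import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import GridSearchCV

# placeholder training data, only so the sketch runs end to end
X = np.random.random((100, 6))
Y = np.random.random(100)

class the_class:
    def GBC(self):
        # the model plus its hyperparameter grid
        model = GradientBoostingRegressor()
        p = {'learning_rate': [0.0005, 0.01, 0.02, 0.03],
             'n_estimators': list(range(10, 100, 10)),
             'max_depth': [4]}
        return model, p

    def kernel(self, model, p):
        # exhaustive search over the grid with 5-fold cross-validation
        clf = GridSearchCV(model, p, cv=5,
                           scoring='neg_mean_squared_error', n_jobs=2)
        clf.fit(X, Y)
        return clf

a = the_class()
clf = a.kernel(*a.GBC())
print(clf.best_params_)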
