How to create a sklearn Pipeline that includes feature selection and KerasClassifier? Issue with input_dim changing during GridSearchCV

Problem Description

I have created a sklearn Pipeline that uses SelectPercentile(f_classif) for feature selection, piped into a KerasClassifier. The percentile used for SelectPercentile is a hyperparameter in the grid search. This means the input dimension varies during the grid search, and I have not been able to set the input_dim of the KerasClassifier so that it adapts to this parameter accordingly.

I don't think there is a way to access the reduced data dimension being piped into the KerasClassifier within sklearn's GridSearchCV. Maybe there's a way to have a single hyperparameter shared between SelectPercentile and the KerasClassifier in the Pipeline (so that the percentile hyperparameter can determine input_dim)? I suppose a possible solution could be to build a custom classifier that wraps the two pipeline steps into a single step, so that the percentile hyperparameter can be shared.
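For illustration, here is a minimal sketch of that wrapper idea: one estimator that owns both the percentile and the network, so that input_dim can be derived from the data after feature selection. The class name AnovaKerasClassifier and its parameter list are hypothetical (not from the original post), and it assumes a build function that actually uses its input_dim argument, unlike the create_baseline below, which hard-codes the width of the global X_train.

from sklearn.base import BaseEstimator, ClassifierMixin
from sklearn.feature_selection import SelectPercentile, f_classif
from keras.wrappers.scikit_learn import KerasClassifier

class AnovaKerasClassifier(BaseEstimator, ClassifierMixin):
    """Hypothetical wrapper: feature selection + Keras model sharing `percentile`."""
    def __init__(self, build_fn=None, percentile=50, epochs=1000, batch_size=32, verbose=0):
        self.build_fn = build_fn
        self.percentile = percentile
        self.epochs = epochs
        self.batch_size = batch_size
        self.verbose = verbose

    def fit(self, X, y):
        # Fit the selector first, then size the network from the reduced data
        self.selector_ = SelectPercentile(f_classif, percentile=self.percentile).fit(X, y)
        X_reduced = self.selector_.transform(X)
        self.model_ = KerasClassifier(
            build_fn=self.build_fn,
            input_dim=X_reduced.shape[1],  # matches the selected feature count
            epochs=self.epochs,
            batch_size=self.batch_size,
            verbose=self.verbose,
        )
        self.model_.fit(X_reduced, y)
        return self

    def predict(self, X):
        return self.model_.predict(self.selector_.transform(X))

With this, percentile becomes a single parameter of the estimator, so the grid would contain e.g. percentile=[20, 40, 60, 80] instead of anova__percentile.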

So far the error consistently produces variations of "ValueError: Error when checking input: expected dense_1_input to have shape (112,) but got array with shape (23,)" during model fitting.

import numpy as np
from keras.models import Sequential
from keras.layers import Dense

def create_baseline(input_dim=10, init='normal', activation_1='relu', activation_2='relu', optimizer='SGD'):
    # Create model
    model = Sequential()
    # Note: the first layer is sized from the global X_train rather than the
    # input_dim argument, so it does not follow the SelectPercentile reduction
    model.add(Dense(50, input_dim=np.shape(X_train)[1], kernel_initializer=init, activation=activation_1))
    model.add(Dense(25, kernel_initializer=init, activation=activation_2))
    model.add(Dense(1, kernel_initializer=init, activation='sigmoid'))
    # Compile model
    model.compile(loss='binary_crossentropy', optimizer=optimizer, metrics=["accuracy"])
    return model

tuned_parameters = dict(
                            anova__percentile = [20, 40, 60, 80],
                            NN__optimizer = ['SGD', 'Adam'],
                            NN__init = ['glorot_normal', 'glorot_uniform'],
                            NN__activation_1 = ['relu', 'sigmoid'],
                            NN__activation_2 = ['relu', 'sigmoid'],
                            NN__batch_size = [32, 64, 128, 256]
                        )

from keras.wrappers.scikit_learn import KerasClassifier
from sklearn.feature_selection import SelectPercentile, f_classif
from sklearn.model_selection import GridSearchCV, StratifiedKFold
from sklearn.pipeline import Pipeline

kfold = StratifiedKFold(n_splits=10, shuffle=True, random_state=2)
for train_indices, test_indices in kfold.split(data, labels):
    # Split data
    X_train = [data[idx] for idx in train_indices]
    y_train = [labels[idx] for idx in train_indices]
    X_test = [data[idx] for idx in test_indices]
    y_test = [labels[idx] for idx in test_indices]

    # Pipe feature selection and classifier together
    anova = SelectPercentile(f_classif)
    NN = KerasClassifier(build_fn=create_baseline, epochs=1000, verbose=0)
    clf = Pipeline([('anova', anova), ('NN', NN)])      

    # Train model
    clf = GridSearchCV(clf, tuned_parameters, scoring='balanced_accuracy', n_jobs=-1, cv=kfold)
    clf.fit(X_train, y_train)
    # Test model
    y_true, y_pred = y_test, clf.predict(X_test)

Recommended Answer

One alternative solution, which worked for me, is to inherit from KerasClassifier and set the input_dim with set_params (documentation) in the fit method, before calling super().fit(X, y). This works with scikit-learn 0.24.0 and keras 2.4.3.

Here is the full example:

First, the inheriting class. This is the main thing that has to be added on top of normal usage:

from keras.wrappers.scikit_learn import KerasClassifier

class InputDimPredictingKerasClassifier(KerasClassifier):
    def fit(self, X, y):
        # Infer input_dim from the data that actually reaches this step (i.e. after
        # any feature selection in the pipeline), then delegate to KerasClassifier
        super().set_params(**{"input_dim": X.shape[1]})
        return super().fit(X, y)

Then the normal usage, in which the model is built using the InputDimPredictingKerasClassifier class:

import keras
from keras.layers import Dense
from keras.models import Sequential


def build_mlp(
    input_dim: int=23, # just a default value
    output_dim: int=6, 
) -> Sequential:
    model = Sequential()
    model.add(keras.Input(shape=(input_dim,)))
    model.add(Dense(11, activation="relu"))
    model.add(Dense(output_dim, activation="softmax"))
    model.compile(loss="categorical_crossentropy", optimizer="adam")
    return model


def get_mlp(num_of_classes: int) -> InputDimPredictingKerasClassifier:
    model = InputDimPredictingKerasClassifier(
        build_fn=build_mlp,
        output_dim=num_of_classes,
    )
    return model
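As a usage illustration (not part of the original answer), the class could plug into the question's pipeline and grid search roughly like this; the percentile grid and scoring are carried over from the question, and num_of_classes=2 is an assumption for the binary case:

from sklearn.feature_selection import SelectPercentile, f_classif
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline

# input_dim is filled in automatically at fit time by InputDimPredictingKerasClassifier,
# so it always matches whatever SelectPercentile keeps for the current percentile
clf = Pipeline([
    ('anova', SelectPercentile(f_classif)),
    ('NN', get_mlp(num_of_classes=2)),
])
search = GridSearchCV(clf, {'anova__percentile': [20, 40, 60, 80]},
                      scoring='balanced_accuracy', cv=5)
search.fit(X_train, y_train)  # X_train, y_train prepared as in the question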
