How to give GridSearchCV a list of indices for cross-validation?


Problem description

I'm trying to use custom cross-validation splits for a very specific dataset with scikit-optimize's BayesSearchCV. I've been able to replicate the error with scikit-learn's GridSearchCV.

Straight from the documentation:

cv : int, cross-validation generator or an iterable, optional

Determines the cross-validation splitting strategy. Possible inputs for cv are:

- None, to use the default 3-fold cross validation,
- integer, to specify the number of folds in a (Stratified)KFold,
- An object to be used as a cross-validation generator.
- An iterable yielding train, test splits.

For integer/None inputs, if the estimator is a classifier and y is either binary or multiclass, StratifiedKFold is used. In all other cases, KFold is used.

Refer User Guide for the various cross-validation strategies that can be used here.
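
For context, the "iterable yielding train, test splits" mentioned above is exactly what scikit-learn's own splitters produce: a pair of index arrays per fold. A minimal sketch of that format (X_demo is an illustrative name, not part of the question):

from sklearn.datasets import load_iris
from sklearn.model_selection import KFold

X_demo, _ = load_iris(return_X_y=True)
# each iteration yields a (train_indices, test_indices) pair of numpy integer arrays
for train_idx, test_idx in KFold(n_splits=3).split(X_demo):
    print(train_idx.shape, test_idx.shape)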

I can't use cv=10 on my specific dataset. The example below is only to illustrate the error.

I'd like to use a list of lists for the cross-validation train/test splits, as the documentation says. How do I format my list of cross-validation splits correctly?

# Imports needed to run the example
import numpy as np
import pandas as pd
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

# Generate data
def iris_data(noise=None, palette="hls", desat=1):
    # Iris dataset as a labeled DataFrame/Series
    X = pd.DataFrame(load_iris().data,
                     index = [*map(lambda x:f"iris_{x}", range(150))],
                     columns = [*map(lambda x: x.split(" (cm)")[0].replace(" ","_"), load_iris().feature_names)])

    y = pd.Series(load_iris().target,
                  index = X.index,
                  name = "Species")
    # the custom map_colors helper from the original isn't defined here;
    # a direct label-to-color mapping keeps the example self-contained
    cmap = y.map(lambda x:{0:"red",1:"green",2:"blue"}[x])

    if noise is not None:
        # append `noise` columns of Gaussian noise to the features
        X_noise = pd.DataFrame(
            np.random.RandomState(0).normal(size=(X.shape[0], noise)),
            index=X.index,
            columns=[*map(lambda x:f"noise_{x}", range(noise))]
        )
        X = pd.concat([X, X_noise], axis=1)
    return (X, y, cmap)

X, y, c = iris_data(noise=50)

# Get cross-validation splits: 10 random train/test partitions of the 150 rows
cv = list()
for i in range(10):
    idx_tr = np.random.choice(np.arange(X.shape[0]), size=100, replace=False)
    idx_te = set(range(X.shape[0])) - set(idx_tr)
    tr_te_splits = [idx_tr.tolist(), list(idx_te)]
    cv.append(tr_te_splits)

# Get hyperparameter search space
search_spaces = {
    "n_estimators": [1,10,50],
    "criterion": ["gini", "entropy"],
    "max_features": ["sqrt", "log2", None],
    "min_samples_leaf": [1,2,3,5,8,13],
}

opt = GridSearchCV(RandomForestClassifier(random_state=0), search_spaces, scoring="accuracy", n_jobs=1, cv=cv)
opt.fit(X, y)

---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-26-d1117d10dfa6> in <module>()
     59 }
     60 opt = GridSearchCV(RandomForestClassifier(random_state=0), search_spaces, scoring="accuracy", n_jobs=1, cv=cv)
---> 61 opt.fit(X,y)

~/anaconda/envs/python3/lib/python3.6/site-packages/sklearn/model_selection/_search.py in fit(self, X, y, groups, **fit_params)
    637                                   error_score=self.error_score)
    638           for parameters, (train, test) in product(candidate_params,
--> 639                                                    cv.split(X, y, groups)))
    640 
    641         # if one choose to see train score, "out" will contain train score info

~/anaconda/envs/python3/lib/python3.6/site-packages/sklearn/externals/joblib/parallel.py in __call__(self, iterable)
    777             # was dispatched. In particular this covers the edge
    778             # case of Parallel used with an exhausted iterator.
--> 779             while self.dispatch_one_batch(iterator):
    780                 self._iterating = True
    781             else:

~/anaconda/envs/python3/lib/python3.6/site-packages/sklearn/externals/joblib/parallel.py in dispatch_one_batch(self, iterator)
    623                 return False
    624             else:
--> 625                 self._dispatch(tasks)
    626                 return True
    627 

~/anaconda/envs/python3/lib/python3.6/site-packages/sklearn/externals/joblib/parallel.py in _dispatch(self, batch)
    586         dispatch_timestamp = time.time()
    587         cb = BatchCompletionCallBack(dispatch_timestamp, len(batch), self)
--> 588         job = self._backend.apply_async(batch, callback=cb)
    589         self._jobs.append(job)
    590 

~/anaconda/envs/python3/lib/python3.6/site-packages/sklearn/externals/joblib/_parallel_backends.py in apply_async(self, func, callback)
    109     def apply_async(self, func, callback=None):
    110         """Schedule a func to be run"""
--> 111         result = ImmediateResult(func)
    112         if callback:
    113             callback(result)

~/anaconda/envs/python3/lib/python3.6/site-packages/sklearn/externals/joblib/_parallel_backends.py in __init__(self, batch)
    330         # Don't delay the application, to avoid keeping the input
    331         # arguments in memory
--> 332         self.results = batch()
    333 
    334     def get(self):

~/anaconda/envs/python3/lib/python3.6/site-packages/sklearn/externals/joblib/parallel.py in __call__(self)
    129 
    130     def __call__(self):
--> 131         return [func(*args, **kwargs) for func, args, kwargs in self.items]
    132 
    133     def __len__(self):

~/anaconda/envs/python3/lib/python3.6/site-packages/sklearn/externals/joblib/parallel.py in <listcomp>(.0)
    129 
    130     def __call__(self):
--> 131         return [func(*args, **kwargs) for func, args, kwargs in self.items]
    132 
    133     def __len__(self):

~/anaconda/envs/python3/lib/python3.6/site-packages/sklearn/model_selection/_validation.py in _fit_and_score(estimator, X, y, scorer, train, test, verbose, parameters, fit_params, return_train_score, return_parameters, return_n_test_samples, return_times, error_score)
    446     start_time = time.time()
    447 
--> 448     X_train, y_train = _safe_split(estimator, X, y, train)
    449     X_test, y_test = _safe_split(estimator, X, y, test, train)
    450 

~/anaconda/envs/python3/lib/python3.6/site-packages/sklearn/utils/metaestimators.py in _safe_split(estimator, X, y, indices, train_indices)
    198             X_subset = X[np.ix_(indices, train_indices)]
    199     else:
--> 200         X_subset = safe_indexing(X, indices)
    201 
    202     if y is not None:

~/anaconda/envs/python3/lib/python3.6/site-packages/sklearn/utils/__init__.py in safe_indexing(X, indices)
    144     if hasattr(X, "iloc"):
    145         # Work-around for indexing with read-only indices in pandas
--> 146         indices = indices if indices.flags.writeable else indices.copy()
    147         # Pandas Dataframes and Series
    148         try:

AttributeError: 'list' object has no attribute 'flags'


Recommended answer

Since the input objects X and y are pandas objects, I believe they require named indices. If I convert them to numpy via the .values method, then it works. You just need to make sure the order is correct if you do it this way.
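
A minimal sketch of that workaround, reusing X, y, cv, and search_spaces from the question's code above; the only change is passing numpy arrays to fit:

# Convert the pandas objects to numpy; .values preserves row order,
# so the positional index lists in `cv` still line up with the data.
X_np = X.values
y_np = y.values

opt = GridSearchCV(RandomForestClassifier(random_state=0), search_spaces,
                   scoring="accuracy", n_jobs=1, cv=cv)
opt.fit(X_np, y_np)
print(opt.best_params_)
print(opt.best_score_)

The traceback also hints at an alternative that goes beyond the original answer (an untested assumption): safe_indexing only reads indices.flags, which plain Python lists lack, so keeping the pandas objects but building cv from pairs of numpy index arrays may avoid the error as well.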
