Balanced_accuracy is not a valid scoring value in scikit-learn


Problem Description


Super similar to this post: ValueError: 'balanced_accuracy' is not a valid scoring value in scikit-learn

I am using:

from sklearn.model_selection import cross_validate
from sklearn.tree import DecisionTreeClassifier

scoring = ['precision_macro', 'recall_macro', 'balanced_accuracy_score']
clf = DecisionTreeClassifier(random_state=0)
scores = cross_validate(clf, X, y, scoring=scoring, cv=10, return_train_score=True)

And I receive the error:

ValueError: 'balanced_accuracy_score' is not a valid scoring value. Use sorted(sklearn.metrics.SCORERS.keys()) to get valid options.

I did the recommended solution and upgraded scikit-learn (in the environment):

When I check the possible scorers:

sklearn.metrics.SCORERS.keys()
dict_keys(['explained_variance', 'r2', 'max_error', 'neg_median_absolute_error', 'neg_mean_absolute_error', 'neg_mean_squared_error', 'neg_mean_squared_log_error', 'neg_root_mean_squared_error', 'neg_mean_poisson_deviance', 'neg_mean_gamma_deviance', 'accuracy', 'roc_auc', 'roc_auc_ovr', 'roc_auc_ovo', 'roc_auc_ovr_weighted', 'roc_auc_ovo_weighted', 'balanced_accuracy', 'average_precision', 'neg_log_loss', 'neg_brier_score', 'adjusted_rand_score', 'homogeneity_score', 'completeness_score', 'v_measure_score', 'mutual_info_score', 'adjusted_mutual_info_score', 'normalized_mutual_info_score', 'fowlkes_mallows_score', 'precision', 'precision_macro', 'precision_micro', 'precision_samples', 'precision_weighted', 'recall', 'recall_macro', 'recall_micro', 'recall_samples', 'recall_weighted', 'f1', 'f1_macro', 'f1_micro', 'f1_samples', 'f1_weighted', 'jaccard', 'jaccard_macro', 'jaccard_micro', 'jaccard_samples', 'jaccard_weighted'])

I still cannot find it. Where is the problem?

Solution

According to the docs for valid scorers, the value of the scoring parameter corresponding to the balanced_accuracy_score metric function is "balanced_accuracy", as in my other answer:
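This correspondence can be checked directly with sklearn.metrics.get_scorer, which resolves valid scorer names and raises ValueError for anything else (a minimal sketch):

```python
from sklearn.metrics import get_scorer

# The scoring-parameter name resolves to a callable scorer object...
scorer = get_scorer('balanced_accuracy')
print(scorer)

# ...while the metric function's name is rejected with a ValueError.
try:
    get_scorer('balanced_accuracy_score')
except ValueError as err:
    print(err)
```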

Change:

scoring = ['precision_macro', 'recall_macro', 'balanced_accuracy_score']

to:

scoring = ['precision_macro', 'recall_macro', 'balanced_accuracy']

and it should work.
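Put together, a minimal runnable sketch of the fixed call, using a hypothetical synthetic dataset in place of the question's X and y:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_validate
from sklearn.tree import DecisionTreeClassifier

# Toy data standing in for the question's X and y.
X, y = make_classification(n_samples=200, random_state=0)

# Use the scorer name "balanced_accuracy", not the function name
# "balanced_accuracy_score", in the scoring list.
scoring = ['precision_macro', 'recall_macro', 'balanced_accuracy']
clf = DecisionTreeClassifier(random_state=0)
scores = cross_validate(clf, X, y, scoring=scoring, cv=10, return_train_score=True)

# cross_validate returns one test_/train_ entry per requested scorer.
print(sorted(scores.keys()))
```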

I do find the documentation a bit lacking in this respect, and the convention of dropping the _score suffix is not applied consistently either: all the clustering metrics keep _score in their scoring parameter names.
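The inconsistency is easy to see by listing the registered names. Note that sklearn.metrics.SCORERS has been deprecated in newer scikit-learn releases in favour of get_scorer_names() (exact version boundaries are from memory, so this sketch tries both):

```python
try:
    # newer scikit-learn releases
    from sklearn.metrics import get_scorer_names
    names = set(get_scorer_names())
except ImportError:
    # older releases expose a plain dict of scorers
    import sklearn.metrics
    names = set(sklearn.metrics.SCORERS)

# Classification metrics drop the _score suffix...
print('balanced_accuracy' in names, 'balanced_accuracy_score' in names)

# ...but clustering metrics keep it.
print(sorted(n for n in names if n.endswith('_score')))
```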
