Balanced_accuracy is not a valid scoring value in scikit-learn
Question
Super similar to this post: ValueError: 'balanced_accuracy' is not a valid scoring value in scikit-learn
I am using:

from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_validate

scoring = ['precision_macro', 'recall_macro', 'balanced_accuracy_score']
clf = DecisionTreeClassifier(random_state=0)
scores = cross_validate(clf, X, y, scoring=scoring, cv=10, return_train_score=True)
And I receive the error:
ValueError: 'balanced_accuracy_score' is not a valid scoring value. Use sorted(sklearn.metrics.SCORERS.keys()) to get valid options.
I did the recommended solution and upgraded scikit-learn (in the environment):
When I check the possible scorers:
sklearn.metrics.SCORERS.keys()
dict_keys(['explained_variance', 'r2', 'max_error', 'neg_median_absolute_error', 'neg_mean_absolute_error', 'neg_mean_squared_error', 'neg_mean_squared_log_error', 'neg_root_mean_squared_error', 'neg_mean_poisson_deviance', 'neg_mean_gamma_deviance', 'accuracy', 'roc_auc', 'roc_auc_ovr', 'roc_auc_ovo', 'roc_auc_ovr_weighted', 'roc_auc_ovo_weighted', 'balanced_accuracy', 'average_precision', 'neg_log_loss', 'neg_brier_score', 'adjusted_rand_score', 'homogeneity_score', 'completeness_score', 'v_measure_score', 'mutual_info_score', 'adjusted_mutual_info_score', 'normalized_mutual_info_score', 'fowlkes_mallows_score', 'precision', 'precision_macro', 'precision_micro', 'precision_samples', 'precision_weighted', 'recall', 'recall_macro', 'recall_micro', 'recall_samples', 'recall_weighted', 'f1', 'f1_macro', 'f1_micro', 'f1_samples', 'f1_weighted', 'jaccard', 'jaccard_macro', 'jaccard_micro', 'jaccard_samples', 'jaccard_weighted'])
I still cannot find it. Where is the problem?
According to the docs for valid scorers, the value of the scoring parameter corresponding to the balanced_accuracy_score scorer function is "balanced_accuracy", as in my other answer:
Change:
scoring = ['precision_macro', 'recall_macro', 'balanced_accuracy_score']
to:
scoring = ['precision_macro', 'recall_macro', 'balanced_accuracy']
and it should work.
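As a sanity check, here is a minimal runnable sketch of the corrected call. The iris data and the 10-fold CV are illustrative stand-ins for the question's X and y, and it assumes a scikit-learn version (>= 0.20) that includes the 'balanced_accuracy' scorer:

```python
# Sketch of the fixed call; iris is a stand-in for the question's data.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_validate

X, y = load_iris(return_X_y=True)

# Scoring strings drop the _score suffix of the metric function names.
scoring = ['precision_macro', 'recall_macro', 'balanced_accuracy']

clf = DecisionTreeClassifier(random_state=0)
scores = cross_validate(clf, X, y, scoring=scoring, cv=10, return_train_score=True)

# cross_validate returns one 'test_<name>' (and 'train_<name>') array per metric.
print(sorted(k for k in scores if k.startswith('test_')))
```

Each returned array holds one value per CV fold, so here every 'test_<name>' entry has length 10.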
I do find the documentation a bit lacking in this respect, and this convention of removing the _score suffix is not consistent either, as all the clustering metrics still keep _score in the names used as scoring parameter values.
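That inconsistency is easy to verify against the scorer registry itself. This sketch checks both spellings; note that newer scikit-learn releases replace the SCORERS dict with get_scorer_names(), hence the fallback:

```python
# Check which spellings are registered as valid scoring values.
try:
    from sklearn.metrics import get_scorer_names  # scikit-learn >= 1.0
    names = set(get_scorer_names())
except ImportError:
    from sklearn.metrics import SCORERS  # older releases
    names = set(SCORERS.keys())

# Classification metrics drop the _score suffix...
print('balanced_accuracy' in names)        # True
print('balanced_accuracy_score' in names)  # False
# ...while clustering metrics keep it.
print('adjusted_rand_score' in names)      # True
```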