Categorical crossentropy: need to use categorical_accuracy or accuracy as the metric in Keras?

Question
I'm currently doing research on multi-class classification. I used categorical crossentropy, and I got a really good result using accuracy as the metric for the experiment. When I tried categorical_accuracy instead, the accuracy was slightly worse (about 1% lower). My question is: is it OK to use the accuracy metric with categorical crossentropy loss instead of categorical_accuracy?
Answer
Keras detects the output_shape and automatically determines which accuracy to use when accuracy is specified. For multi-class classification, categorical_accuracy is used internally. From the source:
if metric == 'accuracy' or metric == 'acc':
    # custom handling of accuracy
    # (because of class mode duality)
    output_shape = self.internal_output_shapes[i]
    acc_fn = None
    if output_shape[-1] == 1 or self.loss_functions[i] == losses.binary_crossentropy:
        # case: binary accuracy
        acc_fn = metrics_module.binary_accuracy
    elif self.loss_functions[i] == losses.sparse_categorical_crossentropy:
        # case: categorical accuracy with sparse targets
        acc_fn = metrics_module.sparse_categorical_accuracy
    else:
        acc_fn = metrics_module.categorical_accuracy
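To see what the dispatched metric actually computes, here is a minimal NumPy sketch; the function names mirror Keras's metrics module, but this is an illustrative reimplementation, not the library code:

```python
import numpy as np

def categorical_accuracy(y_true, y_pred):
    # Fraction of samples where the argmax of the one-hot target
    # matches the argmax of the predicted probabilities.
    return float(np.mean(np.argmax(y_true, axis=-1) == np.argmax(y_pred, axis=-1)))

def binary_accuracy(y_true, y_pred, threshold=0.5):
    # Fraction of entries where the thresholded prediction equals the target.
    return float(np.mean(y_true == (y_pred > threshold).astype(y_true.dtype)))

# Three samples, three classes (one-hot targets)
y_true = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1]])
y_pred = np.array([[0.7, 0.2, 0.1],   # argmax 0 -> correct
                   [0.3, 0.4, 0.3],   # argmax 1 -> correct
                   [0.5, 0.4, 0.1]])  # argmax 0 -> wrong

print(categorical_accuracy(y_true, y_pred))  # 2 of 3 correct
```

Since the targets here have more than one column and the loss is categorical_crossentropy, the dispatch above would fall through to the final else branch and pick categorical_accuracy, which is exactly what 'accuracy' resolves to in this setting.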
The 1% difference you are seeing can likely be attributed to run-to-run variation: stochastic gradient descent will find different minima unless the same random seed is used.
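To rule out run-to-run variation when comparing the two metrics, you can fix the seeds before building the model. A minimal sketch (SEED is an arbitrary value; the TensorFlow line is commented out so the snippet runs without TensorFlow installed):

```python
import random
import numpy as np

SEED = 42  # arbitrary, but must be the same across both runs

random.seed(SEED)
np.random.seed(SEED)
# If using TensorFlow/Keras, also seed its graph-level RNG:
# import tensorflow as tf
# tf.random.set_seed(SEED)

# Seeding makes "random" draws (e.g. weight initialization) reproducible:
np.random.seed(SEED)
a = np.random.randn(3)
np.random.seed(SEED)
b = np.random.randn(3)
print(np.array_equal(a, b))  # True
```

With identical seeds, any remaining accuracy difference between the 'accuracy' and 'categorical_accuracy' runs would be meaningful rather than noise; in practice they should then report the same number, since both resolve to the same metric here.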