How to interpret results of h2o.predict


Question

After running h2o.deeplearning for a binary classification problem I then run the h2o.predict and obtain the following results

  predict        No       Yes
1      No 0.9784425 0.0215575
2     Yes 0.4667428 0.5332572
3     Yes 0.3955087 0.6044913
4     Yes 0.7962034 0.2037966
5     Yes 0.7413591 0.2586409
6     Yes 0.6800801 0.3199199

I was hoping to get a confusion matrix with only two rows. But this seems to be quite different. How do I interpret these results? Is there any way of getting something like a confusion matrix with actual and predicted values and error percentage?

Answer

You can either extract that information from the model fit (for example, if you pass a validation_frame), or you can use h2o.performance() to obtain an H2OBinomialModel performance object and extract the confusion matrix using h2o.confusionMatrix().

Example:

# Option 1: supply a validation_frame when training, then read the
# validation confusion matrix directly off the fitted model.
fit <- h2o.deeplearning(x, y, training_frame = train, validation_frame = valid, ...)
h2o.confusionMatrix(fit, valid = TRUE)

# Option 2: score a separate test frame with h2o.performance() and
# extract the confusion matrix from the resulting performance object.
fit <- h2o.deeplearning(x, y, train, ...)
perf <- h2o.performance(fit, test)
h2o.confusionMatrix(perf)
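To connect this back to the h2o.predict output: the predict column is simply the class label obtained by thresholding the Yes probability (H2O picks a threshold based on the training metrics rather than a fixed 0.5, which is why row 4 above can be labeled Yes despite a Yes probability of only 0.20), and a confusion matrix is just a cross-tabulation of those predicted labels against the actuals. As a language-neutral illustration of that arithmetic (Python here, with made-up labels, not the h2o API):

```python
# Illustration only: build a 2x2 confusion matrix and error rate by hand.
# The actual/predicted labels below are invented for the example; in
# practice they would come from the data set and from h2o.predict.
from collections import Counter

actual    = ["No", "Yes", "Yes", "No", "Yes", "No"]
predicted = ["No", "Yes", "No",  "No", "Yes", "Yes"]

# Count each (actual, predicted) pair.
counts = Counter(zip(actual, predicted))

# Print rows: actual class, then counts of predicted No / predicted Yes.
for label in ["No", "Yes"]:
    row = [counts[(label, p)] for p in ["No", "Yes"]]
    print(label, row)

# Error rate = misclassified / total.
errors = sum(a != p for a, p in zip(actual, predicted))
print("error rate:", errors / len(actual))
```

This is exactly the kind of table (per-class counts plus an error rate) that h2o.confusionMatrix() reports for you.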
