h2o vs scikit learn confusion matrix

Problem description

Anyone able to match the sklearn confusion matrix to h2o?

They never match....

Doing something similar with Keras produces a perfect match.

But in h2o they are always off. Tried it every which way...

Borrowed some code from: Any difference between H2O and Scikit-Learn metrics scoring?

# In[30]:
import pandas as pd
import h2o
from h2o.estimators.gbm import H2OGradientBoostingEstimator
h2o.init()

# Import a sample binary outcome train/test set into H2O
train = h2o.import_file("https://s3.amazonaws.com/erin-data/higgs/higgs_train_10k.csv")
test = h2o.import_file("https://s3.amazonaws.com/erin-data/higgs/higgs_test_5k.csv")

# Identify predictors and response
x = train.columns
y = "response"
x.remove(y)

# For binary classification, response should be a factor
train[y] = train[y].asfactor()
test[y] = test[y].asfactor()

# Train and cross-validate a GBM
model = H2OGradientBoostingEstimator(distribution="bernoulli", seed=1)
model.train(x=x, y=y, training_frame=train)

# In[31]:
# Test AUC
model.model_performance(test).auc()
# 0.7817203808052897

# In[32]:

# Generate predictions on a test set
pred = model.predict(test)

# In[33]:

from sklearn.metrics import roc_auc_score, confusion_matrix

pred_df = pred.as_data_frame()
y_true = test[y].as_data_frame()

roc_auc_score(y_true, pred_df['p1'].tolist())
#pred_df.head()

# In[36]:

y_true = test[y].as_data_frame().values
cm = pd.DataFrame(confusion_matrix(y_true, pred_df['predict'].values))

# In[37]:

print(cm)
    0     1
0  1354   961
1   540  2145

# In[38]:
model.model_performance(test).confusion_matrix()

Confusion Matrix (Act/Pred) for max f1 @ threshold = 0.353664307031828: 

        0       1       Error   Rate
0       964.0   1351.0  0.5836  (1351.0/2315.0)
1       274.0   2411.0  0.102   (274.0/2685.0)
Total   1238.0  3762.0  0.325   (1625.0/5000.0)
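
For reference (an addition, not part of the original post): the h2o matrix above is computed at the max-F1 threshold it prints (~0.3537), not at 0.5, and that threshold is recomputed on the test set, while the labels in pred['predict'] came from a threshold chosen during training. A minimal sketch that reuses h2o's own test-set threshold to rebuild the matrix with sklearn, assuming the model, pred_df, and y_true defined above (whether ties at the threshold go to class 1 is an assumption here):

# Rebuild h2o's confusion matrix in sklearn by reusing its max-F1 threshold
perf = model.model_performance(test)
thr = perf.find_threshold_by_max_metric('F1')   # ~0.3537 in this run
manual = (pred_df['p1'] >= thr).astype(int)     # assumed: ties go to class 1
print(pd.DataFrame(confusion_matrix(y_true.ravel(), manual)))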

# In[39]:
h2o.cluster().shutdown()

Answer

I also ran into the same issue. Here is what I would do to make a fair comparison:

# Retrain, this time passing an explicit validation frame
model.train(x=x, y=y, training_frame=train, validation_frame=test)
# h2o's confusion matrix at the validation max-F1 threshold
cm1 = model.confusion_matrix(metrics=['F1'], valid=True)

Since the model is now trained with both a training and a validation frame, pred['predict'] will use the threshold which maximizes the F1 score on the validation data. To verify, one can use these lines:

perf = model.model_performance(valid=True)
threshold = perf.find_threshold_by_max_metric(metric='F1')
pred_df['predict'] = pred_df['p1'].apply(lambda x: 0 if x < threshold else 1)
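
As a quick sanity check (an addition, assuming model.predict(test) is re-run after the retraining above): a fresh export of h2o's own predict column should now agree with the manually thresholded labels, except possibly on rows sitting exactly at the threshold:

# Compare the manual labels against h2o's own predict column after retraining
fresh = model.predict(test).as_data_frame()
print((pred_df['predict'] == fresh['predict']).mean())  # expect 1.0, or just below it for boundary ties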

To get another confusion matrix from scikit learn:

from sklearn.metrics import confusion_matrix

cm2 = confusion_matrix(y_true, pred_df['predict'])

In my case, I don't understand why I get slightly different results. Something like, for example:

print(cm1)
>> [[3063  176]
    [  94  146]]

print(cm2)
>> [[3063  176]
    [  95  145]]
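
A hedged guess at this off-by-one (an addition, not from the original answer): as_data_frame() round-trips values through CSV, so a p1 that sits at, or within rounding distance of, the threshold can land on different sides of it in h2o and in the exported frame. A quick way to look for such rows, assuming the threshold and pred_df from above:

# Inspect rows whose p1 is (almost) exactly the threshold
import numpy as np
near = np.isclose(pred_df['p1'], threshold, atol=1e-7)
print(pred_df.loc[near, ['p1', 'predict']])  # any row here can shift a count by one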
