TensorFlowDNNClassifier class is deprecated but replacement does not seem to work?


Problem description


Using the following with TF 0.9.0rc0 on 60,000 (train) and 26,000 (test) or so records, with 145 coded columns (1, 0), trying to predict 1 or 0 for class identification:

classifier_TensorFlow = learn.TensorFlowDNNClassifier(hidden_units=[10, 20, 10], n_classes=2, steps=100)
classifier_TensorFlow.fit(X_train, y_train.ravel())

I get:

WARNING:tensorflow:TensorFlowDNNClassifier class is deprecated. Please consider using DNNClassifier as an alternative.
Out[34]:TensorFlowDNNClassifier(steps=100, batch_size=32)

And then good results quite fast:

score = metrics.accuracy_score(y_test, classifier_TensorFlow.predict(X_test))
print('Accuracy: {0:f}'.format(score))
Accuracy: 0.923121

And:

print(metrics.confusion_matrix(y_test, X_pred_class))
[[23996   103]
 [ 1992    15]]

But when I try to use the new suggested method:

classifier_TensorFlow = learn.DNNClassifier(hidden_units=[10, 20, 10], n_classes=2)

It hangs with no completion? It won't take the "steps" parameter? I get no error messages or output, so there's not much to go on... Any ideas or hints? The documentation is a bit "light"?

Solution

I don't think it is a bug. From the source code of DNNClassifier I can tell that its usage differs from TensorFlowDNNClassifier's: the constructor of DNNClassifier doesn't have a steps param:

def __init__(self,
           hidden_units,
           feature_columns=None,
           model_dir=None,
           n_classes=2,
           weight_column_name=None,
           optimizer=None,
           activation_fn=nn.relu,
           dropout=None,
           config=None)

As you can see here. Instead, the fit() method that DNNClassifier inherits from BaseEstimator now has the steps param; notice that the same happens with batch_size:

  def fit(self, x=None, y=None, input_fn=None, steps=None, batch_size=None,
          monitors=None):

For the "it hangs with no completion?", in the doc of the fit() method of BaseEstimator it is explained that if steps is None (as the value by default), the model will train forever.

I still don't get why anyone would want to train a model forever. My guess is that the creators think this is better for the classifier when you want early stopping on validation data, but as I said, that is only my guess.

As you may have noticed, DNNClassifier doesn't give any feedback the way the deprecated TensorFlowDNNClassifier did. The feedback is supposedly set up with the 'config' param present in DNNClassifier's constructor: you pass a RunConfig object as config and set the verbosity through that object's params. Unfortunately, I tried setting it so I could see the progress of the loss, but had no luck.
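If you want to experiment with that yourself, the rough shape is sketched below. Note the caveats: RunConfig's exact constructor parameters, whether it is exported as learn.RunConfig in this release, and whether tf.logging.set_verbosity exists in TF 0.9 are assumptions on my part, and as said above I never actually got the loss progress to print this way:

import tensorflow as tf
from tensorflow.contrib import learn  # assumed import path

# Raise TF's log level so estimator progress would be printed if supported
# (tf.logging.set_verbosity exists in later releases; it may not in 0.9).
tf.logging.set_verbosity(tf.logging.INFO)

# Pass a RunConfig so there is an object to hang logging/checkpoint settings on;
# the accepted parameter names vary between releases, so none are set here.
config = learn.RunConfig()

classifier_TensorFlow = learn.DNNClassifier(hidden_units=[10, 20, 10],
                                            n_classes=2,
                                            config=config)
classifier_TensorFlow.fit(X_train, y_train.ravel(), steps=100, batch_size=32)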

I recommend taking a look at the latest posts on Yuan Tang's blog here; he is one of the creators of skflow, aka tf.learn.
