Native TF vs Keras TF performance comparison


Problem Description


I created exactly the same network with native TensorFlow and with Keras on the TensorFlow backend, but after many hours of testing with a number of different parameters, I still couldn't figure out why Keras outperforms native TensorFlow and produces slightly, but consistently, better results.


Does Keras implement a different weight initializer? Or does it perform a weight decay approach other than tf.train.inverse_time_decay?
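For reference, tf.train.inverse_time_decay computes the decayed learning rate as base_lr / (1 + decay_rate * step / decay_steps). A plain-Python sketch of that formula (function name and sample values are illustrative, not from the original post):

```python
def inverse_time_decay(base_lr, step, decay_steps, decay_rate):
    # Same formula as tf.train.inverse_time_decay:
    # decayed_lr = base_lr / (1 + decay_rate * step / decay_steps)
    return base_lr / (1.0 + decay_rate * step / decay_steps)

# Learning rate shrinks hyperbolically as the global step grows.
lrs = [inverse_time_decay(0.1, s, decay_steps=100, decay_rate=0.5)
       for s in (0, 100, 400)]
```

If the two implementations schedule the learning rate differently (or Keras applies no decay at all), the optimization trajectories will diverge even with identical architectures.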


P.S. The score difference is always roughly:

Keras with TensorFlow: ~0.9850 - 0.9885, ~45 sec. avg. training time per epoch
Native TensorFlow: ~0.9780 - 0.9830, ~23 sec.

My environment:

Python 3.5.2 -Anaconda / Windows 10
CUDA: 8.0 with cuDNN 5.1
Keras 1.2.1
Tensorflow 0.12.1
Nvidia Geforce GTX 860M

The keras.json file:

{
    "image_dim_ordering": "tf", 
    "epsilon": 1e-07, 
    "floatx": "float32", 
    "backend": "tensorflow"
}


You can also copy and execute the following files:

https://github.com/emrahyigit/deep/blob/master/keras_cnn_mnist.py
https://github.com/emrahyigit/deep/blob/master/tf_cnn_mnist.py
https://github.com/emrahyigit/deep/blob/master/mnist.py

Recommended Answer


The problem was due to incorrect use of the keep_prob parameter of the dropout layer: this parameter should be fed different values for the training and test passes (dropout active during training, disabled at test time).
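The fix described above — dropping units only during training and passing keep_prob = 1 at evaluation time — can be sketched without TensorFlow. This is a minimal NumPy illustration of inverted dropout (the function and variable names are illustrative, not the author's code):

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout(x, keep_prob, training):
    """Inverted dropout: active only during training, identity at test time."""
    if not training or keep_prob >= 1.0:
        return x                        # test time: no units dropped, no scaling
    mask = rng.random(x.shape) < keep_prob
    return x * mask / keep_prob         # scale kept units so E[output] == x

x = np.ones((4, 8))
train_out = dropout(x, keep_prob=0.5, training=True)   # units zeroed and rescaled
test_out = dropout(x, keep_prob=0.5, training=False)   # input passed through unchanged
```

Keras handles this switch automatically via its learning-phase flag, whereas in native TensorFlow the keep_prob placeholder must be fed explicitly (e.g. 0.5 in the training feed_dict, 1.0 in the evaluation feed_dict) — forgetting that leaves dropout active at test time and depresses the score.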
