Epoch time increases when using for loop in PyCharm


Problem description

The increase in network size is not the cause of the problem.

Here is my code:

# imports implied by the snippet (Keras 2.x and matplotlib)
import matplotlib.pyplot as plt
from keras.models import Sequential
from keras.layers import Dense, Dropout

for i in [32, 64, 128, 256, 512]:
    for j in [32, 64, 128, 256, 512]:
        for k in [32, 64, 128, 256, 512]:
            for l in [0.1, 0.2, 0.3, 0.4, 0.5]:

                model = Sequential()
                model.add(Dense(i))
                model.add(Dropout(l))

                model.add(Dense(j))
                model.add(Dropout(l))

                model.add(Dense(k))
                model.add(Dropout(l))

                model.compile(~)    # compile arguments elided in the question

                hist = model.fit(~)  # fit arguments elided in the question

                plt.savefig(str(count) + '.png')
                plt.clf()

                f = open(str(count) + '.csv', 'w')
                text = ~             # CSV contents elided in the question
                f.write(text)
                f.close()
                count += 1
                print()
                print("count :" + str(count))
                print()

I initially set count to 0.

When count is 460~479, the epoch time is:

Train on 7228 samples, validate on 433 samples
Epoch 1/10
 - 2254s - loss: 0.0045 - acc: 1.3835e-04 - val_loss: 0.0019 - val_acc: 0.0000e+00
Epoch 2/10
 - 86s - loss: 0.0020 - acc: 1.3835e-04 - val_loss: 0.0030 - val_acc: 0.0000e+00
Epoch 3/10
 - 85s - loss: 0.0017 - acc: 1.3835e-04 - val_loss: 0.0016 - val_acc: 0.0000e+00
Epoch 4/10
 - 86s - loss: 0.0015 - acc: 1.3835e-04 - val_loss: 1.6094e-04 - val_acc: 0.0000e+00
Epoch 5/10
 - 86s - loss: 0.0014 - acc: 1.3835e-04 - val_loss: 1.4120e-04 - val_acc: 0.0000e+00
Epoch 6/10
 - 85s - loss: 0.0013 - acc: 1.3835e-04 - val_loss: 3.8155e-04 - val_acc: 0.0000e+00
Epoch 7/10
 - 85s - loss: 0.0012 - acc: 1.3835e-04 - val_loss: 4.1694e-04 - val_acc: 0.0000e+00
Epoch 8/10
 - 85s - loss: 0.0012 - acc: 1.3835e-04 - val_loss: 4.8163e-04 - val_acc: 0.0000e+00
Epoch 9/10
 - 86s - loss: 0.0011 - acc: 1.3835e-04 - val_loss: 3.8670e-04 - val_acc: 0.0000e+00
Epoch 10/10
 - 85s - loss: 9.9018e-04 - acc: 1.3835e-04 - val_loss: 0.0016 - val_acc: 0.0000e+00

But when I restart PyCharm and count is 480, the epoch time is:

Train on 7228 samples, validate on 433 samples
Epoch 1/10
 - 151s - loss: 0.0071 - acc: 1.3835e-04 - val_loss: 0.0018 - val_acc: 0.0000e+00
Epoch 2/10
 - 31s - loss: 0.0038 - acc: 1.3835e-04 - val_loss: 0.0014 - val_acc: 0.0000e+00
Epoch 3/10
 - 32s - loss: 0.0031 - acc: 1.3835e-04 - val_loss: 2.0248e-04 - val_acc: 0.0000e+00
Epoch 4/10
 - 32s - loss: 0.0026 - acc: 1.3835e-04 - val_loss: 3.7600e-04 - val_acc: 0.0000e+00
Epoch 5/10
 - 32s - loss: 0.0021 - acc: 1.3835e-04 - val_loss: 4.3882e-04 - val_acc: 0.0000e+00
Epoch 6/10
 - 32s - loss: 0.0020 - acc: 1.3835e-04 - val_loss: 0.0037 - val_acc: 0.0000e+00
Epoch 7/10
 - 32s - loss: 0.0021 - acc: 1.3835e-04 - val_loss: 1.2072e-04 - val_acc: 0.0000e+00
Epoch 8/10
 - 32s - loss: 0.0019 - acc: 1.3835e-04 - val_loss: 0.0031 - val_acc: 0.0000e+00
Epoch 9/10
 - 33s - loss: 0.0018 - acc: 1.3835e-04 - val_loss: 0.0051 - val_acc: 0.0000e+00
Epoch 10/10
 - 33s - loss: 0.0018 - acc: 1.3835e-04 - val_loss: 3.2728e-04 - val_acc: 0.0000e+00

I just started it again, and the epoch time became much faster.

I don't know why this happened.

I am using Python 3.6 with tensorflow-gpu 1.13.1 and CUDA 10.0. The OS is Windows 10 Pro 1903 (build 18362.239), and PyCharm is the 2019.1.1 Community Edition.

I just used a for loop, and I wonder why this happened.

I changed the number of units in the for loop.

I also saved the figure with plt.savefig and saved the data in .csv format.

I am also asking how to fix this.

Recommended answer

You should use:

from keras import backend as K
K.clear_session()

before creating the model (i.e. before model = Sequential()). That's because:

Ops are not garbage collected by TF, so you always add more nodes to the graph.

So if we don't use K.clear_session(), a memory leak occurs: every model built in the loop is added to the same ever-growing graph, which is why each successive run trains more slowly.
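
As a minimal sketch of how this fits into the loop from the question: call K.clear_session() at the top of each iteration, before Sequential() is built. The shortened grids, input size, loss, optimizer, and dummy data below are illustrative assumptions, not the question's actual settings.

import numpy as np
from keras import backend as K
from keras.models import Sequential
from keras.layers import Dense, Dropout

# Illustrative stand-in data; the real inputs come from the question's dataset.
x_train = np.random.rand(100, 10)
y_train = np.random.rand(100, 1)

for i in [32, 64]:            # shortened grid, just to show the structure
    for l in [0.1, 0.5]:
        # Drop the previous graph so nodes from earlier iterations are not
        # kept around; this is what keeps the epoch time from growing.
        K.clear_session()

        model = Sequential()
        model.add(Dense(i, input_dim=10))
        model.add(Dropout(l))
        model.add(Dense(1))

        # Assumed loss/optimizer, for illustration only.
        model.compile(loss='mse', optimizer='adam')
        model.fit(x_train, y_train, epochs=2, verbose=0)

With this in place, each iteration starts from a fresh graph instead of stacking new ops onto the previous one.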

Thanks to @dref360 from keras.io on Slack.
