Tensorboard and Dropout Layers


Problem description

I have a very basic query. I have made 4 almost identical CNNs (the difference being the input shapes) and have merged them while connecting them to a feed-forward network of fully connected layers.

Code for the almost identical CNNs:

# One of the four nearly identical CNN branches (the others differ only in input shape).
from keras.models import Sequential
from keras.layers import Convolution2D, MaxPooling2D, Dropout, Flatten

model3 = Sequential()
model3.add(Convolution2D(32, (3, 3), activation='relu', padding='same',
                         input_shape=(batch_size[3], seq_len, channels)))
model3.add(MaxPooling2D(pool_size=(2, 2)))
model3.add(Dropout(0.1))
model3.add(Convolution2D(64, (3, 3), activation='relu', padding='same'))
model3.add(MaxPooling2D(pool_size=(2, 2)))
model3.add(Flatten())
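
For context, here is a minimal sketch of how the four branches might be merged into one fully connected head and how the graph can be logged for TensorBoard. The names model1, model2, model4, num_classes, x1..x4, y and the ./logs directory are illustrative assumptions, not part of the original question:

from keras.models import Model
from keras.layers import Dense, concatenate
from keras.callbacks import TensorBoard

# Hypothetical merge of the four branches (model1..model4, each built like model3 above).
merged = concatenate([model1.output, model2.output, model3.output, model4.output])
x = Dense(128, activation='relu')(merged)
out = Dense(num_classes, activation='softmax')(x)  # num_classes is assumed

full_model = Model(inputs=[model1.input, model2.input, model3.input, model4.input],
                   outputs=out)
full_model.compile(optimizer='adam', loss='categorical_crossentropy')

# The TensorBoard callback writes the graph to ./logs; inspect it with
# `tensorboard --logdir ./logs`.
tb = TensorBoard(log_dir='./logs', write_graph=True)
# full_model.fit([x1, x2, x3, x4], y, callbacks=[tb])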

But on TensorBoard I see that all the Dropout layers are interconnected, and Dropout1 is a different color from Dropout2, 3, 4, etc., which are all the same color.

Recommended answer

I know this is an old question, but I had the same issue myself, and just now I realized what's going on.

Dropout is only applied while we're training the model; it should be deactivated by the time we're evaluating/predicting. For that purpose, Keras creates a learning_phase placeholder, set to 1.0 if we're training the model. This placeholder is created inside the first Dropout layer you create and is shared across all of them. That's what you're seeing there!
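
To make that concrete, here is a minimal sketch (assuming the TF1-era Keras backend API, and the batch_size, seq_len and channels variables from the question's code) that feeds the shared learning_phase placeholder explicitly; this placeholder is exactly the node that every Dropout layer connects to in the TensorBoard graph:

import numpy as np
import keras.backend as K

# The single shared learning_phase placeholder switches every Dropout layer on or off at once.
run_branch = K.function([model3.input, K.learning_phase()], [model3.output])

x = np.random.rand(1, batch_size[3], seq_len, channels)  # dummy input shaped like the real data
train_out = run_branch([x, 1])[0]  # learning_phase = 1 -> dropout active
test_out = run_branch([x, 0])[0]   # learning_phase = 0 -> dropout disabled

Because the placeholder is created once and then reused, TensorBoard draws an edge from it to every Dropout node, which is why they all appear interconnected.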
