GaussianDropout vs. Dropout vs. GaussianNoise in Keras


Question

Can anyone explain the difference between the different dropout styles? From the documentation, I assumed that instead of dropping some units to zero (dropout), GaussianDropout multiplies those units by some distribution. However, when testing in practice, all units are touched. The result looks more like classic GaussianNoise.

import numpy as np
import tensorflow as tf

tf.random.set_seed(0)
layer = tf.keras.layers.GaussianDropout(.05, input_shape=(2,))
data = np.arange(10).reshape(5, 2).astype(np.float32)
print(data)

outputs = layer(data, training=True)
print(outputs)

Result:

[[0. 1.]
 [2. 3.]
 [4. 5.]
 [6. 7.]
 [8. 9.]]
tf.Tensor(
[[0.    1.399]
 [1.771 2.533]
 [4.759 3.973]
 [5.562 5.94 ]
 [8.882 9.891]], shape=(5, 2), dtype=float32)

Apparently, this is what I wanted all along:

from tensorflow.keras import backend as K

def RealGaussianDropout(x, rate, stddev):
    # keep each element untouched with probability 1 - rate
    random_tensor = tf.random.uniform(tf.shape(x))
    keep_mask = tf.cast(random_tensor >= rate, tf.float32)
    # add 0-centered Gaussian noise only to the "dropped" elements
    noised = x + K.random_normal(tf.shape(x), mean=0.0, stddev=stddev)
    ret = tf.multiply(x, keep_mask) + tf.multiply(noised, (1 - keep_mask))
    return ret


outputs = RealGaussianDropout(data, 0.2, 0.1)
print(outputs)
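
A rough sanity check (my own addition, assuming the function above): on average about a `rate` fraction of the elements should differ from the input, while the rest pass through untouched:

changed = tf.reduce_mean(tf.cast(outputs != data, tf.float32))
print(changed.numpy())  # ≈ 0.2 on average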

Answer

You are right: GaussianDropout and GaussianNoise are very similar. You can check the similarities yourself by reproducing each layer from scratch:

import os
import random
import numpy as np
import tensorflow as tf
from tensorflow.keras import backend as K

def dropout(x, rate):
    # classic dropout: zero out elements with probability `rate`
    # and scale the survivors by 1 / keep_prob
    keep_prob = 1 - rate
    scale = 1 / keep_prob
    ret = tf.multiply(x, scale)
    random_tensor = tf.random.uniform(tf.shape(x))
    keep_mask = random_tensor >= rate
    ret = tf.multiply(ret, tf.cast(keep_mask, tf.float32))
    return ret

def gaussian_dropout(x, rate):
    # multiplicative 1-centered Gaussian noise on every element
    stddev = np.sqrt(rate / (1.0 - rate))
    ret = x * K.random_normal(tf.shape(x), mean=1.0, stddev=stddev)
    return ret

def gaussian_noise(x, stddev):
    # additive 0-centered Gaussian noise on every element
    ret = x + K.random_normal(tf.shape(x), mean=0.0, stddev=stddev)
    return ret

GaussianNoise simply adds 0-mean random normal values, while GaussianDropout multiplies by 1-mean random normal values. Both operations touch every element of the input. Classic Dropout instead sets some input elements to 0 and scales the remaining ones by 1 / (1 - rate).
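
To make the relationship concrete (a quick Monte-Carlo sketch of my own, using the functions defined above): all three transformations preserve the expected activation, which is exactly why classic dropout rescales the surviving units by 1 / keep_prob:

x = tf.ones((100000, 1)) * 3.0
print(tf.reduce_mean(dropout(x, .4)).numpy())           # ≈ 3.0
print(tf.reduce_mean(gaussian_dropout(x, .4)).numpy())  # ≈ 3.0
print(tf.reduce_mean(gaussian_noise(x, .3)).numpy())    # ≈ 3.0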

DROPOUT

data = np.arange(10).reshape(5, 2).astype(np.float32)

set_seed(0)
layer = tf.keras.layers.Dropout(.4)
out1 = layer(data, training=True)
set_seed(0)
out2 = dropout(data, .4)
print(tf.reduce_all(out1 == out2).numpy()) # TRUE

GAUSSIANDROPOUT

data = np.arange(10).reshape(5, 2).astype(np.float32)

set_seed(0)
layer = tf.keras.layers.GaussianDropout(.05)
out1 = layer(data, training=True)
set_seed(0)
out2 = gaussian_dropout(data, .05)
print(tf.reduce_all(out1 == out2).numpy()) # TRUE

GAUSSIANNOISE

data = np.arange(10).reshape(5, 2).astype(np.float32)

set_seed(0)
layer = tf.keras.layers.GaussianNoise(.3)
out1 = layer(data, training=True)
set_seed(0)
out2 = gaussian_noise(data, .3)
print(tf.reduce_all(out1 == out2).numpy()) # TRUE
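
One further observation (my own addition, not part of the original answer): GaussianDropout behaves like GaussianNoise whose standard deviation is scaled by the input itself, since x * N(1, s) = x + x * N(0, s):

s = np.sqrt(.05 / (1 - .05))  # stddev implied by rate = .05
set_seed(0)
out3 = data + data * K.random_normal(tf.shape(data), mean=0.0, stddev=s)
set_seed(0)
out4 = gaussian_dropout(data, .05)
print(np.allclose(out3.numpy(), out4.numpy()))  # True, up to float rounding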

To guarantee reproducibility we used (TF2):

def set_seed(seed):
    # seed every source of randomness used by TF2 / NumPy / Python
    tf.random.set_seed(seed)
    os.environ['PYTHONHASHSEED'] = str(seed)
    np.random.seed(seed)
    random.seed(seed)
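
Finally, a note worth adding (mine, not from the original answer): all three layers are identity functions when not training, so the noise only acts as a regularizer during fitting:

layer = tf.keras.layers.GaussianDropout(.05)
print(tf.reduce_all(layer(data, training=False) == data).numpy())  # True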
