When is a random number generated in a Keras Lambda layer?


Question


I would like to apply simple data augmentation (multiplication of the input vector by a random scalar) to a fully connected neural network implemented in Keras. Keras has nice functionality for image augmentation, but trying to use this seemed awkward and slow for my input (1-tensors), whose training data set fits in my computer's memory.

Instead, I imagined that I could achieve this using a Lambda layer, e.g. something like this:

import random

from keras.layers import Input, Lambda, Dense
from keras.models import Model

x = Input(shape=(10,))
y = x
y = Lambda(lambda z: random.uniform(0.5, 1.0)*z)(y)
y = Dense(units=5, activation='relu')(y)
y = Dense(units=1, activation='sigmoid')(y)
model = Model(x, y)

My question concerns when this random number will be generated. Will this fix a single random number for:

  • the entire training process?
  • each batch?
  • each training data point?

Solution

Using this will create a constant that never changes, because random.uniform is not a Keras function. You defined this operation in the graph as constant * tensor, and the factor will stay constant.

You need a random function "from keras" or "from tensorflow". For instance, you can use K.random_uniform((1,), 0.5, 1.).

This value changes per batch. You can test it by training the code below for many epochs and watching the loss change.

from keras.layers import *
from keras.models import Model
from keras import backend as K

import numpy as np


ins = Input((1,))
outs = Lambda(lambda x: K.random_uniform((1,))*x)(ins)
model = Model(ins, outs)

# three predictions on the same input give three different results,
# because the random factor is re-sampled on every call
print(model.predict(np.ones((1,1))))
print(model.predict(np.ones((1,1))))
print(model.predict(np.ones((1,1))))

model.compile('adam', 'mae')
model.fit(np.ones((100000,1)), np.ones((100000,1)))

If you want it to change for each training sample, then use a fixed batch size and generate a tensor with one random number per sample: K.random_uniform((batch_size,), .5, 1.). A sketch of this variant is shown below.
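
A minimal sketch of this per-sample variant (not from the original answer): the batch_size of 32 is an assumed fixed value, and the random tensor is given shape (batch_size, 1) so that one scalar per sample broadcasts across the 10 input features.

from keras.layers import Input, Lambda, Dense
from keras.models import Model
from keras import backend as K

batch_size = 32  # assumed fixed batch size; must match the batch size used in fit()

x = Input(batch_shape=(batch_size, 10))
# one random scalar per sample, broadcast over the feature dimension
y = Lambda(lambda z: K.random_uniform((batch_size, 1), 0.5, 1.0) * z)(x)
y = Dense(units=5, activation='relu')(y)
y = Dense(units=1, activation='sigmoid')(y)
model = Model(x, y)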


You should probably get better performance, though, if you do the augmentation in your own generator and use model.fit_generator():

import random

import numpy as np
import keras


# Sequence that multiplies each batch of inputs by a freshly drawn random scalar
class MyGenerator(keras.utils.Sequence):
    def __init__(self, inputs, outputs, batchSize, minRand, maxRand):
        self.inputs = inputs
        self.outputs = outputs
        self.batchSize = batchSize
        self.minRand = minRand
        self.maxRand = maxRand

    #if you want shuffling
    def on_epoch_end(self):
        indices = np.array(range(len(self.inputs)))
        np.random.shuffle(indices)
        self.inputs = self.inputs[indices]
        self.outputs = self.outputs[indices] 

    def __len__(self):
        leng,rem = divmod(len(self.inputs), self.batchSize)
        return (leng + (1 if rem > 0 else 0))

    def __getitem__(self,i):
        start = i*self.batchSize
        end = start + self.batchSize

        x = self.inputs[start:end] * random.uniform(self.minRand,self.maxRand)
        y = self.outputs[start:end]

        return x,y
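
A usage sketch (hypothetical, not from the original answer), assuming model is the fully connected model defined earlier and train_x / train_y are placeholder NumPy arrays:

import numpy as np

# placeholder training data: 1000 samples with 10 features each
train_x = np.random.rand(1000, 10)
train_y = np.random.rand(1000, 1)

# each batch is multiplied by a scalar drawn uniformly from [0.5, 1.0)
gen = MyGenerator(train_x, train_y, batchSize=32, minRand=0.5, maxRand=1.0)
model.fit_generator(gen, epochs=10)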
