Restricting the output values of layers in Keras


Problem description


I have defined my MLP in the code below. I want to extract the values of layer_2.

from keras.layers import Input, Dense, Dot
from keras.models import Model
from keras import optimizers

def gater(self):
    dim_inputs_data = Input(shape=(self.train_dim[1],))
    dim_svm_yhat = Input(shape=(3,))
    layer_1 = Dense(20,
                    activation='sigmoid')(dim_inputs_data)
    layer_2 = Dense(3, name='layer_op_2',
                    activation='sigmoid', use_bias=False)(layer_1)
    layer_3 = Dot(1)([layer_2, dim_svm_yhat])
    out_layer = Dense(1, activation='tanh')(layer_3)
    model = Model(inputs=[dim_inputs_data, dim_svm_yhat], outputs=out_layer)
    adam = optimizers.Adam(lr=0.01)
    model.compile(loss='mse', optimizer=adam, metrics=['accuracy'])
    return model

Suppose the output of layer_2, in matrix form, is:

0.1 0.7 0.8
0.1 0.8 0.2
0.1 0.5 0.5
....

I would like the following to be fed into layer_3 instead:

0 0 1
0 1 0
0 1 0

Basically, I want the maximum value in each row to be converted to 1 and the others to 0. How can this be achieved in Keras?

Solution

Who decides the range of output values?

The output range of any layer in a neural network is decided by the activation function used for that layer. For example, if you use tanh as your activation function, your output values will be restricted to [-1, 1]. The values are continuous: tanh maps inputs from [-inf, +inf] (on the x-axis) to outputs in [-1, +1] (on the y-axis), and understanding this mapping is very important.
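As a quick illustration of this squashing behaviour (a minimal NumPy sketch, not part of the original answer):

import numpy as np

# tanh maps any real input into (-1, 1) and saturates quickly
for x in [-100.0, -2.0, 0.0, 2.0, 100.0]:
    print(x, '->', np.tanh(x))
# prints approximately: -1.0, -0.964, 0.0, 0.964, 1.0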

What you should do is write a custom activation function that turns the layer into a step function, i.e., one that outputs either 1 or 0 for any input in [-inf, +inf], and apply it to that layer.

How do I know which function to use?

You need to come up with a function y = some_function that satisfies all your needs (the input-to-output mapping) and convert it to Python code, like this:

from keras import backend as K

def binaryActivationFromTanh(x, threshold=0.0):
    # squash [-inf, +inf] into [-1, 1]
    # (you can skip this step if your threshold is meant for the raw,
    #  pre-activation scale)
    activated_x = K.tanh(x)

    # boolean tensor: True where the activation exceeds the threshold
    # (threshold defaults to 0.0 here so the function can be passed
    #  directly as `activation=`; tune it to your needs)
    binary_activated_x = K.greater(activated_x, threshold)

    # cast the boolean tensor back to the Keras default float type
    return K.cast(binary_activated_x, K.floatx())

After writing your custom activation function, you can use it like this:

x = Input(shape=(1000,))
y = Dense(10, activation=binaryActivationFromTanh)(x)

Now test the values and see if you get what you expected. You can then drop this piece into a bigger neural network.
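Keras calls the activation with a single tensor argument, so if you want a threshold other than the default you can bind one with a lambda (a sketch; the 0.5 here is an arbitrary example value):

y = Dense(10, activation=lambda t: binaryActivationFromTanh(t, threshold=0.5))(x)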

I strongly discourage adding new layers just to restrict your outputs, unless the layer exists solely for activation (like keras.layers.LeakyReLU).
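As a final note, the question actually asks for the row-wise maximum to become 1 and everything else 0, which is an argmax-style one-hot rather than a fixed threshold. A sketch in the same custom-activation spirit, using the backend ops K.argmax and K.one_hot (like the step function above, this is non-differentiable, so gradients will not flow through it during training):

def oneHotMax(x):
    # 1 at the position of each row's maximum, 0 elsewhere
    # assumes x has shape (batch, 3), matching layer_2 above
    return K.one_hot(K.argmax(x, axis=-1), 3)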
