Lambda layer to perform if then in keras/tensorflow


Question

I'm tearing my hair out with this one.

I asked a question over here If then inside custom non-trainable keras layer but I'm still having difficulties.

I tried his solution, but it didn't work, so I thought I'd post my complete code with his solution.

I have a custom Keras layer that I want to return specific output from specific inputs. I don't want it to be trainable.

The layer should do the following:

if input = [1,0] then output = 1
if input = [0,1] then output = 0

Here's the lambda layer code for doing this:

input_tensor = Input(shape=(n_hots,))


def custom_layer_1(tensor):
    if tensor == [1,0]:
        resp_1 = np.array([1,],dtype=np.int32)
        k_resp_1 = backend.variable(value=resp_1)
        return k_resp_1
    elif tensor == [0,1]:
        resp_0 = np.array([0,],dtype=np.int32)
        k_resp_0 = backend.variable(value=resp_0)
        return k_resp_0
    else:
        resp_e = np.array([-1,])
        k_resp_e = backend.variable(value=resp_e)
        return k_resp_e
    print(tensor.shape)

layer_one = keras.layers.Lambda(custom_layer_1,output_shape = (None,))(input_tensor)


_model = Model(inputs=input_tensor, outputs = layer_one)

When I fit my model it always computes -1 regardless of the inputs.

Here's what the model looks like:

Layer (type)                 Output Shape              Param #   
=================================================================
input_1 (InputLayer)         (None, 2)                 0         
_________________________________________________________________
lambda_1 (Lambda)            (None, None)              0         
=================================================================
Total params: 0
Trainable params: 0
Non-trainable params: 0

Here's the complete code for the model:

import numpy as np
from keras.models import Model
from keras import layers
from keras import Input
from keras import backend
import keras
from keras import models
import tensorflow as tf


# Generate the datasets:
n_obs = 1000

n_hots = 2

obs_mat = np.zeros((n_obs,n_hots),dtype=np.int32)

resp_mat = np.zeros((n_obs,1),dtype=np.int32)

# which position in the array should be "hot" ?
hot_locs = np.random.randint(n_hots, size=n_obs)

# set the bits:
for row,loc in zip(np.arange(n_obs),hot_locs):
    obs_mat[row,loc] = 1

for idx in np.arange(n_obs):
    if( (obs_mat[idx,:]==[1,0]).all() == True ):
        resp_mat[idx] = 1
    if( (obs_mat[idx,:]==[0,1]).all() == True ):
        resp_mat[idx] = 0

# test data:
test_suite = np.identity(n_hots)

# Build the network
input_tensor = Input(shape=(n_hots,))


def custom_layer_1(tensor):
    if tensor == [1,0]:
        resp_1 = np.array([1,],dtype=np.int32)
        k_resp_1 = backend.variable(value=resp_1)
        return k_resp_1
    elif tensor == [0,1]:
        resp_0 = np.array([0,],dtype=np.int32)
        k_resp_0 = backend.variable(value=resp_0)
        return k_resp_0
    else:
        resp_e = np.array([-1,])
        k_resp_e = backend.variable(value=resp_e)
        return k_resp_e
    print(tensor.shape)

layer_one = keras.layers.Lambda(custom_layer_1,output_shape = (None,))(input_tensor)


_model = Model(inputs=input_tensor, outputs = layer_one)

# compile
_model.compile(optimizer="adam",loss='mse')

# train (even though there's nothing to train)
history_mdl = _model.fit(obs_mat,resp_mat,verbose=True,batch_size = 100,epochs = 10)

# test
_model.predict(test_suite)
# outputs: array([-1., -1.], dtype=float32)

test = np.array([1,0])
test = test.reshape(1,2)
_model.predict(test,verbose=True)
# outputs: -1

This seems like fairly simple stuff, why isn't it working? Thanks

Answer

There are a few reasons:

  • You're comparing a 2D tensor (samples, hots) with a 1D tensor (hots).
  • You didn't consider the batch size in any of the results.
  • A plain Python if doesn't branch per sample, because tf is a tensor framework and the condition has to be expressed with tensor operations (see the sketch after this list).
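
As a minimal sketch of that last point (assuming the TF1-era standalone Keras imported in the question), the condition of the plain if is resolved once, at graph-construction time, and never looks at the data:

from keras import Input
from keras import backend as K

sym = Input(shape=(2,))                       # symbolic placeholder of shape (None, 2), no data yet
print(sym == [1, 0])                          # False: plain Python comparison of a Tensor object with a list
print(K.equal(sym, K.constant([[1., 0.]])))   # a symbolic elementwise comparison tensor; this is what
                                              # has to be resolved with K.switch, not with a Python if

Because sym == [1, 0] is simply False while the Lambda layer is being built, only the else branch of custom_layer_1 ever makes it into the graph, which is why the model returns -1 for every input.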

So, the suggestion is:

from keras import backend as K

def custom_layer(tensor):
    #comparison tensors with compatible shape 2D: (dummy_batch, hots)
    t10 = K.reshape(K.constant([1,0]), (1,2))
    t01 = K.reshape(K.constant([0,1]), (1,2))

    #comparison results - elementwise - shape (batch_size, 2)
    is_t10 = K.equal(tensor, t10)
    is_t01 = K.equal(tensor, t01)

    #comparison results - per sample - shape (batch_size,)
    is_t10 = K.all(is_t10, axis=-1)
    is_t01 = K.all(is_t01, axis=-1)

    #result options
    zeros = K.zeros_like(is_t10, dtype='float32') #shape (batch_size,)
    ones = K.ones_like(is_t10, dtype='float32')   #shape (batch_size,)
    negatives = -ones                             #shape (batch_size,)

    #selecting options
    result_01_or_else = K.switch(is_t01, zeros, negatives)
    result = K.switch(is_t10, ones, result_01_or_else)

    return result
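
A minimal sketch of how this layer might be wired into the model from the question, reusing input_tensor, n_hots and test_suite defined there (the exact output shape and dtype below are assumptions):

input_tensor = Input(shape=(n_hots,))
layer_one = keras.layers.Lambda(custom_layer)(input_tensor)  # let Keras infer the output shape

_model = Model(inputs=input_tensor, outputs=layer_one)

_model.predict(test_suite)
# test_suite is the 2x2 identity, so the row [1, 0] should map to 1 and the row [0, 1] to 0,
# i.e. something like array([1., 0.], dtype=float32)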

Warnings:

  • This layer is not differentiable (it returns constants), so you will not be able to train anything that comes before it, and if you try you will get an "An operation has `None` for gradient" error (see the sketch after this list).
  • The input tensor cannot be the output of other layers, because you're requiring it to be exact ones and zeros.
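
As a hypothetical illustration of the first warning (the Dense layer below is only for demonstration and is not part of the question), putting a trainable layer in front of the Lambda and trying to fit should trigger exactly that gradient error:

hidden = keras.layers.Dense(2, activation='sigmoid')(input_tensor)  # trainable layer before the Lambda
out = keras.layers.Lambda(custom_layer)(hidden)
bad_model = Model(inputs=input_tensor, outputs=out)
bad_model.compile(optimizer='adam', loss='mse')

# bad_model.fit(obs_mat, resp_mat)
# expected to raise ValueError: An operation has `None` for gradient,
# because the comparisons inside custom_layer carry no gradient back to the Dense weights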

