Keras Custom Layer 2D input -> 2D output


Question

I have a 2D input (or 3D if one considers the number of samples) and I want to apply a Keras layer that would take this input and output another 2D matrix. So, for example, if I have an input of size (E x V), the learned weight matrix would be (S x E) and the output (S x V). Can I do this with a Dense layer?

EDIT (Nassim request):

The first layer is doing nothing. It's just to give an input to Lambda layer:

import numpy as np
from keras.models import Sequential, Model
from keras.layers import Reshape, Lambda
from keras import backend as K

input_sample = np.array([
    [[1, 2, 3, 4, 5], [6, 7, 8, 9, 10], [11, 12, 13, 14, 15], [16, 17, 18, 19, 20]],
    [[21, 22, 23, 24, 25], [26, 27, 28, 29, 30], [31, 32, 33, 34, 35], [36, 37, 38, 39, 40]],
    [[41, 42, 43, 44, 45], [46, 47, 48, 49, 50], [51, 52, 53, 54, 55], [56, 57, 58, 59, 60]],
])

model = Sequential()
model.add(Reshape((4, 5), input_shape=(4, 5)))
model.add(Lambda(lambda x: K.transpose(x)))

intermediate_layer_model = Model(inputs=model.input, outputs=model.layers[0].output)
print("First layer:")
print(intermediate_layer_model.predict(input_sample))
print("")
print("Second layer:")
intermediate_layer_model = Model(inputs=model.input, outputs=model.layers[1].output)
print(intermediate_layer_model.predict(input_sample))

Answer

It depends on what you want to do. Is it 2D because it's a sequence? Then LSTMs are made for that and will return a sequence of the desired size if you set return_sequences=True.

CNNs can also work on 2D inputs and will output something of variable size depending on the number of kernels you use.

Otherwise, you can reshape it to an (E x V,) 1D tensor, use a Dense layer with S x V units, and reshape the output to an (S, V) 2D tensor...
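The flatten -> Dense -> reshape idea above can be sketched in plain NumPy, just to make the shapes concrete (the sizes E, V, S here are hypothetical, and a real Keras Dense layer would of course learn W rather than leave it at zero):

```python
import numpy as np

E, V, S = 4, 5, 3                      # hypothetical sizes
x = np.arange(E * V, dtype=float).reshape(E, V)   # 2D input of shape (E, V)

flat = x.reshape(-1)                   # (E*V,) 1D tensor
W = np.zeros((E * V, S * V))           # Dense weights mapping E*V -> S*V
out = (flat @ W).reshape(S, V)         # reshape the Dense output to (S, V)
print(out.shape)                       # -> (3, 5)
```

Note that this variant uses E*V*S*V weights, many more than the (S x E) matrix asked about, which is one reason the TimeDistributed approach below the fold is preferable.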

I can't help you more than that; we'd need to know your use case :-) There are too many possibilities with neural networks.

You can use TimeDistributed(Dense(S)). If your input has shape (E, V), reshape it to (V, E) so that V becomes the "time" dimension. Then apply TimeDistributed(Dense(S)), which is a Dense layer with weights of shape (E x S); the output will have shape (V, S), which you can reshape back to (S, V).

Does that do what you want? The TimeDistributed() layer applies the same Dense(S) layer, with shared weights, to each of the V rows of your input.
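What TimeDistributed(Dense(S)) computes can be sketched in plain NumPy: transpose (E, V) to (V, E), multiply each of the V rows by one shared (E, S) weight matrix, then transpose back. The sizes below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
E, V, S = 4, 5, 3
x = rng.random((E, V))         # 2D input of shape (E, V)

W = rng.random((E, S))         # one shared weight matrix, as in Dense(S)
y = (x.T @ W).T                # (V, E) @ (E, S) -> (V, S), transposed to (S, V)
print(y.shape)                 # -> (3, 5)

# Shared weights: output column j depends only on input column j.
np.testing.assert_allclose(y[:, 0], W.T @ x[:, 0])
```

This matches the answer's count of E x S weights, far fewer than the flatten-and-Dense variant, because the same matrix is reused across all V positions.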

After looking at the Keras backend code, it turns out that to use TensorFlow's transpose with the 'permutation pattern' option, you need K.permute_dimensions(x, pattern). The batch dimension must be included. In your case:

Lambda(lambda x: K.permute_dimensions(x,[0,2,1]))

K.transpose(x) uses the same function internally (for the TF backend), but the permutation is set to its default value, [n, n-1, ..., 0].
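The difference between the pattern [0, 2, 1] and the full-reversal default shows up clearly in NumPy, whose transpose takes the same kind of axis pattern:

```python
import numpy as np

batch = np.arange(2 * 4 * 5).reshape(2, 4, 5)   # (samples, E, V)

# Pattern [0, 2, 1]: keep the batch axis, swap the last two axes,
# which is what K.permute_dimensions(x, [0, 2, 1]) does per sample.
per_sample = np.transpose(batch, (0, 2, 1))     # shape (2, 5, 4)

# Default pattern [n, n-1, ..., 0]: reverses ALL axes, including
# the batch axis, which is what K.transpose(x) does.
full_reverse = np.transpose(batch)              # shape (5, 4, 2)

print(per_sample.shape, full_reverse.shape)
```

This is why K.transpose scrambled the batch dimension in the original Lambda layer, while the explicit [0, 2, 1] pattern transposes each sample independently.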

