Wrapping Tensorflow For Use in Keras


Problem description


I'm using Keras for the rest of my project, but also hoping to make use of the Bahdanau attention module that Tensorflow has implemented (see tf.contrib.seq2seq.BahdanauAttention). I've been attempting to implement this via the Keras Layer convention, but not sure whether this is an appropriate fit.


Is there some convention for wrapping Tensorflow components in this way to be compatible with the computation graph?


I've included the code that I've written thus far (not working yet) and would appreciate any pointers.

from keras import backend as K
from keras.engine.topology import Layer
from keras.models import Model
import numpy as np
import tensorflow as tf

class BahdanauAttention(Layer):

    # The Bahdanau attention layer has to attend to a particular set of memory states.
    # These are usually the output of some encoder process, where we take the output of
    # GRU states.
    def __init__(self, memory, num_units, **kwargs):
        self.memory = memory
        self.num_units = num_units
        super(BahdanauAttention, self).__init__(**kwargs)

    def build(self, input_shape):
        # The attention component will be in control of attending to the given memory
        attention = tf.contrib.seq2seq.BahdanauAttention(self.num_units, self.memory)
        cell = tf.contrib.rnn.GRUCell(num_units)

        cell_with_attention = tf.contrib.seq2seq.DynamicAttentionWrapper(cell, attention, num_units)
        self.outputs, _ = tf.nn.dynamic_rnn(cell_with_attention, inputs, dtype=tf.float32)

        super(MyLayer, self).build(input_shape)

    def call(self, x):
        return

    def compute_output_shape(self, input_shape):
        return (input_shape[0], self.memory[1], self.num_units)

Recommended answer


Newer versions of Keras provide tf.keras.layers.AdditiveAttention(), which implements Bahdanau-style (additive) attention and should work out of the box.
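For illustration, here is a minimal sketch of how AdditiveAttention can be wired between an encoder and a decoder in the functional API. The input feature size, GRU width, and final Dense projection are arbitrary placeholders chosen for this example, not values from the question:

import tensorflow as tf
from tensorflow.keras import layers

# Hypothetical encoder/decoder inputs: (batch, timesteps, features)
encoder_inputs = layers.Input(shape=(None, 64))
decoder_inputs = layers.Input(shape=(None, 64))

# Encoder and decoder GRUs that return their full sequence of hidden states
encoder_seq = layers.GRU(128, return_sequences=True)(encoder_inputs)
decoder_seq = layers.GRU(128, return_sequences=True)(decoder_inputs)

# AdditiveAttention takes [query, value]: query = decoder states,
# value (and key) = encoder states, i.e. the "memory" being attended to.
context = layers.AdditiveAttention()([decoder_seq, encoder_seq])

# Combine the context vectors with the decoder states before the output projection
decoder_concat = layers.Concatenate(axis=-1)([context, decoder_seq])
outputs = layers.Dense(10, activation="softmax")(decoder_concat)

model = tf.keras.Model([encoder_inputs, decoder_inputs], outputs)
model.summary()

Because the attention is just another layer in the graph, nothing has to be done in build() or call() by hand; Keras handles the wiring.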


Alternatively, a custom Bahdanau layer can be written in half a dozen lines of code, as shown in Custom Attention Layer using in Keras.
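For reference, a minimal sketch along those lines might look like the following. It scores each timestep of a single sequence of encoder states with an additive (tanh) scoring function and returns one attention-weighted context vector; the class name, weight names, and shapes are illustrative assumptions, not the linked author's exact code:

import tensorflow as tf
from tensorflow.keras import backend as K
from tensorflow.keras.layers import Layer

class SimpleBahdanauAttention(Layer):
    # Pools a sequence of encoder states (batch, timesteps, features)
    # into a single context vector using additive attention scores.

    def build(self, input_shape):
        # One scalar score per timestep: e_t = tanh(h_t . W + b)
        self.W = self.add_weight(name="att_weight",
                                 shape=(int(input_shape[-1]), 1),
                                 initializer="glorot_uniform",
                                 trainable=True)
        self.b = self.add_weight(name="att_bias",
                                 shape=(1,),
                                 initializer="zeros",
                                 trainable=True)
        super(SimpleBahdanauAttention, self).build(input_shape)

    def call(self, x):
        e = K.tanh(K.dot(x, self.W) + self.b)   # (batch, timesteps, 1)
        a = K.softmax(e, axis=1)                # attention weights over time
        return K.sum(x * a, axis=1)             # (batch, features) context vector

    def compute_output_shape(self, input_shape):
        return (input_shape[0], input_shape[-1])

It can be dropped on top of any recurrent layer that returns sequences, e.g. SimpleBahdanauAttention()(layers.GRU(128, return_sequences=True)(inputs)).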
