How to use tf.Lambda and tf.Variable in TensorFlow 2.0

Problem description

I'm very new to TensorFlow 2.0.

I wrote code for a CycleGAN as follows (I have extracted only the part that builds the generator network):

import numpy as np
import tensorflow as tf
from tensorflow.keras import layers
from tensorflow.keras.layers import Input

def instance_norm(x, epsilon=1e-5):

    scale = tf.Variable(initial_value=np.random.normal(1., 0.02, x.shape[-1:]),
                        trainable=True,
                        name='SCALE',
                        dtype=tf.float32)
    offset = tf.Variable(initial_value=np.zeros(x.shape[-1:]),
                         trainable=True,
                         name='OFFSET',
                         dtype=tf.float32)
    mean, variance = tf.nn.moments(x, axes=[1, 2], keepdims=True)
    inv = tf.math.rsqrt(variance + epsilon)
    normalized = (x - mean) * inv
    return scale * normalized + offset

def build_generator(options, name='Generator'):

    initializer = tf.random_normal_initializer(0., 0.02)

    inputs = Input(shape=(options.time_step,
                          options.pitch_range,
                          options.output_nc))

    x = inputs
    # (batch * 64 * 84 * 1)

    # 'padding' is a custom padding function defined elsewhere (not shown here)
    x = layers.Lambda(padding,
                      name='PADDING_1')(x)
    # (batch * 70 * 90 * 1)

    x = layers.Conv2D(filters=options.gf_dim,
                      kernel_size=7,
                      strides=1,
                      padding='valid',
                      kernel_initializer=initializer,
                      use_bias=False,
                      name='CONV2D_1')(x)
    x = layers.Lambda(instance_norm,
                      name='IN_1')(x)
    x = layers.ReLU()(x)

But when I run this code, I get the following error:

Traceback (most recent call last):
  File "tf2_main.py", line 50, in <module>
    model = CycleGAN(args)
  File "/Users/mhiro/PycharmProjects/music_gan/CycleGAN-Music-Style-Transfer-Refactorization-master/tf2_model.py", line 55, in __init__
    self._build_model(args)
  File "/Users/mhiro/PycharmProjects/music_gan/CycleGAN-Music-Style-Transfer-Refactorization-master/tf2_model.py", line 63, in _build_model
    name='Generator_A2B')
  File "/Users/mhiro/PycharmProjects/music_gan/CycleGAN-Music-Style-Transfer-Refactorization-master/tf2_module.py", line 154, in build_generator
    name='IN_1')(x)
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/base_layer.py", line 773, in __call__
    outputs = call_fn(cast_inputs, *args, **kwargs)
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/tensorflow_core/python/keras/layers/core.py", line 847, in call
    self._check_variables(created_variables, tape.watched_variables())
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/tensorflow_core/python/keras/layers/core.py", line 873, in _check_variables
    raise ValueError(error_str)
ValueError: 
The following Variables were created within a Lambda layer (IN_1)
but are not tracked by said layer:
  <tf.Variable 'IN_1/SCALE:0' shape=(64,) dtype=float32>
  <tf.Variable 'IN_1/OFFSET:0' shape=(64,) dtype=float32>
The layer cannot safely ensure proper Variable reuse across multiple
calls, and consquently this behavior is disallowed for safety. Lambda
layers are not well suited to stateful computation; instead, writing a
subclassed Layer is the recommend way to define layers with
Variables.

It seems that I should rewrite the tf.Lambda and tf.Variable parts.

Could anyone show me how to rewrite this code?

Answer

Lambda layers are stateless, that is, you cannot define variables within them. Instead, you should write a custom layer. Something along the lines of:

import tensorflow as tf
from tensorflow.keras import layers

class InstanceNorm(layers.Layer):
    def __init__(self):
        super(InstanceNorm, self).__init__()

    def build(self, input_shape):
        self.scale = self.add_weight(shape=your_shape_1,
                                 initializer=your_initializer_1,
                                 trainable=True)
        self.offset = self.add_weight(shape=your_shape_2,
                                 initializer=your_initializer_2,
                                 trainable=True)

    def call(self, x, epsilon=1e-5):
        mean, variance = tf.nn.moments(x, axes=[1, 2], keepdims=True)
        inv = tf.math.rsqrt(variance + epsilon)
        normalized = (x - mean) * inv
        return self.scale * normalized + self.offset

This layer can now be called as follows:

...
x = layers.Conv2D(filters=options.gf_dim,
                  kernel_size=7,
                  strides=1,
                  padding='valid',
                  kernel_initializer=initializer,
                  use_bias=False,
                  name='CONV2D_1')(x)
x = InstanceNorm()(x)
x = layers.ReLU()(x)
...

Note: not tested.
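
For reference, here is a minimal, self-contained sketch that fills in the placeholder shapes and initializers using the values from the original instance_norm (per-channel scale drawn from N(1.0, 0.02), offset initialized to zeros, last axis assumed to be the channel dimension); the layer name IN_1 is simply carried over from the question:

import tensorflow as tf
from tensorflow.keras import layers

class InstanceNorm(layers.Layer):
    def __init__(self, epsilon=1e-5, **kwargs):
        super(InstanceNorm, self).__init__(**kwargs)
        self.epsilon = epsilon

    def build(self, input_shape):
        # One scale/offset value per channel (last axis), matching the original
        # instance_norm: scale ~ N(1.0, 0.02), offset initialized to zeros.
        self.scale = self.add_weight(name='SCALE',
                                     shape=input_shape[-1:],
                                     initializer=tf.random_normal_initializer(1., 0.02),
                                     trainable=True)
        self.offset = self.add_weight(name='OFFSET',
                                      shape=input_shape[-1:],
                                      initializer='zeros',
                                      trainable=True)

    def call(self, x):
        # Normalize over the spatial axes only, per sample and per channel.
        mean, variance = tf.nn.moments(x, axes=[1, 2], keepdims=True)
        inv = tf.math.rsqrt(variance + self.epsilon)
        return self.scale * (x - mean) * inv + self.offset

# Usage inside build_generator:
# x = InstanceNorm(name='IN_1')(x)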
