Tensorflow (Keras API) `model.fit` method returns "Failed to convert object of type <class 'tuple'> to Tensor" error


Problem Description

I am using Gaussian noise per the tf.random.normal method (or K.random_normal via the Keras API).

It is used from within a custom tensorflow Layer, which, in turn, is used by a custom Model.

For some reason, everything works as intended when calling the layer / model directly, or when using a custom training loop via tf.GradientTape(), but it throws a puzzling error when attempting to use the fit method instead.

It appears to have something to do with inferring the batch dimension, which presents as None when calling the fit method.

I suspect this has something to do with compilation and symbolic tensors vs. eager tensors, but I'm none the wiser as to how this would actually be resolved.
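
For illustration, the distinction can be seen by comparing the static shape from K.int_shape with the dynamic shape from tf.shape on a symbolic Keras input (a toy snippet, not part of the reproduction below; the name sym is made up):

import tensorflow as tf
import tensorflow.keras.backend as K

sym = tf.keras.Input(shape=(4,))   # symbolic input: batch size is not yet known
print(K.int_shape(sym))            # (None, 4) -- static shape, batch dimension is None
print(tf.shape(sym))               # a tensor whose value is only resolved at run time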

I've tried to strip the problem down to a minimal example that reproduces the issue:

import numpy as np
import tensorflow.keras.backend as K
from tensorflow.keras import models
import tensorflow as tf

class Demo(models.Model):

    def __init__(self):
        super().__init__()

    def call(self, inputs, training=None, mask=None):
        # batch gives "2" when called directly or via GradientTape()
        # gives "None" when called via fit
        batch = K.int_shape(inputs)[0]
        dim = K.int_shape(inputs)[1]
        noise = tf.random.normal(shape=(batch, dim), mean=0.0, stddev=1.0)
        # manually specifying the batch dimension does work, e.g.
        # noise = tf.random.normal(shape=(2, dim), mean=0.0, stddev=1.0)
        return inputs * noise

test_data = np.array([[1., 2., 3., 4.], [5., 6., 7., 8.]])

tester = Demo()
tester.compile(optimizer='adam')

# manual calling works
print(test_data - tester(test_data))

# but calling fit does not
tester.fit(x=test_data)
# raises: TypeError: Failed to convert object of type <class 'tuple'> to Tensor.
# Contents: (None, 4). Consider casting elements to a supported type.

Any suggestions for what the problem might be?

Solution

In the call method, instead of using keras.backend to get batch and dim, use tensorflow directly. K.int_shape returns the static shape, whose batch dimension is None while fit traces the model, whereas tf.shape returns the dynamic shape as a tensor that is resolved at run time:

batch = tf.shape(inputs)[0]
dim = tf.shape(inputs)[1]
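
For completeness, here is a minimal sketch of the call method with this change applied (a rewrite of the Demo model's call from the question, offered as an illustration rather than as the answerer's exact code):

    def call(self, inputs, training=None, mask=None):
        # tf.shape is evaluated at run time, so batch and dim are concrete
        # scalar tensors even though the static shape traced by fit is (None, 4)
        batch = tf.shape(inputs)[0]
        dim = tf.shape(inputs)[1]
        noise = tf.random.normal(shape=(batch, dim), mean=0.0, stddev=1.0)
        return inputs * noise

Since the noise has the same shape as the inputs, passing shape=tf.shape(inputs) directly to tf.random.normal would also work.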
