How to reshape BatchDataset class tensor?


Problem description

I'm unable to reshape a tensor loaded from my own custom dataset. As shown below, ds_train has a batch size of 8, and I want to reshape it to (len(ds_train), 128*128) so that I can feed the batches to my Keras autoencoder model. I'm new to TF and couldn't find a solution online, so I'm posting here.

import tensorflow as tf

# Load 128x128 grayscale images in batches of 8; label_mode=None means the
# dataset yields image tensors only (no labels).
ds_train = tf.keras.preprocessing.image_dataset_from_directory(
    directory=healthy_path,
    labels="inferred",
    label_mode=None,
    color_mode="grayscale",
    batch_size=8,
    image_size=(128, 128),
    shuffle=True,
    seed=123,
    validation_split=0.05,
    subset="training",
)

Similarly, my model is based on the TF2 functional API:

from tensorflow import keras
from tensorflow.keras import layers

inputs = keras.Input(shape=(128*128,))
norm = layers.experimental.preprocessing.Rescaling(1./255)(inputs)
encode = layers.Dense(14, activation='relu', name='encode')(norm)
coded = layers.Dense(3, activation='relu', name='coded')(encode)
decode = layers.Dense(14, activation='relu', name='decode')(coded)
decoded = layers.Dense(128*128, activation='sigmoid', name='decoded')(decode)

I tried reshaping it with:

ds_train = tf.reshape(ds_train, shape=[-1])
ds_validation = tf.reshape(ds_train, shape=[-1])
#AUTOTUNE = tf.data.experimental.AUTOTUNE
#ds_train = ds_train.cache().prefetch(buffer_size=AUTOTUNE)
#ds_validation = ds_validation.cache().prefetch(buffer_size=AUTOTUNE)

Error:

ValueError: Attempt to convert a value (<BatchDataset shapes: (None, 128, 128, 1), types: tf.float32>) with an unsupported type (<class 'tensorflow.python.data.ops.dataset_ops.BatchDataset'>) to a Tensor.

Full error traceback:

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-17-764a960c83e5> in <module>
----> 1 ds_train = tf.reshape(ds_train, shape=[-1])
      2 ds_validation = tf.reshape(ds_train, shape=[-1])
      3 #AUTOTUNE = tf.data.experimental.AUTOTUNE
      4 #ds_train = ds_train.cache().prefetch(buffer_size=AUTOTUNE)
      5 #ds_validation = ds_validation.cache().prefetch(buffer_size=AUTOTUNE)

C:\Anaconda3\lib\site-packages\tensorflow\python\util\dispatch.py in wrapper(*args, **kwargs)
    199     """Call target, and fall back on dispatchers if there is a TypeError."""
    200     try:
--> 201       return target(*args, **kwargs)
    202     except (TypeError, ValueError):
    203       # Note: convert_to_eager_tensor currently raises a ValueError, not a

C:\Anaconda3\lib\site-packages\tensorflow\python\ops\array_ops.py in reshape(tensor, shape, name)
    193     A `Tensor`. Has the same type as `tensor`.
    194   """
--> 195   result = gen_array_ops.reshape(tensor, shape, name)
    196   tensor_util.maybe_set_static_shape(result, shape)
    197   return result

C:\Anaconda3\lib\site-packages\tensorflow\python\ops\gen_array_ops.py in reshape(tensor, shape, name)
   8227     try:
   8228       return reshape_eager_fallback(
-> 8229           tensor, shape, name=name, ctx=_ctx)
   8230     except _core._SymbolicException:
   8231       pass  # Add nodes to the TensorFlow graph.

C:\Anaconda3\lib\site-packages\tensorflow\python\ops\gen_array_ops.py in reshape_eager_fallback(tensor, shape, name, ctx)
   8247 
   8248 def reshape_eager_fallback(tensor, shape, name, ctx):
-> 8249   _attr_T, (tensor,) = _execute.args_to_matching_eager([tensor], ctx)
   8250   _attr_Tshape, (shape,) = _execute.args_to_matching_eager([shape], ctx, _dtypes.int32)
   8251   _inputs_flat = [tensor, shape]

C:\Anaconda3\lib\site-packages\tensorflow\python\eager\execute.py in args_to_matching_eager(l, ctx, default_dtype)
    261       ret.append(
    262           ops.convert_to_tensor(
--> 263               t, dtype, preferred_dtype=default_dtype, ctx=ctx))
    264       if dtype is None:
    265         dtype = ret[-1].dtype

C:\Anaconda3\lib\site-packages\tensorflow\python\framework\ops.py in convert_to_tensor(value, dtype, name, as_ref, preferred_dtype, dtype_hint, ctx, accepted_result_types)
   1497 
   1498     if ret is None:
-> 1499       ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
   1500 
   1501     if ret is NotImplemented:

C:\Anaconda3\lib\site-packages\tensorflow\python\framework\constant_op.py in _constant_tensor_conversion_function(v, dtype, name, as_ref)
    336                                          as_ref=False):
    337   _ = as_ref
--> 338   return constant(v, dtype=dtype, name=name)
    339 
    340 

C:\Anaconda3\lib\site-packages\tensorflow\python\framework\constant_op.py in constant(value, dtype, shape, name)
    262   """
    263   return _constant_impl(value, dtype, shape, name, verify_shape=False,
--> 264                         allow_broadcast=True)
    265 
    266 

C:\Anaconda3\lib\site-packages\tensorflow\python\framework\constant_op.py in _constant_impl(value, dtype, shape, name, verify_shape, allow_broadcast)
    273       with trace.Trace("tf.constant"):
    274         return _constant_eager_impl(ctx, value, dtype, shape, verify_shape)
--> 275     return _constant_eager_impl(ctx, value, dtype, shape, verify_shape)
    276 
    277   g = ops.get_default_graph()

C:\Anaconda3\lib\site-packages\tensorflow\python\framework\constant_op.py in _constant_eager_impl(ctx, value, dtype, shape, verify_shape)
    298 def _constant_eager_impl(ctx, value, dtype, shape, verify_shape):
    299   """Implementation of eager constant."""
--> 300   t = convert_to_eager_tensor(value, ctx, dtype)
    301   if shape is None:
    302     return t

C:\Anaconda3\lib\site-packages\tensorflow\python\framework\constant_op.py in convert_to_eager_tensor(value, ctx, dtype)
     96       dtype = dtypes.as_dtype(dtype).as_datatype_enum
     97   ctx.ensure_initialized()
---> 98   return ops.EagerTensor(value, ctx.device_name, dtype)
     99 
    100 

Answer

Try changing the shape inside the neural net:

inputs = keras.Input(shape=(128, 128, 1))
flat = keras.layers.Flatten()(inputs)
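
Alternatively, if you prefer to keep the flat Input(shape=(128*128,)) from your question, you could reshape the batches inside the tf.data pipeline instead of inside the model. This is only a sketch I haven't run against your data, and ds_train_flat is just a placeholder name:

# Each batch from image_dataset_from_directory has shape (batch, 128, 128, 1);
# tf.reshape inside Dataset.map flattens it to (batch, 128*128).
ds_train_flat = ds_train.map(lambda x: tf.reshape(x, (-1, 128 * 128)))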

Here is the complete Flatten-based example, which works:

import numpy as np
import tensorflow as tf

x = np.random.rand(10, 128, 128, 1).astype(np.float32)

inputs = tf.keras.Input(shape=(128, 128, 1))
flat = tf.keras.layers.Flatten()(inputs)
encode = tf.keras.layers.Dense(14, activation='relu', name='encode')(flat)
coded = tf.keras.layers.Dense(3, activation='relu', name='coded')(encode)
decode = tf.keras.layers.Dense(14, activation='relu', name='decode')(coded)
decoded = tf.keras.layers.Dense(128*128, activation='sigmoid', name='decoded')(decode)

model = tf.keras.Model(inputs=inputs, outputs=decoded)

model.build(input_shape=x.shape)  # remove this, it's just for demonstrating
model(x)  # remove this, it's just for demonstrating

<tf.Tensor: shape=(10, 16384), dtype=float32, numpy=
array([[0.50187236, 0.4986383 , 0.50084716, ..., 0.4998364 , 0.50000435,
        0.4999416 ],
       [0.5020216 , 0.4985297 , 0.5009147 , ..., 0.4998234 , 0.5000047 ,
        0.49993694],
       [0.50179213, 0.49869663, 0.50081086, ..., 0.49984342, 0.5000042 ,
        0.4999441 ],
       ...,
       [0.5021732 , 0.49841946, 0.50098324, ..., 0.49981016, 0.50000507,
        0.49993217],
       [0.50205255, 0.49843505, 0.5009038 , ..., 0.49979147, 0.4999932 ,
        0.49991176],
       [0.50192004, 0.49860355, 0.50086874, ..., 0.49983227, 0.5000045 ,
        0.4999401 ]], dtype=float32)>

Note that I removed the Rescaling layer because it isn't available in my TensorFlow version; you can put it right back.
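
As a final note, here is a minimal sketch of how the dataset could then be fed to the model above. This is my own assumption rather than code from the question; ds_fit, the optimizer, loss, and epoch count are placeholders. Because label_mode=None yields images only, the map pairs each batch with its own flattened copy, rescaled to [0, 1] to match the sigmoid output:

# Autoencoder targets are the inputs themselves, flattened to 128*128 and
# scaled to [0, 1] so they match the sigmoid 'decoded' layer.
# If you put the Rescaling layer back right after Flatten, you can drop the
# "/ 255.0" below, since normalization then happens inside the model.
ds_fit = ds_train.map(lambda x: (x, tf.reshape(x, (-1, 128 * 128)) / 255.0))

model.compile(optimizer='adam', loss='mse')
model.fit(ds_fit, epochs=10)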
