What is the difference between Dataset.from_tensors and Dataset.from_tensor_slices?
Question
I have a dataset represented as a NumPy matrix of shape (num_features, num_examples), and I wish to convert it to a TensorFlow tf.data.Dataset.
I am struggling to understand the difference between these two methods: Dataset.from_tensors and Dataset.from_tensor_slices. Which one is right, and why?
The TensorFlow documentation (link) says that both methods accept a nested structure of tensors, although when using from_tensor_slices the tensors should have the same size in the 0-th dimension.
from_tensors combines the input and returns a dataset with a single element:
>>> t = tf.constant([[1, 2], [3, 4]])
>>> ds = tf.data.Dataset.from_tensors(t)
>>> [x for x in ds]
[<tf.Tensor: shape=(2, 2), dtype=int32, numpy=
array([[1, 2],
       [3, 4]], dtype=int32)>]
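The difference shows up directly in the element count: the same 2×2 tensor yields one element from from_tensors but two from from_tensor_slices. A minimal sketch, assuming TensorFlow 2.x with eager execution:

```python
import tensorflow as tf

t = tf.constant([[1, 2], [3, 4]])

# from_tensors wraps the whole tensor as ONE element of shape (2, 2)...
ds_whole = tf.data.Dataset.from_tensors(t)

# ...while from_tensor_slices slices along axis 0, yielding one
# element of shape (2,) per row.
ds_rows = tf.data.Dataset.from_tensor_slices(t)

print(len(list(ds_whole)), len(list(ds_rows)))  # 1 2
```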
from_tensor_slices creates a dataset with a separate element for each row of the input tensor:
>>> t = tf.constant([[1, 2], [3, 4]])
>>> ds = tf.data.Dataset.from_tensor_slices(t)
>>> [x for x in ds]
[<tf.Tensor: shape=(2,), dtype=int32, numpy=array([1, 2], dtype=int32)>,
<tf.Tensor: shape=(2,), dtype=int32, numpy=array([3, 4], dtype=int32)>]