How exactly does tf.data.Dataset.interleave() differ from map() and flat_map()?


Question

My current understanding is:

Different map_func: Both interleave and flat_map expect "A function mapping a dataset element to a dataset". In contrast, map expects "A function mapping a dataset element to another dataset element".

Arguments: Both interleave and map offer the argument num_parallel_calls, whereas flat_map does not. Moreover, interleave offers these magical arguments block_length and cycle_length. For cycle_length=1, the documentation states that the outputs of interleave and flat_map are equal.

Last, I have seen data loading pipelines without interleave as well as ones with interleave. Any advice on when to use interleave vs. map or flat_map would be greatly appreciated.


//EDIT: I do see the value of interleave, if we start out with different datasets, such as in the code below

  files = tf.data.Dataset.list_files("/path/to/dataset/train-*.tfrecord")
  dataset = files.interleave(tf.data.TFRecordDataset)

However, is there any benefit of using interleave over map in a scenario such as the one below?

files = tf.data.Dataset.list_files("/path/to/dataset/train-*.png")
dataset = files.map(load_img, num_parallel_calls=tf.data.AUTOTUNE)

Solution

Edit:

Can map not also be used to parallelize I/O?

Indeed, you can read images and labels from a directory with the map function. Assume this case:

list_ds = tf.data.Dataset.list_files(my_path)

def process_path(path):
    # Get the label here, e.g. from the parent directory name (an assumption for illustration);
    # the image bytes still need to be decoded later.
    label = tf.strings.split(path, "/")[-2]
    return tf.io.read_file(path), label

new_ds = list_ds.map(process_path, num_parallel_calls=tf.data.experimental.AUTOTUNE)

Note that this is now multi-threaded, because num_parallel_calls has been set.

The advantage of the interleave() function:

  • Suppose you have a dataset of input elements (for example, file names).
  • With cycle_length you control how many of those input elements are processed concurrently, e.g. with cycle_length=5, five elements are taken from the dataset and map_func is applied to each of them.
  • After that, block_length pieces of data are fetched from each of the newly generated dataset objects in turn (see the sketch below).
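
A minimal sketch (a toy example, not from the original question) of how cycle_length and block_length shape the output order:

import tensorflow as tf

ds = tf.data.Dataset.range(1, 4)  # input elements: 1, 2, 3
ds = ds.interleave(
    lambda x: tf.data.Dataset.from_tensors(x).repeat(4),  # each element becomes a small dataset
    cycle_length=2,   # work on 2 input elements at a time
    block_length=2)   # take 2 items from each before moving on

print(list(ds.as_numpy_iterator()))
# [1, 1, 2, 2, 1, 1, 2, 2, 3, 3, 3, 3]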

In other words, the interleave() function can iterate through your dataset while applying a map_func(), and it can work with many datasets or data files at the same time. For example, from the docs:

  dataset = dataset.interleave(lambda x:
    tf.data.TextLineDataset(x).map(parse_fn, num_parallel_calls=1),
    cycle_length=4, block_length=16)

However, is there any benefit of using interleave over map in a scenario such as the one below?

interleave() and map() may look similar, but their use-cases are not the same. If you want to read a dataset while applying some mapping, interleave() is your super-hero. Your images may need to be decoded while being read; reading everything first and decoding afterwards can be inefficient when working with large datasets. In the code snippet you gave, AFAIK, the one with tf.data.TFRecordDataset should be faster.

TL;DR: interleave() parallelizes the data loading step by interleaving the I/O operations that read the files.

map() will apply the data pre-processing to the contents of the datasets.

So you can do something like:

ds = train_file.interleave(lambda x: tf.data.Dataset.list_files(directory_here).map(
    func, num_parallel_calls=tf.data.experimental.AUTOTUNE))

tf.data.experimental.AUTOTUNE lets tf.data decide the level of parallelism based on buffer sizes, available CPU, and I/O. In other words, AUTOTUNE handles the level dynamically at runtime.

The num_parallel_calls argument spawns multiple threads that use multiple cores to parallelize the work. Since interleave also accepts num_parallel_calls, you can load multiple datasets in parallel and reduce the time spent waiting for files to be opened. The image described next is taken from the docs.

In that image there are 4 overlapping datasets; how many overlap is determined by the argument cycle_length, so in this case cycle_length = 4.
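
A minimal sketch of that pattern, assuming sharded TFRecord files (the glob path and cycle_length value are placeholders):

files = tf.data.Dataset.list_files("/path/to/dataset/train-*.tfrecord")
dataset = files.interleave(
    tf.data.TFRecordDataset,  # each file name becomes its own TFRecordDataset
    cycle_length=4,           # read 4 files concurrently
    num_parallel_calls=tf.data.experimental.AUTOTUNE)  # parallelize the file reads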


FLAT_MAP: Maps a function across the dataset and flattens the result. Use it if you want to make sure the order stays the same. It does not take num_parallel_calls as an argument. Please refer to the docs for more.
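
For example, a minimal sketch (a toy example) of flattening nested data while preserving order:

dataset = tf.data.Dataset.from_tensor_slices([[1, 2, 3], [4, 5, 6]])
dataset = dataset.flat_map(lambda x: tf.data.Dataset.from_tensor_slices(x))
print(list(dataset.as_numpy_iterator()))  # [1, 2, 3, 4, 5, 6] -- order is preserved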

MAP: The map function executes the selected function on every element of the Dataset separately. Obviously, data transformations on large datasets can be expensive as you apply more and more operations. The key point is that it becomes even more time-consuming if the CPU is not fully utilized. But we can use the parallelism APIs:

import multiprocessing

num_of_cores = multiprocessing.cpu_count()  # number of available CPU cores
mapped_data = data.map(function, num_parallel_calls=num_of_cores)

For cycle_length=1, the documentation states that the outputs of interleave and flat_map are equal

cycle_length --> The number of input elements that will be processed concurrently. When set to 1, the elements are processed one by one.
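
A minimal sketch of that equivalence (a toy example):

ds = tf.data.Dataset.range(3)
make_ds = lambda x: tf.data.Dataset.from_tensors(x).repeat(2)

interleaved = ds.interleave(make_ds, cycle_length=1)
flat_mapped = ds.flat_map(make_ds)
# Both yield: 0, 0, 1, 1, 2, 2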

INTERLEAVE: Transformation operations like map can be parallelized.

With a parallel map, the CPU tries to parallelize the transformation, but the extraction of data from disk can still cause overhead.

Besides, once the raw bytes are read into memory, it may also be necessary to map a function over the data (for example, decompressing or decrypting it), which requires additional computation. To mitigate the impact of these data-extraction overheads, the extraction itself should be parallelized by interleaving the contents of the different datasets.

So, while reading the datasets, you want to maximize the overlap between extraction (I/O) and transformation (CPU):

Source of image: deeplearning.ai
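
Putting it together, a minimal sketch of a pipeline that parallelizes both extraction and transformation; parse_fn stands in for whatever per-record parsing function you use (as in the docs snippet above):

files = tf.data.Dataset.list_files("/path/to/dataset/train-*.tfrecord")
dataset = (
    files
    .interleave(tf.data.TFRecordDataset,   # parallel extraction (I/O)
                num_parallel_calls=tf.data.experimental.AUTOTUNE)
    .map(parse_fn,                         # parallel transformation (CPU)
         num_parallel_calls=tf.data.experimental.AUTOTUNE)
    .prefetch(tf.data.experimental.AUTOTUNE))  # overlap data loading with training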
