How to use keras.utils.Sequence data generator with tf.distribute.MirroredStrategy for multi-gpu model training in tensorflow?

Question

I want to train a model on several GPUs using TensorFlow 2.0. In the TensorFlow tutorial for distributed training (https://www.tensorflow.org/guide/distributed_training), a tf.data dataset is converted into a distributed dataset as follows:

dist_dataset = mirrored_strategy.experimental_distribute_dataset(dataset)

However, I want to use my own custom data generator instead (for example, a keras.utils.Sequence data generator, along with keras.utils.data_utils.OrderedEnqueuer for asynchronous batch generation). But the mirrored_strategy.experimental_distribute_dataset method only accepts a tf.data dataset. How do I use the Keras data generator instead?
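
For reference, a keras.utils.Sequence is just a class that implements __len__ (number of batches per epoch) and __getitem__ (return one batch by index). A minimal sketch of such a generator; the class name, data, and batch size here are made up for illustration:

import math
import numpy as np
from tensorflow.keras.utils import Sequence

class MySequence(Sequence):
    # Hypothetical Sequence yielding (x, y) batches from in-memory arrays
    def __init__(self, x, y, batch_size=32):
        self.x, self.y, self.batch_size = x, y, batch_size

    def __len__(self):
        # Number of batches per epoch
        return math.ceil(len(self.x) / self.batch_size)

    def __getitem__(self, idx):
        # Return the idx-th batch
        sl = slice(idx * self.batch_size, (idx + 1) * self.batch_size)
        return self.x[sl], self.y[sl]

seq = MySequence(np.zeros((100, 64, 64, 3)), np.zeros(100), batch_size=8)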

Thanks!

Answer

I used tf.data.Dataset.from_generator to wrap my keras.utils.Sequence in the same situation, and it solved my issues!

import tensorflow as tf
from tensorflow.keras.utils import OrderedEnqueuer

train_generator = SegmentationMultiGenerator(datasets, folder)  # My keras.utils.Sequence object

def generator():
    # Run the Sequence through an OrderedEnqueuer so batches are produced
    # asynchronously by worker processes.
    multi_enqueuer = OrderedEnqueuer(train_generator, use_multiprocessing=True)
    multi_enqueuer.start(workers=10, max_queue_size=10)
    batch_queue = multi_enqueuer.get()  # get() returns a generator; fetch it once
    while True:
        batch_xs, batch_ys, dset_index = next(batch_queue)  # I have three outputs
        yield batch_xs, batch_ys, dset_index

dataset = tf.data.Dataset.from_generator(generator,
                                         output_types=(tf.float64, tf.float64, tf.int64),
                                         output_shapes=(tf.TensorShape([None, None, None, None]),
                                                        tf.TensorShape([None, None, None, None]),
                                                        tf.TensorShape([None, None])))

strategy = tf.distribute.MirroredStrategy()

train_dist_dataset = strategy.experimental_distribute_dataset(dataset)
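
To show how train_dist_dataset can then be consumed, here is a sketch of a custom training loop using strategy.run (available since TF 2.2; in TF 2.0 the equivalent call was strategy.experimental_run_v2). The model, optimizer, and loss below are placeholders, not part of the original answer:

with strategy.scope():
    # Hypothetical model/optimizer/loss; variables must be created in scope
    model = tf.keras.Sequential([tf.keras.layers.Conv2D(8, 3, padding='same')])
    optimizer = tf.keras.optimizers.Adam()
    loss_fn = tf.keras.losses.MeanSquaredError(
        reduction=tf.keras.losses.Reduction.NONE)

@tf.function
def train_step(batch_xs, batch_ys):
    def step_fn(xs, ys):
        with tf.GradientTape() as tape:
            preds = model(xs, training=True)
            per_example_loss = loss_fn(ys, preds)
            # Average over the global batch, not the per-replica batch
            loss = tf.nn.compute_average_loss(per_example_loss)
        grads = tape.gradient(loss, model.trainable_variables)
        optimizer.apply_gradients(zip(grads, model.trainable_variables))
        return loss
    # Each replica runs step_fn on its shard of the distributed batch
    per_replica_loss = strategy.run(step_fn, args=(batch_xs, batch_ys))
    return strategy.reduce(tf.distribute.ReduceOp.SUM, per_replica_loss, axis=None)

for batch_xs, batch_ys, dset_index in train_dist_dataset:
    loss = train_step(batch_xs, batch_ys)  # dset_index unused in this sketch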

Note that this is my first working solution. For now I have found it most convenient to just put None in place of the real output shapes, which works.
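
If you later want to pin the shapes down, TF 2.4+ also accepts an output_signature of tf.TensorSpec objects in place of output_types/output_shapes. A sketch, with made-up shapes:

dataset = tf.data.Dataset.from_generator(
    generator,
    output_signature=(
        tf.TensorSpec(shape=(None, 256, 256, 3), dtype=tf.float64),  # batch_xs
        tf.TensorSpec(shape=(None, 256, 256, 1), dtype=tf.float64),  # batch_ys
        tf.TensorSpec(shape=(None, 1), dtype=tf.int64),              # dset_index
    ))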
