Benchmark of HowTo: Reading Data

Problem Description

I'm using tensorflow 0.10 and I was benchmarking the examples found in the official HowTo on reading data. This HowTo illustrates different methods to move data to tensorflow, using the same MNIST example.

I was surprised by the results and I was wondering if anyone has enough low-level understanding to explain what is happening.

In the HowTo there are basically 3 methods to read in data (sketched in code right after this list):

  • Feeding: building the mini-batch in python and passing it with sess.run(..., feed_dict={x: mini_batch})
  • Reading from files: use tf operations to open the files and create mini-batches. (Bypass handling data in python.)
  • Preloaded data: load all the data in either a single tf variable or constant and use tf functions to break that up in mini-batches. The variable or constant is pinned to the cpu, not gpu.
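
To make these concrete, here is a minimal sketch of the first and third patterns in the pre-1.0 API the question uses; shapes and names are illustrative, not taken from the HowTo scripts:

    import numpy as np
    import tensorflow as tf

    # 1) Feeding: the mini-batch is built in Python and pushed in per step.
    x = tf.placeholder(tf.float32, shape=[None, 784])
    w = tf.Variable(tf.zeros([784, 10]))
    logits = tf.matmul(x, w)

    # 3) Preloaded data (constant), pinned to the CPU as described above.
    with tf.device('/cpu:0'):
        all_data = tf.constant(np.random.rand(1000, 784).astype(np.float32))
    batch = tf.slice(all_data, [0, 0], [100, 784])  # one way to carve a mini-batch

    with tf.Session() as sess:
        sess.run(tf.initialize_all_variables())
        mini_batch = np.random.rand(100, 784).astype(np.float32)
        sess.run(logits, feed_dict={x: mini_batch})  # feeding
        sess.run(batch)                              # preloaded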

The scripts I used to run my benchmarks are found within the tensorflow repository itself.

I ran those scripts unmodified, except for the last two, because they crash (for version 0.10 at least) unless I add an extra sess.run(tf.initialize_local_variables()).
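
For reference, that extra initialization would sit next to the usual variable init; a sketch in the 0.10-era API (not the scripts' actual code):

    import tensorflow as tf

    # Sketch: the preloaded-data scripts crash on 0.10 without the second
    # init call, because things like epoch counters live in local variables.
    with tf.Session() as sess:
        sess.run(tf.initialize_all_variables())
        sess.run(tf.initialize_local_variables())  # the extra line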

Main Question

The time to execute 100 mini-batches of 100 examples running on a GTX1060:

  • Feeding: ~0.001 s
  • Reading from files: ~0.010 s
  • Preloaded data (constant): ~0.010 s
  • Preloaded data (variable): ~0.010 s
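
For context, the measurement loop behind these numbers could look like the hypothetical helper below (train_step stands in for whatever op each script runs per mini-batch):

    import time

    def time_mini_batches(sess, train_step, n_steps=100):
        """Time n_steps sess.run calls, one mini-batch each."""
        start = time.time()
        for _ in range(n_steps):
            sess.run(train_step)
        return time.time() - start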

Those results are quite surprising to me. I would have expected Feeding to be the slowest, since it does almost everything in Python, while the other methods use lower-level tensorflow/C++ to carry out similar operations. The results are the complete opposite of what I expected. Does anyone understand what is going on?

Secondary Question

I have access to another machine which has a Titan X and older NVidia drivers. The relative results were roughly in line with the above, except for Preloaded data (constant) which was catastrophically slow, taking many seconds for a single mini-batch.

Is it a known issue that performance can vary greatly across hardware/drivers?

Solution

Update Oct 9: the slowness comes from the computation running too fast for Python to pre-empt the computation thread and schedule the pre-fetching threads. Computation in the main thread takes 2 ms, and apparently that's too little for a pre-fetching thread to grab the GIL. The pre-fetching threads have larger delays and hence can always be pre-empted by the computation thread. So the computation thread runs through all of the examples, and then spends most of its time blocked on the GIL as some prefetching thread gets scheduled and enqueues a single example. The solution is to increase the number of Python threads, increase the queue size to fit the entire dataset, start the queue runners, and then pause the main thread for a couple of seconds to give the queue runners time to pre-populate the queue.
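
A sketch of that workaround on a toy queue (the capacity, thread count, and enqueued tensor are illustrative, not values from the answer):

    import time
    import tensorflow as tf

    # Queue big enough for the entire (toy) dataset, fed by many threads.
    queue = tf.FIFOQueue(capacity=60000, dtypes=[tf.float32], shapes=[[784]])
    enqueue_op = queue.enqueue(tf.random_uniform([784]))  # stand-in example
    tf.train.add_queue_runner(
        tf.train.QueueRunner(queue, [enqueue_op] * 8))  # more Python threads
    batch = queue.dequeue_many(100)

    with tf.Session() as sess:
        coord = tf.train.Coordinator()
        threads = tf.train.start_queue_runners(sess=sess, coord=coord)
        time.sleep(5)  # pause so the runners can pre-populate the queue
        sess.run(batch)  # dequeue no longer starves behind the GIL
        coord.request_stop()
        coord.join(threads)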

Old stuff

That's surprisingly slow.

This looks like some kind of special case that makes the last 3 examples unnecessarily slow (most of the optimization effort went into large models like ImageNet, so MNIST didn't get as much attention).

You can diagnose the problem by collecting timelines, as described here.
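
In the pre-1.0 API, collecting a timeline looks roughly like this (the matmul is a stand-in for the op being benchmarked):

    import tensorflow as tf
    from tensorflow.python.client import timeline

    # Trace one sess.run and dump a Chrome trace viewable in chrome://tracing.
    op = tf.matmul(tf.random_uniform([100, 100]), tf.random_uniform([100, 100]))
    run_options = tf.RunOptions(trace_level=tf.RunOptions.FULL_TRACE)
    run_metadata = tf.RunMetadata()

    with tf.Session() as sess:
        sess.run(op, options=run_options, run_metadata=run_metadata)
        trace = timeline.Timeline(step_stats=run_metadata.step_stats)
        with open('timeline.json', 'w') as f:
            f.write(trace.generate_chrome_trace_format())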

Here are 3 of those examples with timeline collection enabled.

Here's the timeline for the feed_dict implementation:

The important thing to notice is that matmul takes a good chunk of the time, so the reading overhead is not significant.

Now here's the timeline for the reader implementation:

You can see that the operation is bottlenecked on QueueDequeueMany, which takes a whopping 45 ms.

If you zoom in, you'll see a bunch of tiny MEMCPY and Cast operations, which is a sign of some op being CPU-only (parse_single_example), and of the dequeue having to schedule multiple independent CPU->GPU transfers.

For the var example below with the GPU disabled, I don't see the tiny ops, but QueueDequeueMany still takes over 10 ms. The timing seems to scale linearly with batch size, so there's some fundamental slowness there. Filed #4740
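
One common way to disable the GPU for such a run, for anyone reproducing this (not necessarily how the answer did it):

    import tensorflow as tf

    # Hide all GPUs from this process so every op is placed on the CPU.
    config = tf.ConfigProto(device_count={'GPU': 0})
    with tf.Session(config=config) as sess:
        pass  # run the benchmark ops here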
