How can I parallelize a pipeline of generators/iterators in Python?


Question

Suppose I have some Python code like the following:

input = open("input.txt")
x = (process_line(line) for line in input)             # stage 1: parse each line
y = (process_item(item) for item in x)                 # stage 2: transform each item
z = (generate_output_line(item) + "\n" for item in y)  # stage 3: format for output
output = open("output.txt", "w")
output.writelines(z)                                   # consuming z drives the whole pipeline

This code reads each line from the input file, runs it through several functions, and writes the output to the output file. Now I know that the functions process_line, process_item, and generate_output_line will never interfere with each other, and let's assume that the input and output files are on separate disks, so that reading and writing will not interfere with each other.

But Python probably doesn't know any of this. My understanding is that Python will read one line, apply each function in turn, and write the result to the output, and then it will read the second line only after sending the first line to the output, so that the second line does not enter the pipeline until the first one has exited. Do I understand correctly how this program will flow? If this is how it works, is there any easy way to make it so that multiple lines can be in the pipeline at once, so that the program is reading, writing, and processing each step in parallel?
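
To make the flow concrete, here is a minimal sketch (with dummy stage functions and print statements, not my real code) showing that each item passes all the way through the pipeline before the next one is read:

# Minimal sketch with dummy stages: generator pipelines are lazy,
# so each line goes all the way through before the next is read.
def trace(stage, item):
    print(stage + ": " + item)
    return item

lines = iter(["a", "b"])
x = (trace("read", line) for line in lines)
y = (trace("process", item) for item in x)
for item in y:
    trace("write", item)
# Output order: read: a, process: a, write: a, read: b, process: b, write: b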

Recommended Answer

You can't really parallelize reading from or writing to files; these will be your bottleneck, ultimately. Are you sure your bottleneck here is CPU, and not I/O?

Since your processing contains no dependencies (according to you), it's trivially simple to use Python's multiprocessing.Pool class.

There are a couple of ways to write this, but the one that is easiest to debug is to find the independent critical path (the slowest part of the code) and make that part run in parallel. Let's presume it's process_item.
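
If it isn't obvious which stage is slowest, timing each one over a small sample settles it. A rough sketch, assuming the three stage functions from the question are already defined and that input.txt has at least 1000 lines:

import time

def time_stage(func, items):
    # Apply one stage to a sample of items and report how long it took.
    start = time.perf_counter()
    results = [func(item) for item in items]
    return time.perf_counter() - start, results

with open("input.txt") as f:
    sample = [next(f) for _ in range(1000)]  # assumes >= 1000 lines

t1, items = time_stage(process_line, sample)
t2, items = time_stage(process_item, items)
t3, _ = time_stage(generate_output_line, items)
print(t1, t2, t3)  # the largest number marks the stage worth parallelizing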

…And that's it, actually. Code:

import multiprocessing

p = multiprocessing.Pool() # use all available CPUs

input = open("input.txt")
x = (process_line(line) for line in input)
y = p.imap(process_item, x)  # process_item now runs in worker processes
z = (generate_output_line(item) + "\n" for item in y)
output = open("output.txt", "w")
output.writelines(z)

I haven't tested it, but this is the basic idea. Pool's imap method makes sure results are returned in the right order.
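
A couple of caveats when you adapt this: with the spawn start method (the default on Windows and recent macOS), process_item must be a module-level function so it can be pickled, and the pipeline driver belongs under an if __name__ == "__main__": guard; a chunksize above 1 also cuts inter-process overhead when each item is cheap. A fuller sketch along those lines, with placeholder stage functions standing in for yours:

import multiprocessing

# Placeholder stages standing in for the real functions from the question.
def process_line(line):
    return line.strip()

def process_item(item):      # must be module-level so it can be pickled
    return item.upper()

def generate_output_line(item):
    return item

if __name__ == "__main__":   # required with the "spawn" start method
    with multiprocessing.Pool() as p, \
         open("input.txt") as infile, \
         open("output.txt", "w") as outfile:
        x = (process_line(line) for line in infile)
        # chunksize batches items per worker round-trip, reducing IPC overhead
        y = p.imap(process_item, x, chunksize=64)
        z = (generate_output_line(item) + "\n" for item in y)
        outfile.writelines(z)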
