Is there a simple process-based parallel map for python?

Problem description

I'm looking for a simple process-based parallel map for python, that is, a function

parmap(function,[data])

that would run function on each element of [data] on a different process (well, on a different core, but AFAIK, the only way to run stuff on different cores in python is to start multiple interpreters), and return a list of results.

Does something like this exist? I would like something simple, so a simple module would be nice. Of course, if no such thing exists, I will settle for a big library :-/
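
For concreteness, the result should match what a plain sequential map would produce, just computed in separate processes. A purely sequential reference (the helper name below is hypothetical, not part of the question) would be:

def parmap_sequential_reference(function, data):
    # Same results and ordering the parallel version should produce,
    # but evaluated in a single process.
    return [function(x) for x in data]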

Recommended answer

It seems that what I need is the map method of multiprocessing.Pool():

map(func, iterable[, chunksize])

A parallel equivalent of the map() built-in function (it supports only
one iterable argument though). It blocks till the result is ready.

This method chops the iterable into a number of chunks which it submits to the 
process pool as separate tasks. The (approximate) size of these chunks can be 
specified by setting chunksize to a positive integer.
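
For instance, chunksize can be passed explicitly. A minimal sketch, in which the function name and the chunk size of 10 are arbitrary assumptions:

import multiprocessing

def square(x):
    return x**2

if __name__ == '__main__':
    with multiprocessing.Pool() as pool:
        # The 100 inputs are split into chunks of roughly 10 items each;
        # every chunk is submitted to a worker process as one task.
        results = pool.map(square, range(100), chunksize=10)
    print(results[:5])  # [0, 1, 4, 9, 16]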

For example, if you wanted to map this function:

def f(x):
    return x**2

to range(10), you could do it using the built-in map() function:

map(f, range(10))

or using the map() method of a multiprocessing.Pool() object:

import multiprocessing

if __name__ == '__main__':  # guard needed when workers are started via 'spawn' (e.g. on Windows)
    pool = multiprocessing.Pool()
    print(pool.map(f, range(10)))
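
Tying this back to the parmap the question asks for, a thin wrapper around Pool.map could look like the sketch below (the with-statement form of Pool requires Python 3.3+, and the wrapper itself is just one possible spelling, not something the answer prescribes):

import multiprocessing

def parmap(function, data):
    # One worker process per CPU core by default; the pool is torn down
    # automatically when the with-block exits.
    with multiprocessing.Pool() as pool:
        return pool.map(function, data)

def f(x):
    return x**2

if __name__ == '__main__':
    print(parmap(f, range(10)))  # [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]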
