Why does multiprocessing use only a single core after I import numpy?

Problem description

I am not sure whether this counts more as an OS issue, but I thought I would ask here in case anyone has some insight from the Python end of things.

I've been trying to parallelise a CPU-heavy for loop using joblib, but I find that instead of each worker process being assigned to a different core, I end up with all of them being assigned to the same core and no performance gain.

Here's a very trivial example...

from joblib import Parallel, delayed
import numpy as np

def testfunc(data):
    # some very boneheaded CPU work
    for nn in range(1000):
        for ii in data[0, :]:
            for jj in data[1, :]:
                ii * jj

def run(niter=10):
    data = (np.random.randn(2, 100) for ii in range(niter))
    pool = Parallel(n_jobs=-1, verbose=1, pre_dispatch='all')
    results = pool(delayed(testfunc)(dd) for dd in data)

if __name__ == '__main__':
    run()

...and here's what I see in htop while this script is running:

[htop screenshot: all of the worker processes running on a single core]

I'm running Ubuntu 12.10 (3.5.0-26) on a laptop with 4 cores. Clearly joblib.Parallel is spawning separate processes for the different workers, but is there any way that I can make these processes execute on different cores?

Recommended answer

After some more googling I found the answer here.

It turns out that certain Python modules (numpy, scipy, tables, pandas, skimage...) mess with core affinity on import. As far as I can tell, this problem seems to be specifically caused by them linking against multithreaded OpenBLAS libraries.
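A quick way to check whether your build is affected (a diagnostic sketch, not from the original answer; it assumes Linux, where the stdlib call `os.sched_getaffinity` is available in Python 3.3+):

```python
import os

# Which CPUs may this process run on right now? (Linux-only stdlib call)
before = sorted(os.sched_getaffinity(0))

try:
    import numpy  # on an affected OpenBLAS build, this import shrinks the mask
except ImportError:
    numpy = None  # numpy not installed; the mask will simply be unchanged

after = sorted(os.sched_getaffinity(0))
print("before import:", before)
print("after import: ", after)
```

If `after` lists fewer CPUs than `before`, the import has reset your process's affinity.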

A workaround is to reset the task affinity using

import os
os.system("taskset -p 0xff %d" % os.getpid())  # 0xff: allow CPUs 0-7; widen the mask for machines with more cores

With this line pasted in after the module imports, my example now runs on all cores:

[htop screenshot: worker processes spread across all four cores]
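On Python 3 the same reset can be done without shelling out, via the stdlib `os.sched_setaffinity` (Linux-only; a sketch, not part of the original answer — here the single-core pinning is simulated rather than caused by an import):

```python
import os

all_cpus = os.sched_getaffinity(0)  # every CPU this process may currently use

# Simulate the problem: pin ourselves to a single core, as the buggy import does.
os.sched_setaffinity(0, {min(all_cpus)})
assert os.sched_getaffinity(0) == {min(all_cpus)}

# The workaround, stdlib-style: restore the full mask
# (equivalent in spirit to `taskset -p 0xff <pid>`).
os.sched_setaffinity(0, all_cpus)
print(sorted(os.sched_getaffinity(0)))
```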

My experience so far has been that this doesn't seem to have any negative effect on numpy's performance, although this is probably machine- and task-specific.

There are also two ways to disable the CPU affinity-resetting behaviour of OpenBLAS itself. At run-time you can use the environment variable OPENBLAS_MAIN_FREE (or GOTOBLAS_MAIN_FREE), for example

OPENBLAS_MAIN_FREE=1 python myscript.py

Or alternatively, if you're compiling OpenBLAS from source you can permanently disable it at build-time by editing the Makefile.rule to contain the line

NO_AFFINITY=1
