How to use C extensions in Python to get around the GIL


Question

I want to run a cpu intensive program in Python across multiple cores and am trying to figure out how to write C extensions to do this. Are there any code samples or tutorials on this?

Answer

You can already break a Python program into multiple processes. The OS will already allocate your processes across all the cores.

Do this:

python part1.py | python part2.py | python part3.py | ... etc.

The OS will assure that each part uses as many resources as possible. You can trivially pass information along this pipeline by using cPickle on sys.stdin and sys.stdout.
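As a concrete sketch, here is what one stage of such a pipeline (a hypothetical part2.py) might look like. It reads pickled objects from stdin until EOF, transforms them, and writes pickled results to stdout; in Python 3, cPickle is simply called pickle. The stream arguments are parameterized only so the stage is easy to test in isolation.

```python
import pickle
import sys


def run_stage(transform, src=None, dst=None):
    """Read pickled objects until EOF, apply transform, pickle each result.

    src/dst default to the binary stdin/stdout of this process, so the
    function can sit in the middle of a shell pipeline.
    """
    src = src if src is not None else sys.stdin.buffer
    dst = dst if dst is not None else sys.stdout.buffer
    while True:
        try:
            item = pickle.load(src)
        except EOFError:
            # Upstream process closed its end of the pipe; we are done.
            break
        pickle.dump(transform(item), dst)
    dst.flush()
```

A real part2.py would end with a call like `run_stage(my_transform)` so it can be dropped into the shell pipeline shown above.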

Without too much work, this can often lead to dramatic speedups.

Yes -- to the haterz -- it's possible to construct an algorithm so tortured that it may not be sped up much. However, this often yields huge benefits for minimal work.

Then:

The restructuring for this purpose will exactly match the restructuring required to maximize thread concurrency. So. Start with shared-nothing process parallelism until you can prove that sharing more data would help, then move to the more complex shared-everything thread parallelism.
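The same shared-nothing style also works inside a single script via the standard-library multiprocessing module (not mentioned in the answer above, but the usual in-process equivalent of the shell pipeline): each worker is a separate process with its own interpreter and its own GIL, so CPU-bound work genuinely runs on separate cores. A minimal sketch:

```python
import multiprocessing


def cpu_bound(n):
    """A stand-in for real CPU-intensive work."""
    return sum(i * i for i in range(n))


def parallel_map(func, items, workers=4):
    # Inputs and results are pickled across process boundaries; there is
    # no shared state -- exactly the shared-nothing model described above.
    with multiprocessing.Pool(processes=workers) as pool:
        return pool.map(func, items)


if __name__ == "__main__":
    print(parallel_map(cpu_bound, [10, 100, 1000]))
```

Note that `func` must be defined at module top level so the worker processes can find it by name.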
