Exploiting Dual Core's with Py_NewInterpreter's separated GIL?


Question


I'd like to use multiple CPU cores for selected time-consuming Python computations (incl. numpy/scipy) in a frictionless manner.

Interprocess communication is tedious and out of the question, so I thought about simply using more Python interpreter instances (Py_NewInterpreter), each with its own extra GIL, in the same process.
I expect to be able to directly push Python Object-Trees around between the 2 (or more) interpreters by doing some careful locking.

Any hope to come through? If possible, what are the main dangers? Is there an example / guideline around for that task - using ctypes or so?

Or is there even a ready-made Python module which makes it easy to set up and deal with extra interpreter instances?
If not, would it be an idea to create such a thing in the Python std libs to make Python multi-processor-ready? I guess Python will always have a GIL - otherwise it would lose a lot of comfort in threaded programming.
robert
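
For context, here is a minimal sketch of the C-API mechanism robert is referring to - an embedding application creating an extra interpreter instance with Py_NewInterpreter() (a sketch assuming the Python 2.x C API of that era; the embedded script string is illustrative only). As the answers below point out, the extra interpreter does not actually get its own GIL:

#include <Python.h>

int main(void)
{
    PyThreadState *main_ts, *sub_ts;

    Py_Initialize();                /* main interpreter; this thread holds the GIL */
    main_ts = PyThreadState_Get();

    sub_ts = Py_NewInterpreter();   /* extra interpreter in the same process,
                                       becomes the current thread state */
    if (sub_ts == NULL) {
        PyThreadState_Swap(main_ts);
        Py_Finalize();
        return 1;
    }

    /* runs inside the sub-interpreter (own modules, own sys.path, ...) */
    PyRun_SimpleString("import sys; print 'sub-interpreter:', sys.prefix");

    Py_EndInterpreter(sub_ts);      /* tear down the extra interpreter */
    PyThreadState_Swap(main_ts);    /* switch back to the main interpreter */
    Py_Finalize();
    return 0;
}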

Answers

robert wrote:

> I'd like to use multiple CPU cores for selected time-consuming Python
> computations (incl. numpy/scipy) in a frictionless manner.
>
> Interprocess communication is tedious and out of the question, so I thought
> about simply using more Python interpreter instances
> (Py_NewInterpreter), each with its own extra GIL, in the same process.




If I understand Python/ceval.c, the GIL is really global, not specific
to an interpreter instance:
static PyThread_type_lock interpreter_lock = 0; /* This is the GIL */

Daniel
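
In other words (a hedged sketch against the CPython 2.x C API): even if every native thread is bound to its own sub-interpreter, each one still has to go through PyEval_AcquireThread(), which blocks on that single static interpreter_lock, so two interpreters never execute bytecode truly in parallel:

#include <Python.h>

/* Body of one worker OS thread.  'ts' is a thread state belonging to one
 * particular (sub-)interpreter, e.g. created for this thread with
 * PyThreadState_New(interp) after Py_NewInterpreter() was called in main. */
static void worker(PyThreadState *ts)
{
    PyEval_AcquireThread(ts);                  /* blocks on the one process-wide GIL     */
    PyRun_SimpleString("sum(xrange(10**7))");  /* CPU-bound work, fully serialized       */
    PyEval_ReleaseThread(ts);                  /* let the other interpreter's thread run */
}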


Feature Request: Py_NewInterpreter to create separate GIL (branch)

Daniel Dittmar wrote:

> robert wrote:

>> I'd like to use multiple CPU cores for selected time-consuming Python
>> computations (incl. numpy/scipy) in a frictionless manner.
>>
>> Interprocess communication is tedious and out of the question, so I
>> thought about simply using more Python interpreter instances
>> (Py_NewInterpreter), each with its own extra GIL, in the same process.





> If I understand Python/ceval.c, the GIL is really global, not specific
> to an interpreter instance:
>
> static PyThread_type_lock interpreter_lock = 0; /* This is the GIL */




That's the show stopper as of now.
There are only a handful of funcs in ceval.c that use that very global lock. The rest uses those funcs around thread states.

Would it be a possibility in the next Python to have the lock separate for each interpreter instance?
Thus: have *interpreter_lock separate in each PyThreadState instance, so that only threads of the same interpreter share the same GIL?
Separation between interpreters seems to be enough. The interpreter runs mainly on the stack. Possibly only very few global C-level resources would require individual extra locks.

Sooner or later Python will have to answer the multi-processor question.
A per-interpreter GIL and a nice module for tunneling Python objects directly between interpreters inside one process might be the answer at the right border-line? Existing extension code would remain compatible, as far as there is already decent locking on module globals, which is the usual case.

Robert
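
Purely to illustrate what is being proposed, a hypothetical sketch (this is NOT existing CPython code; the Mock* types below only stand in for CPython's real structures): the one static interpreter_lock in ceval.c would move into the per-interpreter state, so a thread only contends with other threads of the same interpreter:

/* Hypothetical sketch of a per-interpreter GIL -- not how CPython works today. */

typedef void *PyThread_type_lock;                      /* as in pythread.h */
#define WAIT_LOCK 1
int PyThread_acquire_lock(PyThread_type_lock, int);    /* CPython's low-level lock API */

typedef struct {
    /* ... the usual interpreter-wide fields (modules, sysdict, ...) ... */
    PyThread_type_lock gil;          /* proposed: one GIL per interpreter   */
} MockInterpreterState;              /* stands in for PyInterpreterState    */

typedef struct {
    MockInterpreterState *interp;    /* stands in for PyThreadState->interp */
} MockThreadState;

/* Instead of taking the single global interpreter_lock, a thread would take
 * only the GIL of the interpreter its thread state belongs to: */
static void acquire_interp_gil(MockThreadState *tstate)
{
    PyThread_acquire_lock(tstate->interp->gil, WAIT_LOCK);
}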


robert wrote:

> I'd like to use multiple CPU cores for selected time-consuming Python computations (incl. numpy/scipy) in a frictionless manner.
>
> Interprocess communication is tedious and out of the question, so I thought about simply using more Python interpreter instances (Py_NewInterpreter), each with its own extra GIL, in the same process.
> I expect to be able to directly push Python Object-Trees around between the 2 (or more) interpreters by doing some careful locking.




I don't want to discourage you, but what about reference counting / memory management for shared objects? Doesn't seem like fun to me.
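
To make that concern concrete: in a release build the refcount macros are plain, unlocked read-modify-write operations on ob_refcnt, roughly (abridged from CPython's Include/object.h):

#define Py_INCREF(op)  ((op)->ob_refcnt++)

#define Py_DECREF(op)                          \
    if (--(op)->ob_refcnt != 0)                \
        ;                                      \
    else                                       \
        _Py_Dealloc((PyObject *)(op))

Neither is atomic, so two truly parallel threads (i.e. interpreters without a common GIL) touching a shared object could lose an increment, or run the deallocation while the other interpreter still holds a reference - exactly the kind of thing the "careful locking" from the original post would have to cover.
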
Take a look at IPython1 and its parallel computing capabilities [1, 2]. It is designed to run on multiple systems or on a single system with multiple CPUs / multiple cores. Its worker interpreters (engines) are loosely coupled and can utilize several MPI modules, so there is no low-level messing with the GIL. Although it is a work in progress, it already looks quite awesome.

[1] http://ipython.scipy.org/moin/Parallel_Computing
[2] http://ipython.scipy.org/moin/Parall...uting/Tutorial

fw

