Using 100% of all cores with the multiprocessing module

Question

I have two pieces of code that I'm using to learn about multiprocessing in Python 3.1. My goal is to use 100% of all the available processors. However, the code snippets here only reach 30% - 50% on all processors.

Is there any way to 'force' Python to use all 100%? Is the OS (Windows 7, 64-bit) limiting Python's access to the processors? While the code snippets below are running, I open the Task Manager and watch the processors spike, but they never reach and maintain 100%. In addition to that, I can see multiple python.exe processes created and destroyed along the way. How do these processes relate to processors? For example, if I spawn 4 processes, each process isn't using its own core. Instead, what are the processes using? Are they sharing all cores? And if so, is it the OS that is forcing the processes to share the cores?

Snippet 1

import multiprocessing

def worker():
    # Worker function: print a counter from 0 to 999.
    print('Worker')
    x = 0
    while x < 1000:
        print(x)
        x += 1
    return

if __name__ == '__main__':
    jobs = []
    for i in range(50):
        p = multiprocessing.Process(target=worker)
        jobs.append(p)
        p.start()

Snippet 2

from multiprocessing import Process, Lock

def f(l, i):
    # The lock is held for the entire count, so the workers run one at a time.
    l.acquire()
    print('worker ', i)
    x = 0
    while x < 1000:
        print(x)
        x += 1
    l.release()

if __name__ == '__main__':
    lock = Lock()
    for num in range(50):
        Process(target=f, args=(lock, num)).start()

Answer

To use 100% of all cores, do not create and destroy new processes.
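
The answer gives no code for this, but the usual multiprocessing idiom for reusing a fixed set of workers is a Pool; here is a minimal sketch (the task function and its arguments are placeholders, not from the original answer):

import multiprocessing

def work(n):
    # Placeholder CPU-bound task: sum of squares up to n.
    total = 0
    for i in range(n):
        total += i * i
    return total

if __name__ == '__main__':
    # One process per core, created once and reused for all 50 tasks.
    pool = multiprocessing.Pool(processes=multiprocessing.cpu_count())
    results = pool.map(work, [10**6] * 50)
    pool.close()
    pool.join()

Because the same processes handle every task, no time is lost spawning and tearing down python.exe instances.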

Create a few processes per core and link them with a pipeline.

At the OS-level, all pipelined processes run concurrently.
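
As a rough sketch of what such an in-Python pipeline could look like (the stage wiring, the sentinel protocol, and the arithmetic are illustrative assumptions, not code from the original answer):

import multiprocessing

def stage(in_q, out_q):
    # Each stage pulls items, does some CPU-bound work, and forwards results.
    while True:
        item = in_q.get()
        if item is None:                      # sentinel: pass it on and stop
            out_q.put(None)
            break
        out_q.put((item * item) % 1000003)    # stand-in for real per-stage work

if __name__ == '__main__':
    n_stages = multiprocessing.cpu_count()
    # n_stages processes chained together by n_stages + 1 queues.
    queues = [multiprocessing.Queue() for _ in range(n_stages + 1)]
    procs = [multiprocessing.Process(target=stage, args=(queues[i], queues[i + 1]))
             for i in range(n_stages)]
    for p in procs:
        p.start()
    for item in range(1000):
        queues[0].put(item)                   # feed the first stage
    queues[0].put(None)                       # sentinel shuts the pipeline down
    # Drain the last queue so the final stage can flush its buffer and exit.
    while queues[-1].get() is not None:
        pass
    for p in procs:
        p.join()

While one stage waits on its queue, the others keep computing, so with one stage per core the work overlaps across all cores.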

The less you write (and the more you delegate to the OS) the more likely you are to use as many resources as possible.

python p1.py | python p2.py | python p3.py | python p4.py ...

This will make maximal use of your CPU.
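
Here is a sketch of what one of the intermediate stage scripts in that command line might contain (the first stage would generate data instead of reading it; the filenames come from the command above, and the arithmetic is a placeholder for real work):

# pN.py - one stage of the shell pipeline: read lines from stdin,
# do some CPU-bound work on each, and write the results to stdout.
import sys

for line in sys.stdin:
    n = int(line)
    for _ in range(10000):        # stand-in for this stage's real work
        n = (n * n) % 1000003
    print(n)

The shell starts every interpreter in the pipeline at once, and the OS schedules each one on whichever core is free, so the stages run in parallel for as long as data keeps flowing.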
