Using 100% of all cores with the multiprocessing module


Question

I have two pieces of code that I'm using to learn about multiprocessing in Python 3.1. My goal is to use 100% of all the available processors. However, the code snippets here only reach 30% - 50% on all processors.

Is there any way to 'force' Python to use all 100%? Is the OS (Windows 7, 64-bit) limiting Python's access to the processors? While the code snippets below are running, I open the Task Manager and watch the processors spike, but they never reach and maintain 100%. In addition to that, I can see multiple python.exe processes created and destroyed along the way. How do these processes relate to processors? For example, if I spawn 4 processes, each process isn't using its own core. Instead, what are the processes using? Are they sharing all cores? And if so, is it the OS that is forcing the processes to share the cores?

Code snippet 1

import multiprocessing

def worker():
    #worker function
    print('Worker')
    x = 0
    while x < 1000:
        print(x)
        x += 1
    return

if __name__ == '__main__':
    jobs = []
    for i in range(50):
        p = multiprocessing.Process(target=worker)
        jobs.append(p)
        p.start()

Code snippet 2

from multiprocessing import Process, Lock

def f(l, i):
    l.acquire()
    print('worker ', i)
    x = 0
    while x < 1000:
        print(x)
        x += 1
    l.release()

if __name__ == '__main__': 
    lock = Lock()
    for num in range(50):
        Process(target=f, args=(lock, num)).start()

Answer

To use 100% of all cores, do not create and destroy new processes.

Create a few processes per core and link them with a pipeline.

At the OS level, all pipelined processes run concurrently.

The less you write (and the more you delegate to the OS), the more likely you are to use as many resources as possible.

python p1.py | python p2.py | python p3.py | python p4.py ...

Will make maximal use of your CPU.
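Each stage of such a shell pipeline is just a script that reads records from stdin, does its share of the work, and writes results to stdout for the next stage; the OS scheduler then runs all stages in parallel on whatever cores are free. A minimal sketch of one hypothetical stage (the file names p1.py, p2.py above and the transform below are placeholders, not code from the answer):

```python
# Hypothetical pipeline stage (e.g. p2.py): read lines from stdin,
# apply this stage's computation, and write each result to stdout so
# the next stage in the shell pipeline can consume it.
import sys

def transform(line):
    # Placeholder for this stage's real per-record work.
    return line.strip().upper()

if __name__ == '__main__':
    for line in sys.stdin:
        sys.stdout.write(transform(line) + '\n')
```

Because each stage blocks only on its own stdin/stdout, the OS keeps every stage's process runnable whenever input is available, spreading the stages across cores without any explicit process management in Python.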

