Multiprocess vs Multithread Python time taken


Problem Description

I have 2 simple functions (each loops over a range) that can run separately without any dependency. I'm trying to run these 2 functions using both the Python multiprocessing module and the threading module.

When I compared the output, I saw that the multiprocessing application takes 1 second more than the multithreading one.

I read that multithreading is not that efficient because of the Global Interpreter Lock...

Based on the above statements:
1. Is it best to use multiprocessing if there is no dependency between the 2 processes?
2. How do I calculate the number of processes/threads that I can run on my machine for maximum efficiency?
3. Also, is there a way to calculate the efficiency of the program when using multithreading?

Multithread module...

import time
import threading

class Thread1(threading.Thread):
    def __init__(self, threadindicator):
        threading.Thread.__init__(self)
        self.threadind = threadindicator

    def run(self):
        starttime = time.time()
        if self.threadind == 'A':
            process1()
        else:
            process2()
        endtime = time.time()
        # Report which thread finished (the original always printed "Thread 1")
        print('Thread', self.threadind, 'complete : Time Taken = ', endtime - starttime)

def process1():
    for i in range(100000):
        for j in range(10000):
            pass

def process2():
    for i in range(1000):
        for j in range(1000):
            pass

def main():
    print('Main Thread')
    starttime = time.time()
    thread1 = Thread1('A')
    thread2 = Thread1('B')
    thread1.start()
    thread2.start()
    threads = [thread1, thread2]

    for t in threads:
        t.join()
    endtime = time.time()
    print('Main Thread Complete , Total Time Taken = ', endtime - starttime)


if __name__ == '__main__':
    main()

Multiprocess module...

from multiprocessing import Process
import time

def process1():
    starttime = time.time()
    for i in range(100000):
        for j in range(10000):
            pass
    endtime = time.time()
    print('Process 1 complete : Time Taken = ', endtime - starttime)


def process2():
    starttime = time.time()
    for i in range(1000):
        for j in range(1000):
            pass
    endtime = time.time()
    print('Process 2 complete : Time Taken = ', endtime - starttime)

def main():
    print('Main Process start')
    starttime = time.time()
    processlist = []

    p1 = Process(target=process1)
    p1.start()
    processlist.append(p1)

    p2 = Process(target=process2)
    p2.start()
    processlist.append(p2)

    for p in processlist:
        p.join()
    endtime = time.time()
    print('Main Process Complete - Total time taken = ', endtime - starttime)

if __name__ == '__main__':
    main()

Solution

If you have two CPUs available on your machine, you have two processes which don't have to communicate, and you want to use both of them to make your program faster, you should use the multiprocessing module, rather than the threading module.

The Global Interpreter Lock (GIL) prevents the Python interpreter from making efficient use of more than one CPU by using multiple threads, because only one thread can be executing Python bytecode at a time. Therefore, multithreading won't improve the overall runtime of your application unless you have calls that are blocking (e.g. waiting for IO) or that release the GIL (e.g. numpy will do this for some expensive calls) for extended periods of time. However, the multiprocessing library creates separate subprocesses, and therefore several copies of the interpreter, so it can make efficient use of multiple CPUs.
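The contrast can be sketched with `concurrent.futures`, which offers thread-based and process-based pools behind the same interface. The function name and loop size below are illustrative; the point is that a pure-Python CPU-bound task holds the GIL the whole time, so only the process pool can spread it across CPUs:

```python
import time
from concurrent.futures import ThreadPoolExecutor, ProcessPoolExecutor

def cpu_bound(n):
    # Pure-Python arithmetic: holds the GIL for its entire run
    total = 0
    for i in range(n):
        total += i * i
    return total

def timed(executor_cls, n_tasks=4, n=1_000_000):
    # Run n_tasks copies of the same CPU-bound job and time the batch
    start = time.time()
    with executor_cls(max_workers=n_tasks) as ex:
        results = list(ex.map(cpu_bound, [n] * n_tasks))
    return time.time() - start, results

if __name__ == '__main__':
    t_threads, r1 = timed(ThreadPoolExecutor)
    t_procs, r2 = timed(ProcessPoolExecutor)
    assert r1 == r2  # both pools compute the same answers
    print('threads   : %.2fs' % t_threads)  # roughly serial: the GIL serializes the work
    print('processes : %.2fs' % t_procs)    # can use several CPUs at once
```

On a multi-core machine the thread-pool timing is typically close to the sum of the per-task times, while the process pool approaches the time of a single task (plus process startup overhead).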

However, in the example you gave, you have one process that finishes very quickly (less than 0.1 seconds on my machine) and another that takes around 18 seconds to finish. The exact numbers may vary depending on your hardware. In that case, nearly all the work happens in one process, so you're really only using one CPU regardless. Here, the added overhead of spawning processes rather than threads is probably what makes the process-based version slower.

If you make both processes do the 18 second nested loops, you should see that the multiprocessing code goes much faster (assuming your machine actually has more than one CPU). On my machine, I saw the multiprocessing code finish in around 18.5 seconds, and the multithreaded code finish in 71.5 seconds. I'm not sure why the multithreaded one took longer than around 36 seconds, but my guess is the GIL is causing some sort of thread contention issue which is slowing down both threads from executing.

As for your second question, assuming there's no other load on the system, you should use a number of processes equal to the number of CPUs on your system. You can discover this by doing lscpu on a Linux system, sysctl hw.ncpu on a Mac system, or running dxdiag from the Run dialog on Windows (there's probably other ways, but this is how I always do it).
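You can also ask from within Python itself, which avoids platform-specific tools entirely:

```python
import os
import multiprocessing

# Both report the number of CPUs the operating system exposes;
# multiprocessing.cpu_count() raises if it cannot be determined,
# while os.cpu_count() returns None in that case.
print('CPUs:', multiprocessing.cpu_count())
print('CPUs:', os.cpu_count())
```

This is usually the value you want for `max_workers` or the number of `Process` objects to spawn for CPU-bound work.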

For the third question, the simplest way to figure out how much efficiency you're getting from the extra processes is just to measure the total runtime of your program, using time.time() as you were, or the time utility in Linux (e.g. time python myprog.py). The ideal speedup should be equal to the number of processes you're using, so a 4 process program running on 4 CPUs should be at most 4x faster than the same program with 1 process, assuming you get maximum benefit from the extra processes. If the other processes aren't helping you that much, it will be less than 4x.
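The arithmetic above can be sketched in a few lines; the measured times here are made-up placeholders for illustration, standing in for wall-clock times you would record with `time.time()` or the `time` utility:

```python
# Hypothetical measured wall-clock times, in seconds
t_serial = 72.0    # same workload with 1 process
t_parallel = 18.5  # workload split across 4 processes
n_processes = 4

speedup = t_serial / t_parallel       # ideal speedup would equal n_processes (4.0)
efficiency = speedup / n_processes    # fraction of the ideal actually achieved
print('speedup = %.2fx, efficiency = %.0f%%' % (speedup, efficiency * 100))
```

An efficiency near 100% means the extra processes are pulling their weight; a much lower figure suggests the work is unevenly divided or dominated by one task, as in the example above.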
