multiprocessing pool not working in nested functions


Problem description

The following code does not execute as expected.

import multiprocessing

lock = multiprocessing.Lock()
def dummy():
    def log_results_l1(results):
        lock.acquire()
        print("Writing results", results)
        lock.release()

    def mp_execute_instance_l1(cmd):
        print(cmd)
        return cmd

    cmds = [x for x in range(10)]

    pool = multiprocessing.Pool(processes=8)

    for c in cmds:
        pool.apply_async(mp_execute_instance_l1, args=(c, ), callback=log_results_l1)

    pool.close()
    pool.join()
    print("done")


dummy()

But it does work if the functions are not nested. What is going on?

Answer

multiprocessing.Pool methods like the apply* and map* methods have to pickle both the function and the arguments. Functions are pickled by their qualified name; essentially, on unpickling, the other process needs to be able to import the module they were defined in and do a getattr call to find the function in question. Nested functions aren't available by name outside the function they were defined in, so pickling fails. When you move the functions to global scope, you fix this, which is why it works when you do that.
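A minimal sketch of that fix, with both functions moved to module (global) scope so they can be pickled by qualified name. The `error_callback` argument is an addition not present in the original question's code: `apply_async` swallows worker exceptions silently unless you call `.get()` on the result or pass an error callback, which is why the nested version appeared to do nothing rather than raise an error.

```python
import multiprocessing

lock = multiprocessing.Lock()

# Defined at module scope so the pool can pickle them by qualified name.
def log_results_l1(results):
    lock.acquire()
    print("Writing results", results)
    lock.release()

def mp_execute_instance_l1(cmd):
    print(cmd)
    return cmd

def dummy():
    cmds = [x for x in range(10)]
    pool = multiprocessing.Pool(processes=8)
    for c in cmds:
        # error_callback surfaces worker exceptions that apply_async
        # would otherwise swallow silently.
        pool.apply_async(mp_execute_instance_l1, args=(c,),
                         callback=log_results_l1,
                         error_callback=print)
    pool.close()
    pool.join()
    print("done")

if __name__ == "__main__":
    dummy()
```

With the functions at module level, each worker process can re-import them and the callbacks fire as expected.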
