How to share pandas DataFrame object between processes?


Problem description

This question makes the same point as the link I posted before:

(Is there any good way to avoid memory deep copy or to reduce time spent in multiprocessing?)

I'm getting nowhere with that, because I ran into the 'DataFrame' object sharing problem.

I have simplified the example code.

If any professional can revise my code to share the 'DataFrame' object between processes without Manager.list, Manager.dict, or numpy sharedmem, I would greatly appreciate it.

Here is the code.

# -*- coding: UTF-8 -*-
import pandas as pd
import numpy as np
from multiprocessing import *
import multiprocessing.sharedctypes as sharedctypes
import ctypes

def add_new_derived_column(shared_df_obj):
    shared_df_obj.value['new_column'] = shared_df_obj.value['A'] + shared_df_obj.value['B'] / 2
    print(shared_df_obj.value.head())
    '''
    "new_column" Generated!!!

          A         B  new_column
0 -0.545815 -0.179209   -0.635419
1  0.654273 -2.015285   -0.353370
2  0.865932 -0.943028    0.394418
3 -0.850136  0.464778   -0.617747
4 -1.077967 -1.127802   -1.641868
    '''

if __name__ == "__main__":

    dataframe = pd.DataFrame(np.random.randn(100000, 2), columns=['A', 'B'])

    # to share the DataFrame object, I use sharedctypes.RawValue
    shared_df_obj = sharedctypes.RawValue(ctypes.py_object, dataframe)

    # then I pass the "shared_df_obj" to the multiprocessing.Process object
    process=Process(target=add_new_derived_column, args=(shared_df_obj,))
    process.start()
    process.join()

    print(shared_df_obj.value.head())
    '''
    "new_column" disappeared.
    the DataFrame object isn't shared.

          A         B
0 -0.545815 -0.179209
1  0.654273 -2.015285
2  0.865932 -0.943028
3 -0.850136  0.464778
4 -1.077967 -1.127802
    '''

Recommended answer

You can use a Namespace Manager; the following code works as you expect.

# -*- coding: UTF-8 -*-
import pandas as pd
import numpy as np
from multiprocessing import Process, Manager

def add_new_derived_column(ns):
    # reading ns.df hands the worker its own copy of the DataFrame
    dataframe2 = ns.df
    dataframe2['new_column'] = dataframe2['A'] + dataframe2['B'] / 2
    print(dataframe2.head())
    # assign back so the modified DataFrame is sent to the manager process
    ns.df = dataframe2

if __name__ == "__main__":

    mgr = Manager()
    ns = mgr.Namespace()

    dataframe = pd.DataFrame(np.random.randn(100000, 2), columns=['A', 'B'])
    ns.df = dataframe
    print(dataframe.head())

    # pass the Namespace proxy "ns" to the multiprocessing.Process object
    process=Process(target=add_new_derived_column, args=(ns,))
    process.start()
    process.join()

    print(ns.df.head())
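
Note that the Namespace proxy does not give zero-copy sharing: reading ns.df pickles the DataFrame and hands the worker its own copy, which is why the result has to be assigned back with ns.df = dataframe2 before the parent can see it. For a small frame this is cheap, but the serialization cost grows with the data.

If you need the child to write into the same memory the parent reads, one standard-library option (Python 3.8+, not part of the answer above) is multiprocessing.shared_memory. The following is only a minimal sketch under the assumption that all columns are numeric and the result column is pre-allocated; the function and variable names are chosen for illustration.

import numpy as np
import pandas as pd
from multiprocessing import Process
from multiprocessing.shared_memory import SharedMemory

def add_new_derived_column(shm_name, shape, dtype):
    # attach to the block created by the parent and write into it in place
    shm = SharedMemory(name=shm_name)
    arr = np.ndarray(shape, dtype=dtype, buffer=shm.buf)
    arr[:, 2] = arr[:, 0] + arr[:, 1] / 2   # new_column = A + B / 2, as in the question
    del arr                                  # release the view before detaching
    shm.close()

if __name__ == "__main__":
    data = np.random.randn(100000, 3)        # third column reserved for the result
    shm = SharedMemory(create=True, size=data.nbytes)
    shared = np.ndarray(data.shape, dtype=data.dtype, buffer=shm.buf)
    shared[:] = data

    process = Process(target=add_new_derived_column,
                      args=(shm.name, shared.shape, shared.dtype))
    process.start()
    process.join()

    # copy the result out of the shared block before releasing it
    print(pd.DataFrame(shared.copy(), columns=['A', 'B', 'new_column']).head())

    del shared
    shm.close()
    shm.unlink()

The trade-off is that the worker operates on the raw NumPy buffer rather than on a DataFrame, so this only helps when the frame is homogeneous numeric data.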

