Creating dictionary from large Pyspark dataframe showing OutOfMemoryError: Java heap space

Problem description

I have seen and tried many existing StackOverflow posts on this issue, but none of them work. I guess my Java heap space is not large enough for my dataset, which contains 6.5M rows. My Linux instance has 64 GB of RAM and 4 cores. According to this suggestion I need to fix my code, but I think building a dictionary from a PySpark dataframe should not be very costly. Please advise if there is another way to compute it.

I just want to build a Python dictionary from my PySpark dataframe. This is the content of the dataframe:

property_sql_df.show() shows:

+--------------+------------+--------------------+--------------------+
|            id|country_code|                name|    hash_of_cc_pn_li|
+--------------+------------+--------------------+--------------------+
|  BOND-9129450|          US|Scotron Home w/Ga...|90cb0946cf4139e12...|
|  BOND-1742850|          US|Sited in the Mead...|d5c301f00e9966483...|
|  BOND-3211356|          US|NEW LISTING - Com...|811fa26e240d726ec...|
|  BOND-7630290|          US|EC277- 9 Bedroom ...|d5c301f00e9966483...|
|  BOND-7175508|          US|East Hampton Retr...|90cb0946cf4139e12...|
+--------------+------------+--------------------+--------------------+

What I want is a dictionary with hash_of_cc_pn_li as the key and a list of ids as the value.

Expected output

{
  "90cb0946cf4139e12": ["BOND-9129450", "BOND-7175508"],
  "d5c301f00e9966483": ["BOND-1742850", "BOND-7630290"]
}

What I have tried so far

Way 1: causes java.lang.OutOfMemoryError: Java heap space

%%time
duplicate_property_list = {}
# collect() materialises all 6.5M rows on the driver at once
for ind in property_sql_df.collect():
    hashed_value = ind.hash_of_cc_pn_li
    property_id = ind.id
    if hashed_value in duplicate_property_list:
        duplicate_property_list[hashed_value].append(property_id)
    else:
        duplicate_property_list[hashed_value] = [property_id]
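
If the driver has enough memory for the resulting dictionary but not for a full collect(), a lighter variant of Way 1 is to stream rows with toLocalIterator(), which fetches one partition at a time instead of materialising all 6.5M rows at once. A minimal sketch (the dictionary itself still has to fit on the driver):

duplicate_property_list = {}
# toLocalIterator() brings one partition at a time to the driver,
# instead of collect()'s all-at-once materialisation
for row in property_sql_df.select("id", "hash_of_cc_pn_li").toLocalIterator():
    duplicate_property_list.setdefault(row.hash_of_cc_pn_li, []).append(row.id)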

Way 2: does not work because PySpark has no native OFFSET

%%time
i = 0
limit = 1000000
for offset in range(0, total_record,limit):
    i = i + 1
    if i != 1:
        offset = offset + 1
        
    duplicate_property_list = {}
    duplicate_properties = {}
    
    # Preparing dataframe
    url = '''select id, hash_of_cc_pn_li from properties_df LIMIT {} OFFSET {}'''.format(limit,offset)  
    properties_sql_df = spark.sql(url)
    
    # Grouping dataset
    rows = properties_sql_df.groupBy("hash_of_cc_pn_li").agg(F.collect_set("id").alias("ids")).collect()
    duplicate_property_list = { row.hash_of_cc_pn_li: row.ids for row in rows }
    
    # Filter the dictionary to keep only elements where the duplicate count is >= 2
    duplicate_properties = filterTheDict(duplicate_property_list, lambda elem : len(elem[1]) >=2)
    
    # Writing to file
    with open('duplicate_detected/duplicate_property_list_all_'+str(i)+'.json', 'w') as fp:
        json.dump(duplicate_properties, fp)
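
Since Spark SQL has no native OFFSET, the pagination in Way 2 could instead be emulated by attaching a row number with a window function and filtering on ranges of it. A rough sketch of that idea, reusing total_record and the properties_df view from above (base_df, indexed_df and page_df are just illustrative names; note that a window without partitionBy funnels every row through a single partition, so it is slow on large data):

from pyspark.sql import functions as F
from pyspark.sql.window import Window

# Attach a 1-based row number so LIMIT/OFFSET can be emulated with a range filter
base_df = spark.sql("select id, hash_of_cc_pn_li from properties_df")
indexed_df = base_df.withColumn("rn", F.row_number().over(Window.orderBy("id")))

limit = 1000000
for i, offset in enumerate(range(0, total_record, limit), start=1):
    page_df = indexed_df.filter(
        (F.col("rn") > offset) & (F.col("rn") <= offset + limit)
    )
    # ... group, filter and write each page as in Way 2 ...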

What I now get on the console:

java.lang.OutOfMemoryError: Java heap space

and in the Jupyter notebook output:

ERROR:py4j.java_gateway:An error occurred while trying to connect to the Java server (127.0.0.1:33097)

Here is the follow-up question that I asked:

Recommended answer

Why not keep as much data and processing in the executors, rather than collecting to the driver? If I understand this correctly, you could use pyspark transformations and aggregations and save directly to JSON, thereby leveraging the executors, then load that JSON output (likely partitioned) back into Python as a dictionary. Admittedly, you introduce IO overhead, but this should let you get around the OOM heap-space errors. Step by step:

import pyspark.sql.functions as f
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
data = [
    ("BOND-9129450", "90cb"),
    ("BOND-1742850", "d5c3"),
    ("BOND-3211356", "811f"),
    ("BOND-7630290", "d5c3"),
    ("BOND-7175508", "90cb"),
]
df = spark.createDataFrame(data, ["id", "hash_of_cc_pn_li"])

df.groupBy(
    f.col("hash_of_cc_pn_li"),
).agg(
    f.collect_set("id").alias("id")  # use f.collect_list() here if you're not interested in deduplication of BOND-XXXXX values
).write.json("./test.json")

Check the output path:

ls -l ./test.json

-rw-r--r-- 1 jovyan users  0 Jul 27 08:29 part-00000-1fb900a1-c624-4379-a652-8e5b9dee8651-c000.json
-rw-r--r-- 1 jovyan users 50 Jul 27 08:29 part-00039-1fb900a1-c624-4379-a652-8e5b9dee8651-c000.json
-rw-r--r-- 1 jovyan users 65 Jul 27 08:29 part-00043-1fb900a1-c624-4379-a652-8e5b9dee8651-c000.json
-rw-r--r-- 1 jovyan users 65 Jul 27 08:29 part-00159-1fb900a1-c624-4379-a652-8e5b9dee8651-c000.json
-rw-r--r-- 1 jovyan users  0 Jul 27 08:29 _SUCCESS

Loading into Python as a dict:

import json
from glob import glob

data = []
for file_name in glob('./test.json/*.json'):
    # Spark writes JSON Lines: one record per line, and some part files may be empty
    with open(file_name) as f:
        for line in f:
            if line.strip():
                data.append(json.loads(line))

Finally:

{item['hash_of_cc_pn_li']:item['id'] for item in data}

{'d5c3': ['BOND-7630290', 'BOND-1742850'],
 '811f': ['BOND-3211356'],
 '90cb': ['BOND-9129450', 'BOND-7175508']}
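
If, as in Way 2, only hashes that map to at least two ids are of interest, that filter can either be pushed onto the executors before writing or applied to the final dictionary. A small sketch of both options (it assumes the final comprehension above has been assigned to a variable named result, and ./duplicates.json is just an example path):

# Spark side: keep only hashes with two or more ids before writing
df.groupBy("hash_of_cc_pn_li").agg(
    f.collect_set("id").alias("id")
).filter(f.size("id") >= 2).write.json("./duplicates.json")

# Python side: equivalent filter on the loaded dictionary
duplicates = {k: v for k, v in result.items() if len(v) >= 2}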

I hope this helps! Thank you for the good question!
