Creating dictionary from large Pyspark dataframe showing OutOfMemoryError: Java heap space
Problem description
I have seen and tried many existing StackOverflow posts about this issue, but none of them work. I guess my Java heap space is not large enough for my dataset, which contains 6.5M rows. My Linux instance has 64 GB of RAM and 4 cores. As per this suggestion I need to fix my code, but I think building a dictionary from a PySpark dataframe should not be very costly. Please advise me if there is another way to compute it.
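For reference, one common cause of this error is that the driver heap defaults to a small value regardless of machine RAM. A minimal sketch of raising it (the 32g figure is only an illustration, and spark.driver.memory only takes effect if it is set before the driver JVM launches, e.g. here or via spark-submit):

from pyspark.sql import SparkSession

# Sketch: create the session with a larger driver heap before any collect() runs.
spark = (
    SparkSession.builder
    .appName("duplicate-detection")           # hypothetical app name
    .config("spark.driver.memory", "32g")     # assumed value for a 64 GB machine
    .getOrCreate()
)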
I just want to make a Python dictionary from my PySpark dataframe. This is the content of my dataframe:
property_sql_df.show()
shows,
+--------------+------------+--------------------+--------------------+
| id|country_code| name| hash_of_cc_pn_li|
+--------------+------------+--------------------+--------------------+
| BOND-9129450| US|Scotron Home w/Ga...|90cb0946cf4139e12...|
| BOND-1742850| US|Sited in the Mead...|d5c301f00e9966483...|
| BOND-3211356| US|NEW LISTING - Com...|811fa26e240d726ec...|
| BOND-7630290| US|EC277- 9 Bedroom ...|d5c301f00e9966483...|
| BOND-7175508| US|East Hampton Retr...|90cb0946cf4139e12...|
+--------------+------------+--------------------+--------------------+
What I want is to make a dictionary with hash_of_cc_pn_li as the key and the ids as a list value.
Expected output
{
    "90cb0946cf4139e12": ["BOND-9129450", "BOND-7175508"],
    "d5c301f00e9966483": ["BOND-1742850", "BOND-7630290"]
}
What I have tried so far
Way 1: causing java.lang.OutOfMemoryError: Java heap space
%%time
duplicate_property_list = {}
for ind in property_sql_df.collect():
    hashed_value = ind.hash_of_cc_pn_li
    property_id = ind.id
    if hashed_value in duplicate_property_list:
        duplicate_property_list[hashed_value].append(property_id)
    else:
        duplicate_property_list[hashed_value] = [property_id]
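A variant of Way 1 I have not benchmarked (a sketch only): streaming rows with toLocalIterator() instead of collect(), so rows arrive at the driver one partition at a time, although the resulting dictionary itself still has to fit in driver memory:

# Sketch: same dictionary build as Way 1, but without materializing all rows at once.
duplicate_property_list = {}
for ind in property_sql_df.toLocalIterator():
    duplicate_property_list.setdefault(ind.hash_of_cc_pn_li, []).append(ind.id)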
Way 2: Not working because of missing native OFFSET on pyspark
%%time
i = 0
limit = 1000000
for offset in range(0, total_record, limit):
    i = i + 1
    if i != 1:
        offset = offset + 1
    duplicate_property_list = {}
    duplicate_properties = {}
    # Preparing dataframe
    url = '''select id, hash_of_cc_pn_li from properties_df LIMIT {} OFFSET {}'''.format(limit, offset)
    properties_sql_df = spark.sql(url)
    # Grouping dataset
    rows = properties_sql_df.groupBy("hash_of_cc_pn_li").agg(F.collect_set("id").alias("ids")).collect()
    duplicate_property_list = { row.hash_of_cc_pn_li: row.ids for row in rows }
    # Filter the dictionary to keep elements only where the duplicate count is >= 2
    duplicate_properties = filterTheDict(duplicate_property_list, lambda elem: len(elem[1]) >= 2)
    # Writing to file
    with open('duplicate_detected/duplicate_property_list_all_' + str(i) + '.json', 'w') as fp:
        json.dump(duplicate_property_list, fp)
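For comparison, a sketch that keeps the grouping and the duplicate filter inside Spark and writes the result with Spark's own JSON writer, so nothing is collect()-ed to the driver (the output lands in a directory of part files rather than a single JSON file):

import pyspark.sql.functions as F

# Sketch: aggregate ids per hash, keep only hashes with 2+ ids, write out as JSON.
(property_sql_df
    .groupBy("hash_of_cc_pn_li")
    .agg(F.collect_set("id").alias("ids"))
    .where(F.size("ids") >= 2)
    .write.mode("overwrite")
    .json("duplicate_detected/"))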
What I get on the console now:
java.lang.OutOfMemoryError: Java heap space
and on the Jupyter notebook output:
ERROR:py4j.java_gateway:An error occurred while trying to connect to the Java server (127.0.0.1:33097)