Pyspark performance: dataframe.collect() is very slow


Problem description

When I try to run collect() on a dataframe, it seems to take too long.

I want to collect the data from a dataframe to transform it into a dictionary and insert it into DocumentDB. But performance is very slow when day_rows.collect() is executed:

day_rows = self._sc.sql("select * from table")

rows_collect = []

# count() and collect() each trigger a full execution of the query
if day_rows.count():
    rows_collect = day_rows.collect()

results = map(lambda row: row.asDict(), rows_collect)

Why is the performance so slow?

Recommended answer

Cache your dataframe before .collect(). Because the code above runs two actions (count() and then collect()), Spark otherwise executes the full query plan twice; caching lets the second action reuse the materialized result and can improve performance by orders of magnitude.

df.persist() or df.cache()

Once you are done with the usage, you can always unpersist.
