Requirements for converting Spark dataframe to Pandas/R dataframe

Problem description

I'm running Spark on Hadoop's YARN. How does this conversion work? Does a collect() take place before the conversion?

Also, do I need to install Python and R on every slave node for the conversion to work? I'm struggling to find documentation on this.

Recommended answer

Data has to be collected before the local data frame is created. For example, the toPandas method looks as follows:

def toPandas(self):
    import pandas as pd
    # collect() pulls every row back to the driver; the pandas DataFrame is built there
    return pd.DataFrame.from_records(self.collect(), columns=self.columns)
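For illustration, here is a minimal usage sketch (written against the SparkSession API from Spark 2.0+; on 1.x you would build the DataFrame through a SQLContext instead). The data and column names below are made up:

from pyspark.sql import SparkSession

# Start (or reuse) a session on the driver.
spark = SparkSession.builder.appName("toPandas-demo").getOrCreate()

# A tiny distributed DataFrame; in practice this would come from your YARN job.
sdf = spark.createDataFrame([(1, "a"), (2, "b"), (3, "c")], ["id", "label"])

# toPandas() runs collect() under the hood, so every row is pulled back
# to the driver and the result must fit in the driver's memory.
pdf = sdf.toPandas()
print(type(pdf))   # <class 'pandas.core.frame.DataFrame'>

spark.stop()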

You need Python, ideally with all of its dependencies, installed on each node.
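The answer does not say how to point Spark at a particular interpreter, but one common approach (an assumption on my part, not part of the original answer) is to set the PYSPARK_PYTHON environment variable before the SparkContext or SparkSession is created, so the executors launched on the YARN nodes use the interpreter installed there:

import os

# Assumption: python3 is installed at this path on every worker node.
# PYSPARK_PYTHON must be set before the SparkContext/SparkSession starts.
os.environ["PYSPARK_PYTHON"] = "/usr/bin/python3"

from pyspark.sql import SparkSession
spark = SparkSession.builder.getOrCreate()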

The SparkR counterpart (as.data.frame) is simply an alias for collect.

To summarize: in both cases the data is collected to the driver node and converted to a local data structure (pandas.DataFrame in Python and base::data.frame in R, respectively).
