Convert RDD into Dataframe in pyspark


Question

I am trying to convert my RDD into a Dataframe in pyspark.

My RDD:

[(['abc', '1,2'], 0), (['def', '4,6,7'], 1)]

I want the RDD in the form of a Dataframe:

Index  Name  Number
0      abc   [1,2]
1      def   [4,6,7]

I have tried:

rd2=rd.map(lambda x,y: (y, x[0] , x[1]) ).toDF(["Index", "Name" , "Number"])

But I am getting an error:

An error occurred while calling z:org.apache.spark.api.python.PythonRDD.runJob.
: org.apache.spark.SparkException: Job aborted due to stage failure:
Task 0 in stage 62.0 failed 1 times, most recent failure: Lost task 0.0
in stage 62.0 (TID 88, localhost, executor driver):
org.apache.spark.api.python.PythonException: Traceback (most recent call last):

Can you let me know where I am going wrong?

Update:

rd2=rd.map(lambda x: (x[1], x[0][0] , x[0][1]))

I now have the RDD in the form:

[(0, 'abc', '1,2'), (1, 'def', '4,6,7')]

To convert it to a Dataframe:

rd2.toDF(["Index", "Name" , "Number"])

It still gives me the error:

An error occurred while calling o2271.showString.
: java.lang.IllegalStateException: SparkContext has been shutdown
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2021)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2050)


Answer

RDD.map takes a unary function:

rdd.map(lambda x: (x[1], x[0][0], x[0][1])).toDF(["Index", "Name", "Number"])

so you cannot pass a binary (two-argument) one.
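To see the difference: map calls its function with each element as a single argument, so the two-argument lambda from the first attempt fails on the workers with a TypeError, which surfaces as the PythonException above. A minimal sketch in plain Python (the names pair, unary and binary are just for illustration):

pair = (['abc', '1,2'], 0)  # one element of the RDD

# map applies the function to the whole element:
unary = lambda x: (x[1], x[0][0], x[0][1])
print(unary(pair))  # (0, 'abc', '1,2')

# the failing version expects two arguments, but map supplies only one:
binary = lambda x, y: (y, x[0], x[1])
# binary(pair)  # TypeError: <lambda>() missing 1 required positional argument: 'y'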

If you also want to split the Number string into an array:

rdd.map(lambda x: (x[1], x[0][0], x[0][1].split(","))).toDF(["Index", "Name", "Number"])
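Putting it together, a minimal end-to-end sketch (assuming an active SparkSession named spark; since your second error says the SparkContext has been shut down, you may need to restart your session before running it):

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
rdd = spark.sparkContext.parallelize([(['abc', '1,2'], 0), (['def', '4,6,7'], 1)])

# unary lambda: unpack the (list, index) tuple and split the number string
df = rdd.map(lambda x: (x[1], x[0][0], x[0][1].split(","))).toDF(["Index", "Name", "Number"])
df.show()

which should print something like:

+-----+----+---------+
|Index|Name|   Number|
+-----+----+---------+
|    0| abc|   [1, 2]|
|    1| def|[4, 6, 7]|
+-----+----+---------+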
