Create DataFrame from list of tuples using pyspark
Question
I am working with data extracted from SFDC using the simple-salesforce package. I am using Python 3 for scripting and Spark 1.5.2.
I created an RDD containing the following data:
[('Id', 'a0w1a0000003xB1A'), ('PackSize', 1.0), ('Name', 'A')]
[('Id', 'a0w1a0000003xAAI'), ('PackSize', 1.0), ('Name', 'B')]
[('Id', 'a0w1a00000xB3AAI'), ('PackSize', 30.0), ('Name', 'C')]
...
This data is in an RDD called v_rdd.
My schema looks like this:
StructType(List(StructField(Id,StringType,true),StructField(PackSize,StringType,true),StructField(Name,StringType,true)))
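In code, that schema corresponds to something like the following (assuming the standard pyspark.sql.types imports):
from pyspark.sql.types import StructType, StructField, StringType

schema = StructType([
    StructField("Id", StringType(), True),
    StructField("PackSize", StringType(), True),
    StructField("Name", StringType(), True)
])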
I am trying to create a DataFrame out of this RDD:
sqlDataFrame = sqlContext.createDataFrame(v_rdd, schema)
I print my DataFrame:
sqlDataFrame.printSchema()
and get the following:
+--------------------+--------------------+--------------------+
| Id| PackSize| Name|
+--------------------+--------------------+--------------------+
|[Ljava.lang.Objec...|[Ljava.lang.Objec...|[Ljava.lang.Objec...|
|[Ljava.lang.Objec...|[Ljava.lang.Objec...|[Ljava.lang.Objec...|
|[Ljava.lang.Objec...|[Ljava.lang.Objec...|[Ljava.lang.Objec...|
I am expecting to see the actual data, like this:
+----------------+--------+----+
|              Id|PackSize|Name|
+----------------+--------+----+
|a0w1a0000003xB1A|     1.0|   A|
|a0w1a0000003xAAI|     1.0|   B|
|a0w1a00000xB3AAI|    30.0|   C|
Can you please help me identify what I am doing wrong here?
My Python script is long and I am not sure it would be convenient for people to sift through it, so I have posted only the parts I am having trouble with.
Thanks a ton in advance!
Answer
Hey, next time could you please provide a working example? That would make this easier.
The way your RDD is laid out is the problem: each element is a list of (field, value) tuples rather than a flat row, so Spark ends up treating every column value as an array of objects, which is why you see [Ljava.lang.Objec... in the output. This is how you create a DataFrame according to the Spark documentation:
>>> l = [('Alice', 1)]
>>> sqlContext.createDataFrame(l).collect()
[Row(_1=u'Alice', _2=1)]
>>> sqlContext.createDataFrame(l, ['name', 'age']).collect()
[Row(name=u'Alice', age=1)]
So, for your example, you can create the desired output like this:
from pyspark.sql.types import StructType, StructField, StringType

# Your data at the moment
data = sc.parallelize([
    [('Id', 'a0w1a0000003xB1A'), ('PackSize', 1.0), ('Name', 'A')],
    [('Id', 'a0w1a0000003xAAI'), ('PackSize', 1.0), ('Name', 'B')],
    [('Id', 'a0w1a00000xB3AAI'), ('PackSize', 30.0), ('Name', 'C')]
])

# Keep only the values, producing one flat tuple per record
data_converted = data.map(lambda x: (x[0][1], x[1][1], x[2][1]))

# Define the schema
schema = StructType([
    StructField("Id", StringType(), True),
    StructField("Packsize", StringType(), True),
    StructField("Name", StringType(), True)
])

# Create the DataFrame
DF = sqlContext.createDataFrame(data_converted, schema)

# Output
DF.show()
+----------------+--------+----+
| Id|Packsize|Name|
+----------------+--------+----+
|a0w1a0000003xB1A| 1.0| A|
|a0w1a0000003xAAI| 1.0| B|
|a0w1a00000xB3AAI| 30.0| C|
+----------------+--------+----+
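Applied to your original v_rdd, the same map works. If you would rather not depend on the order of the tuples inside each record, you can go through a dict first; here is a minimal sketch, assuming each element of v_rdd is one list of (field, value) tuples as shown above and reusing the schema defined earlier:
# Convert each record's (field, value) tuples into a dict,
# then pull the values out by name instead of by position.
rows = v_rdd.map(lambda rec: dict(rec)) \
            .map(lambda d: (d['Id'], str(d['PackSize']), d['Name']))  # str() so the value matches the StringType field

sqlDataFrame = sqlContext.createDataFrame(rows, schema)
sqlDataFrame.show()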
Hope this helps.