Save and append a file in HDFS using PySpark
Question
I have a data frame in PySpark called df. I have registered this df as a temp table as shown below.
from datetime import datetime

df.registerTempTable('mytempTable')
date = datetime.now().strftime('%Y-%m-%d %H:%M:%S')
Now from this temp table I will get certain values, like the min and max of a column id.
min_id = sqlContext.sql("select nvl(min(id),0) as minval from mytempTable").collect()[0].asDict()['minval']
max_id = sqlContext.sql("select nvl(max(id),0) as maxval from mytempTable").collect()[0].asDict()['maxval']
Now I will collect all these values like below.
test = ("{},{},{}".format(date,min_id,max_id))
I find that test is not a data frame but a str:
>>> type(test)
<type 'str'>
Now I want to save this test as a file in HDFS. I would also like to append data to the same file in hdfs. How can I do that using PySpark?
FYI, I am using Spark 1.6 and don't have access to the Databricks spark-csv package.
Answer
Here you go, you'll just need to concat your data with concat_ws and write it out as text:
query = """select concat_ws(',', date, nvl(min(id), 0), nvl(max(id), 0))
from mytempTable"""
sqlContext.sql(query).write("text").mode("append").save("/tmp/fooo")
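Note that "append" here works at the directory level: each run adds new part-* files under /tmp/fooo rather than appending lines to one physical file, so read the whole directory back to see all appended rows.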
Or, a better option:
from pyspark.sql import functions as f

(sqlContext
    .table("mytempTable")
    .select(f.concat_ws(",", f.first(f.lit(date)), f.min("id"), f.max("id")))
    .coalesce(1)
    .write.format("text").mode("append").save("/tmp/fooo"))
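If you have already built the Python string test as in the question, a minimal alternative sketch (not from the original answer, and assuming the same sqlContext and the date/min_id/max_id variables defined above) is to wrap it in a single-row DataFrame and append it the same way:

from pyspark.sql import Row

# Sketch: wrap the comma-separated string in a one-row, one-column DataFrame
line = Row(value="{},{},{}".format(date, min_id, max_id))
(sqlContext
    .createDataFrame([line])
    .coalesce(1)
    .write.format("text").mode("append").save("/tmp/fooo"))

The text data source expects exactly one string column, which is why the whole line is packed into a single value field before saving.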