How to concatenate/append multiple Spark dataframes column wise in Pyspark?
Problem Description
How do I do the equivalent of pandas' pd.concat([df1, df2], axis='columns') with PySpark dataframes? I googled and couldn't find a good solution.
DF1

var1
3
4
5

DF2

var2  var3
23    31
44    45
52    53

Expected output dataframe:

var1  var2  var3
3     23    31
4     44    45
5     52    53
Edited to include the expected output.
Answer
Below is an example of what you want to do, but in Scala; I hope you can convert it to PySpark.
import org.apache.spark.sql.{Row, SparkSession}
import org.apache.spark.sql.types.StructType

val spark = SparkSession
  .builder()
  .master("local")
  .appName("ParquetAppendMode")
  .getOrCreate()

import spark.implicits._

val df1 = spark.sparkContext.parallelize(Seq(
  (1, "abc"),
  (2, "def"),
  (3, "hij")
)).toDF("id", "name")

val df2 = spark.sparkContext.parallelize(Seq(
  (19, "x"),
  (29, "y"),
  (39, "z")
)).toDF("age", "address")

// Merge the two schemas, then zip the underlying RDDs row by row.
// Note: rdd.zip requires both RDDs to have the same number of
// partitions and the same number of elements per partition.
val schema = StructType(df1.schema.fields ++ df2.schema.fields)
val df1df2 = df1.rdd.zip(df2.rdd).map {
  case (rowLeft, rowRight) => Row.fromSeq(rowLeft.toSeq ++ rowRight.toSeq)
}
spark.createDataFrame(df1df2, schema).show()
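For reference, a rough PySpark translation of the zip-based approach might look like the sketch below. The dataframes mirror the question's sample data, the app name "ConcatColumns" is just a placeholder, and rdd.zip still assumes both dataframes have the same number of partitions and rows per partition:

from pyspark.sql import SparkSession
from pyspark.sql.types import StructType

spark = SparkSession.builder.master("local").appName("ConcatColumns").getOrCreate()

# Sample data from the question above
df1 = spark.createDataFrame([(3,), (4,), (5,)], ["var1"])
df2 = spark.createDataFrame([(23, 31), (44, 45), (52, 53)], ["var2", "var3"])

# Merge the two schemas, then zip the underlying RDDs row by row.
schema = StructType(df1.schema.fields + df2.schema.fields)
zipped = df1.rdd.zip(df2.rdd).map(lambda pair: tuple(pair[0]) + tuple(pair[1]))
spark.createDataFrame(zipped, schema).show()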
This is how you do it using only the DataFrame API:
import org.apache.spark.sql.functions._

// Tag each dataframe with a generated row id, join on it, then drop
// the helper column. The ids are increasing but not necessarily
// consecutive, so this relies on both dataframes sharing partitioning.
val ddf1 = df1.withColumn("row_id", monotonically_increasing_id())
val ddf2 = df2.withColumn("row_id", monotonically_increasing_id())
val result = ddf1.join(ddf2, Seq("row_id")).drop("row_id")
result.show()
Add a new column row_id to each dataframe, then join the two dataframes using row_id as the key.
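The row_id variant translates to PySpark almost line for line; a minimal sketch, reusing df1 and df2 from the sketch above:

from pyspark.sql.functions import monotonically_increasing_id

# Tag each dataframe with a generated row id, join on it, then drop it.
ddf1 = df1.withColumn("row_id", monotonically_increasing_id())
ddf2 = df2.withColumn("row_id", monotonically_increasing_id())
result = ddf1.join(ddf2, "row_id").drop("row_id")
result.show()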
Hope this helps!