Dynamically join two spark-scala dataframes on multiple columns without hardcoding join conditions


Problem description

I would like to join two spark-scala dataframes on multiple columns dynamically. I want to avoid hardcoding the column name comparisons, as in the following statement:

val joinRes = df1.join(df2, df1("col1") === df2("col1") && df1("col2") === df2("col2"))

A solution already exists for the PySpark version, provided in the following link: PySpark DataFrame - Join on multiple columns dynamically

I would like to write the same logic in spark-scala.

Recommended answer

In Scala you do it in a similar way to Python, but you need to use the map and reduce functions:

import org.apache.spark.sql.SparkSession

val sparkSession = SparkSession.builder().getOrCreate()
import sparkSession.implicits._

val df1 = List(("a", "b"), ("b", "c"), ("c", "d")).toDF("col1", "col2")
val df2 = List(("1", "2"), ("2", "c"), ("3", "4")).toDF("col1", "col2")

val columnsdf1 = df1.columns
val columnsdf2 = df2.columns

// Pair the columns positionally, build one equality expression per pair,
// then combine all of them with &&
val joinExprs = columnsdf1
  .zip(columnsdf2)
  .map { case (c1, c2) => df1(c1) === df2(c2) }
  .reduce(_ && _)

val dfJoinRes = df1.join(df2, joinExprs)
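As a side note (not part of the original answer): when the two dataframes happen to share exactly the same join column names, as in this example, Spark's `join` overload that takes a `Seq` of column names is a simpler sketch of the same idea, and it also keeps only a single copy of each join column in the result:

```scala
// Alternative when the join columns have identical names on both sides:
// passing the names as a Seq avoids duplicate col1/col2 columns in the output
val dfJoinRes2 = df1.join(df2, df1.columns.toSeq)
```

With the expression-based `joinExprs` approach above, by contrast, the result contains both `df1`'s and `df2`'s copies of each column, which you may need to drop or rename afterwards.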
