Joining Spark dataframes on the key


Problem Description

I have constructed two dataframes. How can we join multiple Spark dataframes?

For example:

PersonDf and ProfileDf share a common column, personId, as the key. Now how can we have one dataframe combining PersonDf and ProfileDf?

Recommended Answer

Alias approach using Scala (this example is for an older version of Spark; for Spark 2.x, see my other answer):

You can use case classes to prepare a sample dataset. This is optional: for example, you can get the DataFrames from hiveContext.sql as well.

import org.apache.spark.sql.functions.col

case class Person(name: String, age: Int, personid: Int)

case class Profile(name: String, personid: Int, profileDescription: String)

val df1 = sqlContext.createDataFrame(
  Person("Bindu", 20, 2)
    :: Person("Raphel", 25, 5)
    :: Person("Ram", 40, 9) :: Nil)

val df2 = sqlContext.createDataFrame(
  Profile("Spark", 2, "SparkSQLMaster")
    :: Profile("Spark", 5, "SparkGuru")
    :: Profile("Spark", 9, "DevHunter") :: Nil)

// You can alias the DataFrames so that columns can be referenced by
// qualified names, which improves readability.
val df_asPerson = df1.as("dfperson")
val df_asProfile = df2.as("dfprofile")

val joined_df = df_asPerson.join(
  df_asProfile,
  col("dfperson.personid") === col("dfprofile.personid"),
  "inner")

joined_df.select(
  col("dfperson.name"),
  col("dfperson.age"),
  col("dfprofile.name"),
  col("dfprofile.profileDescription"))
  .show()
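
For the sample data above, the inner join should produce output roughly like this (row order may vary):

+------+---+-----+------------------+
|  name|age| name|profileDescription|
+------+---+-----+------------------+
| Bindu| 20|Spark|    SparkSQLMaster|
|Raphel| 25|Spark|         SparkGuru|
|   Ram| 40|Spark|         DevHunter|
+------+---+-----+------------------+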

Sample temp-table approach, which I personally don't like...

The reason to use the registerTempTable(tableName) method on a DataFrame is that, in addition to the methods Spark provides on a DataFrame, you can also issue SQL queries via the sqlContext.sql(sqlQuery) method that use that DataFrame as an SQL table. The tableName parameter specifies the table name to use for that DataFrame in the SQL queries.

df_asPerson.registerTempTable("dfperson")
df_asProfile.registerTempTable("dfprofile")

sqlContext.sql("""SELECT dfperson.name, dfperson.age, dfprofile.profileDescription
                  FROM dfperson JOIN dfprofile
                  ON dfperson.personid = dfprofile.personid""")

If you want to know more about joins, please see this nice post: beyond-traditional-join-with-apache-spark

Note:

1) As mentioned by @RaphaelRoth, val resultDf = PersonDf.join(ProfileDf, Seq("personId")) is a good approach, since it doesn't produce duplicate key columns from both sides when you use an inner join on the same key (see the sketch after this note).

2) A Spark 2.x example is updated in another answer, with the full set of join operations supported by Spark 2.x, with examples and results.
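
A minimal sketch of the Seq-based join, reusing the df1 and df2 defined above. Note the case-class column is personid; Spark's column resolution is case-insensitive by default, so Seq("personId") would resolve as well:

// Joining on a sequence of column names keeps a single personid column
// in the result instead of one copy from each side.
val resultDf = df1.join(df2, Seq("personid"))
resultDf.show()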

Tip:

Also, an important thing in joins: the broadcast function can help give the optimizer a hint; see my answer, and the sketch below.
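
A minimal sketch of a broadcast-hint join, reusing the aliased DataFrames from above; broadcast comes from org.apache.spark.sql.functions:

import org.apache.spark.sql.functions.broadcast

// Hint that df_asProfile is small enough to be shipped to every executor,
// so Spark can avoid shuffling the larger side of the join.
val broadcastJoinDf = df_asPerson.join(
  broadcast(df_asProfile),
  col("dfperson.personid") === col("dfprofile.personid"),
  "inner")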
