Joining two dataframes without a common column

Problem Description

I have two dataframes which have different sets of columns, and I need to join them. Please refer to the example below.

val df1 has
Customer_name 
Customer_phone
Customer_age

val df2 has
Order_name
Order_ID

These two dataframes don't have any common column, and the number of rows and the number of columns also differ between them. I tried to insert a new dummy column holding an increasing row_index value, as below: val dfr = df1.withColumn("row_index", monotonically_increasing_id()).

But as I am using Spark 2, the monotonically_increasing_id method is not supported. Is there any way to join the two dataframes, so that I can put the values of both dataframes into a single sheet of an Excel file?

For example:

val df1:
Customer_name  Customer_phone  Customer_age
karti          9685684551      24
raja           8595456552      22

val df2:
Order_name  Order_ID
watch       1
cattoy      2

My final excel sheet should look like this:

Customer_name  Customer_phone  Customer_age  Order_name  Order_ID
karti          9685684551      24            watch       1
raja           8595456552      22            cattoy      2

Answer

monotonically_increasing_id() is monotonically increasing and unique, but not consecutive: the generated IDs depend on how the data is partitioned, so matching rows in two independently partitioned DataFrames are not guaranteed to receive the same ID, and the IDs cannot serve as a join key.
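
To see why (a minimal sketch; the exact IDs depend entirely on the partition layout, so the numbers in the comments are only illustrative):

import org.apache.spark.sql.functions.monotonically_increasing_id

// IDs are unique and increasing, but jump between partitions.
val sample = spark.range(4).repartition(2)
  .withColumn("mid", monotonically_increasing_id())
sample.show(false)
// Typically prints ids such as 0, 1, 8589934592, 8589934593:
// consecutive within a partition, with large gaps between partitions.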

You can instead use zipWithIndex, by converting each DataFrame to an RDD and rebuilding the DataFrame with its original schema extended by an index column:

import org.apache.spark.sql.Row
import org.apache.spark.sql.types.{LongType, StructField, StructType}
import spark.implicits._


val df1 = Seq(
  ("karti", "9685684551", 24),
  ("raja", "8595456552", 22)
).toDF("Customer_name", "Customer_phone", "Customer_age")


val df2 = Seq(
  ("watch", 1),
  ("cattoy", 2)
).toDF("Order_name", "Order_ID")

// Append each row's zipWithIndex index and rebuild the DataFrame,
// extending the original schema with a non-nullable "index" column.
val df11 = spark.createDataFrame(
  df1.rdd.zipWithIndex.map {
    case (row, index) => Row.fromSeq(row.toSeq :+ index)
  },
  StructType(df1.schema.fields :+ StructField("index", LongType, false))
)


// Do the same for the second DataFrame.
val df22 = spark.createDataFrame(
  df2.rdd.zipWithIndex.map {
    case (row, index) => Row.fromSeq(row.toSeq :+ index)
  },
  StructType(df2.schema.fields :+ StructField("index", LongType, false))
)

Now join the two DataFrames on the index column and drop it:

val result = df11.join(df22, Seq("index")).drop("index")
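
To display it, show(false) prints the rows without truncating column values (result is the name assigned above):

result.show(false)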

Output:

+-------------+--------------+------------+----------+--------+
|Customer_name|Customer_phone|Customer_age|Order_name|Order_ID|
+-------------+--------------+------------+----------+--------+
|karti        |9685684551    |24          |watch     |1       |
|raja         |8595456552    |22          |cattoy    |2       |
+-------------+--------------+------------+----------+--------+
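
Spark has no built-in Excel writer, so to open the joined result as a single sheet, one simple route (a sketch; the output path output/customer_orders is hypothetical, and this assumes a CSV file opened in Excel is acceptable) is to collapse the result to one partition and write a headered CSV:

// Write a single CSV part file that Excel can open directly.
result.coalesce(1)
  .write
  .option("header", "true")
  .csv("output/customer_orders")

Alternatively, a third-party connector such as the spark-excel package can write .xlsx files directly, at the cost of an extra dependency.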
