Join two ordinary RDDs with/without Spark SQL
Question
I need to join two ordinary RDDs on one or more columns. Logically, this operation is equivalent to a database join of two tables. I wonder if this is possible only through Spark SQL, or whether there are other ways of doing it.
As a concrete example, consider RDD r1 with primary key ITEM_ID:
(ITEM_ID, ITEM_NAME, ITEM_UNIT, COMPANY_ID)
and RDD r2 with primary key COMPANY_ID:
(COMPANY_ID, COMPANY_NAME, COMPANY_CITY)
I would like to join r1 and r2. How can this be done?
Answer
Soumya Simanta gave a good answer. However, the values in the joined RDD are Iterable, so the results may not be very similar to an ordinary table join.
Alternatively, you can key each RDD by the join column and use RDD.join directly, which pairs matching rows one-to-one.
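As a minimal, self-contained setup for the snippet below (the Item and Company case classes and the sample rows are inferred from the printed output; they are not part of the original answer, and sc is the SparkContext predefined in the spark-shell):

case class Item(itemId: Int, itemName: String, itemUnit: Int, companyId: String)
case class Company(companyId: String, companyName: String, companyCity: String)

val items = sc.parallelize(Seq(
  Item(1, "first", 2, "c1"),
  Item(2, "second", 2, "c1"),
  Item(3, "third", 2, "c2")))
val companies = sc.parallelize(Seq(
  Company("c1", "company-1", "city-1"),
  Company("c2", "company-2", "city-2")))

With that in place, the join itself is: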
// Key each RDD by the join column (companyId)
val mappedItems = items.map(item => (item.companyId, item))
val mappedComp = companies.map(comp => (comp.companyId, comp))
// join yields one (key, (item, company)) pair per matching combination
mappedItems.join(mappedComp).take(10).foreach(println)
The output is:
(c1,(Item(1,first,2,c1),Company(c1,company-1,city-1)))
(c1,(Item(2,second,2,c1),Company(c1,company-1,city-1)))
(c2,(Item(3,third,2,c2),Company(c2,company-2,city-2)))
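For comparison, here is a sketch of the Iterable behaviour mentioned above (assuming the referenced answer used cogroup, which is what produces Iterable values; the printed form is approximate):

// cogroup collects all matching values per key into Iterables:
// the element type is (key, (Iterable[Item], Iterable[Company]))
mappedItems.cogroup(mappedComp).take(10).foreach(println)
// prints roughly:
// (c1,(CompactBuffer(Item(1,first,2,c1), Item(2,second,2,c1)),CompactBuffer(Company(c1,company-1,city-1))))
// (c2,(CompactBuffer(Item(3,third,2,c2)),CompactBuffer(Company(c2,company-2,city-2))))

Flattening those Iterables back into per-row pairs is exactly what RDD.join does for you.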
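As for the "with Spark SQL" half of the question: the same join can be written with the DataFrame API. A sketch assuming Spark 2.x and the case classes defined above (the shared column name companyId must match in both frames):

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("rdd-join").getOrCreate()
import spark.implicits._

// Convert the RDDs of case classes to DataFrames
val itemsDF = items.toDF()
val companiesDF = companies.toDF()

// An equi-join on the shared column; each match becomes one flat row,
// just like an ordinary database join
itemsDF.join(companiesDF, "companyId").show()

Alternatively, register both DataFrames as temporary views with createOrReplaceTempView and write the join as a SQL string via spark.sql(...).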