How to perform join on MySQL (JDBC) with Spark?
Question
I would like to read data from MySQL through Spark. The API I have seen can read data from a specific table, something like:
val prop = new java.util.Properties
prop.setProperty("user", "<username>")
prop.setProperty("password", "<password>")
sparkSession.read.jdbc("jdbc:mysql://????:3306/???", "some-table", prop)
Now I would like to run a query that joins tables. Does anyone know how to do this on the database side, rather than with Spark SQL?
Thanks
Answer
You'll need to pass the join as a subquery in the "table" argument:
val table = "(SELECT * FROM foo JOIN bar ON foo.id = bar.id) AS t"
spark.read.jdbc("jdbc:mysql://????:3306/???", table, prop)
Note that giving your subquery an alias is important; without it, this won't work.
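The idea above can be sketched as a small helper that builds the parenthesized, aliased subquery Spark expects. The table names `foo`, `bar`, the key column `id`, and the connection details are placeholders, not values from the original question; the commented-out reads show how the string would be used with a live SparkSession and a reachable MySQL instance:

```scala
object JdbcJoinExample {
  // Build the parenthesized, aliased subquery that Spark's `table`
  // argument requires. MySQL executes the join; Spark only sees the
  // result set. The alias ("t") is mandatory.
  def joinSubquery(left: String, right: String, key: String): String =
    s"(SELECT * FROM $left JOIN $right ON $left.$key = $right.$key) AS t"

  def main(args: Array[String]): Unit = {
    val table = joinSubquery("foo", "bar", "id")
    println(table)

    // With a real SparkSession `spark` and JDBC properties `prop`:
    // val df = spark.read.jdbc("jdbc:mysql://host:3306/db", table, prop)
    //
    // Since Spark 2.4 the same database-side join can also be expressed
    // with the `query` option, which does not need the alias:
    // val df = spark.read.format("jdbc")
    //   .option("url", "jdbc:mysql://host:3306/db")
    //   .option("query", "SELECT * FROM foo JOIN bar ON foo.id = bar.id")
    //   .option("user", "<username>")
    //   .option("password", "<password>")
    //   .load()
  }
}
```

Either way, the whole join is pushed down to MySQL, so Spark never has to pull both tables and join them itself.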