Apache Spark: JDBC connection not working


Question


I have asked this question previously as well but did not get any answer (Not able to connect to postgres using jdbc in pyspark shell: http://stackoverflow.com/questions/29669420/not-able-to-connect-to-postgres-using-jdbc-in-pyspark-shell).

I have successfully installed Spark 1.3.0 on my local Windows machine and ran sample programs in the pyspark shell to test it.

Now, I want to run correlations from MLlib on data stored in PostgreSQL, but I am not able to connect to PostgreSQL.

I have successfully added the required jar (and tested this jar) to the classpath by running:

pyspark --jars "C:\path\to\jar\postgresql-9.2-1002.jdbc3.jar"

I can see that the jar is successfully added on the Environment page of the Spark UI.

When I run the following in the pyspark shell:

from pyspark.sql import SQLContext
sqlContext = SQLContext(sc)
df = sqlContext.load(source="jdbc", url="jdbc:postgresql://[host]/[dbname]", dbtable="[schema.table]")

I get this error:

>>> df = sqlContext.load(source="jdbc",url="jdbc:postgresql://[host]/[dbname]", dbtable="[schema.table]")
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\Users\ACERNEW3\Desktop\Spark\spark-1.3.0-bin-hadoop2.4\python\pyspark\sql\context.py", line 482, in load
    df = self._ssql_ctx.load(source, joptions)
  File "C:\Users\ACERNEW3\Desktop\Spark\spark-1.3.0-bin-hadoop2.4\python\lib\py4j-0.8.2.1-src.zip\py4j\java_gateway.py", line 538, in __call__
  File "C:\Users\ACERNEW3\Desktop\Spark\spark-1.3.0-bin-hadoop2.4\python\lib\py4j-0.8.2.1-src.zip\py4j\protocol.py", line 300, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling o20.load.
: java.sql.SQLException: No suitable driver found for jdbc:postgresql://[host]/[dbname]
        at java.sql.DriverManager.getConnection(DriverManager.java:602)
        at java.sql.DriverManager.getConnection(DriverManager.java:207)
        at org.apache.spark.sql.jdbc.JDBCRDD$.resolveTable(JDBCRDD.scala:94)
        at org.apache.spark.sql.jdbc.JDBCRelation.<init>(JDBCRelation.scala:125)
        at org.apache.spark.sql.jdbc.DefaultSource.createRelation(JDBCRelation.scala:114)
        at org.apache.spark.sql.sources.ResolvedDataSource$.apply(ddl.scala:290)
        at org.apache.spark.sql.SQLContext.load(SQLContext.scala:679)
        at org.apache.spark.sql.SQLContext.load(SQLContext.scala:667)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:231)
        at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:379)
        at py4j.Gateway.invoke(Gateway.java:259)
        at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:133)
        at py4j.commands.CallCommand.execute(CallCommand.java:79)
        at py4j.GatewayConnection.run(GatewayConnection.java:207)
        at java.lang.Thread.run(Thread.java:619)

Solution

I had this exact problem with MySQL/MariaDB, and got a big clue from this question: http://stackoverflow.com/questions/30904182/pyspark-no-suitable-driver-found-for-jdbcmysql-dbhost

So your pyspark command should be:

pyspark --conf spark.executor.extraClassPath=<jdbc.jar> --driver-class-path <jdbc.jar> --jars <jdbc.jar> --master <master-URL>
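
For example, with the PostgreSQL driver jar from the question, the invocation might look like this (the jar path and master URL are illustrative; adjust them to your setup):

pyspark --conf spark.executor.extraClassPath=C:\path\to\jar\postgresql-9.2-1002.jdbc3.jar --driver-class-path C:\path\to\jar\postgresql-9.2-1002.jdbc3.jar --jars C:\path\to\jar\postgresql-9.2-1002.jdbc3.jar --master local[*]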

Also watch for errors when pyspark starts, like "Warning: Local jar ... does not exist, skipping." and "ERROR SparkContext: Jar not found at ...". These probably mean you spelled the path wrong.
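
Once the shell starts cleanly, the load call from the question should be able to find the driver. A minimal sketch (assuming Spark 1.3's jdbc source, which as far as I know also accepts an explicit driver option naming the driver class):

from pyspark.sql import SQLContext

sqlContext = SQLContext(sc)  # sc is the SparkContext that the pyspark shell provides
# Naming the driver class explicitly can help when DriverManager does not discover it on its own
df = sqlContext.load(source="jdbc",
                     url="jdbc:postgresql://[host]/[dbname]",
                     dbtable="[schema.table]",
                     driver="org.postgresql.Driver")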
